Trends, change points & hypotheses

by Judith Curry

Jonathan Leake asks in the Sunday Times: “Why has it warmed so much less than the IPCC predicted?”

The article provides a good overview of the debate. Some summary excerpts:

Is it really true that global temperatures have not risen since 1997?

The simple answer is: they have risen, but not by very much. “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period,” said the Met Office. In layman’s terms that is 51 thousandths of a degree.

One [dataset], held at the National Climate Data Centre (NCDC), run by America’s National Oceanic and Atmospheric Administration, suggests that global temperatures rose by an average of 0.074C since 1997. That’s small, too — but it is another rise.

A third and very different data set is overseen by John Christy. . . “From 1997-2011 our data show a global temperature rise of 0.15C,” he said. 

Overall, then, the world has got slightly warmer since 1997. Perhaps the real question is: why has it warmed so much less than was predicted by the climate models?  

For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.

Some scientists appear to be warning we will fry, while other sources fear we will freeze.

How we interpret the 20th century temperature data has implications for how we project future temperature variability and change.

Climate trend statistics and graphs

So, how should we analyze the recent time series of global or local temperature? Various blog posts have attempted to instruct us on this matter:
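The most basic of those analyses is an ordinary least-squares trend fit, which can be sketched in a few lines. The anomaly values below are invented placeholders for illustration, not any of the datasets quoted above:

```python
# Minimal sketch: an ordinary least-squares (OLS) trend through an annual
# temperature-anomaly series, reported in degrees C per decade.
# The anomaly values are hypothetical placeholders, not real observations.
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope of anomalies vs. years, converted to C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

years = np.arange(1997, 2012)
anomalies = np.array([0.22, 0.40, 0.19, 0.17, 0.28, 0.32, 0.31, 0.28,
                      0.34, 0.30, 0.29, 0.23, 0.31, 0.35, 0.27])  # hypothetical
print(f"trend: {decadal_trend(years, anomalies):+.3f} C/decade")
```

Of course, as the debate below makes clear, the slope alone says little; the choice of start year and the uncertainty of the estimate matter just as much.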

Josh encapsulates all this with a cartoon.

An argument for change-point analysis and analysis of partial time series is provided by Raymond Sneyers:  Climate Chaotic Instability: Statistical Determination and Theoretical Background [sneyers environometrics].

Abstract.  The paper concerns the determination of statistical climate properties, a problem especially important for climate prediction validation. After a brief review of the time series analyses applied to secular series of observations, an appropriate method is described for characterizing these properties, which finally reduces itself to the search for existing change-points. The examples of the Jones Northern Hemisphere land temperature averages (1856–1995) and of the Prague Klementinum ones (1771–1993) are given and results discussed. Relating the observed chaotic character of the climatological series to the non-linearity of the equations ruling the weather and thus climate evolution, and presenting the example of a solution of the Lorenz non-linear equations showing that non-linearity may be responsible for the instability of the generated process, it seems justified to conclude that there are severe limits to climate predictability at all scales.
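The kind of change-point determination Sneyers describes can be illustrated with the non-parametric Pettitt statistic, one common choice for detecting a single shift in a series (an illustration of the idea, not necessarily his exact procedure):

```python
# Sketch of non-parametric change-point detection via the Pettitt statistic:
# U_t compares values before and after each candidate split; the split with
# the largest |U_t| is the most likely single change point.
import numpy as np

def pettitt_change_point(series):
    """Return (t, U_t): the series splits into series[:t] and series[t:]."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    u = np.array([np.sign(x[:t, None] - x[None, t:]).sum()
                  for t in range(1, n)])
    t = int(np.argmax(np.abs(u))) + 1
    return t, u[t - 1]
```

For a series with a genuine mean shift the statistic peaks sharply at the shift; for a trendless noisy series |U_t| stays small relative to its significance threshold, and no change point is declared.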

Three competing hypotheses  

Consider the following three hypotheses that explain 20th century climate variability and change, with implied future projections:

I.  IPCC AGW hypothesis:  20th century climate variability/change is explained by external forcing, with natural internal variability providing high-frequency ‘noise’.  In the latter half of the 20th century, this external forcing has been dominated by anthropogenic gases and aerosols.  The implication for temperature change in the 21st century is 0.2C per decade until 2050. Challenges: providing convincing explanations of the 1910–1940 warming, the flat trend between the mid-1940s and mid-1970s, and the flat trend of the past 15 years.

II. Multi-decadal oscillations plus trend hypothesis:  20th century climate variability/change is explained by the large multidecadal oscillations (e.g. NAO, PDO, AMO) with a superimposed trend of external forcing (AGW warming).  The implication for temperature change in the 21st century is relatively constant temperatures for the next several decades, or possibly cooling associated with solar activity.  Challenges: separating forced from unforced changes in the observed time series, and the lack of predictability of the multidecadal oscillations.

III. Climate shifts hypothesis: 20th century climate variability/change is explained by synchronized chaos arising from nonlinear oscillations of the coupled ocean/atmosphere system plus external forcing (e.g. Tsonis, Douglass).  The most recent shift occurred in 2001/2002, characterized by flattening temperatures and more frequent La Niñas.  The implication for the next several decades is that the current trend will continue until the next climate shift, at some unknown point in the future.  External forcing (AGW, solar) will have more or less impact on trends depending on the regime, but how external forcing materializes in terms of surface temperature in the context of spatiotemporal chaos is not known. Note: hypothesis III is consistent with Sneyers’ arguments re change-point analysis.  Challenges: figuring out the timing (and characteristics) of the next climate shift.

There are other hypotheses, but these three seem to cover most of the territory.  The three hypotheses are not independent, but emphasize to varying degrees natural internal variability versus external forcing, and an interpretation of natural variability as oscillatory versus phase-locked shifts.  Hypothesis I derives from the 1D energy balance, thermodynamic view of the climate system, whereas Hypothesis III derives from a nonlinear dynamical system characterized by spatiotemporal chaos.  Hypothesis II derives from climate diagnostics and data analysis.

Each of these three hypotheses provides a different interpretation of the 20th century attribution and has different implications for 21st century climate.   Hypothesis III is the hypothesis that I find most convincing, from a theoretical perspective and in terms of explaining historical observations, although this kind of perspective of the climate system is in its infancy.

Cherry picking data, or testing alternative hypotheses?

Back to the issue of cherry picking data, and interpreting the temperature time series for the past two decades.

Is the first decade+ of the 21st century the warmest in the past 100 years (as per Peter Gleick’s argument)?  Yes, but the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4.  In terms of anticipating temperature change in the coming decades, the AGW dominated prediction of 0.2C/decade does not seem like a good bet, particularly with the prospect of reduced solar radiation.

Has there been any warming since 1997 (Jonathan Leake’s question)?  There has been slight warming during the past 15 years.  Is it “cherry picking” to start a trend analysis at 1998?  No, not if you are looking for a long period of time where there is little or no warming, in an effort to refute Hypothesis I.

In terms of projecting what might happen in coming decades, Hypothesis III is the best bet IMO, although it is difficult to know when the next change point might occur.  Hypothesis III implies using 2002 as the starting point for analysis of the recent trend.

And finally, looking at global average temperatures makes sense in context of Hypothesis I, but isn’t very useful in terms of Hypothesis III.

And none of this data analysis is very satisfying or definitive owing to deficiencies in the data sets, particularly over the ocean.

IMO, the standard 1D energy balance model of the Earth’s climate system will provide little in the way of further insights; rather, we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models, and explore the complexity of a coupled nonlinear climate system characterized by spatiotemporal chaos.

1,022 responses to “Trends, change points & hypotheses”

  1. Very good post. Sensible. Thanks.

    • Above, in this comment, Judith asks me:

      I don’t see how this statement by Leake is misleading:

      “For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

      Previously I focused on what is misleading about the headline.

      Now Judith asks me about an extract. What’s misleading is the bit I have put in bold. As we’ve seen already, even in AR1 the predictions cited above conclude with the explicit statement:

      The rise will not be steady because of other factors.

      Leake’s articles in climate are nearly always highly misleading. Your extract is no exception. Leake explicitly suggests the IPCC proposed temperatures would rise steadily. The IPCC, on the other hand, explicitly says the opposite.

      It bothers me, frankly, that you don’t acknowledge how misleading Leake is in his presentation. Decadal scale variation is an important question worth looking at; and Leake makes a total hash of it.

      • Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20. What is the problem here?

      • The problem is that Leake said the rise would be steady, and the IPCC said it would NOT be steady.

      • Having explicitly told you what was misleading in the extract you provided from Leake, let’s go on to what is misleading in your own comments, Judith.

        You say:

        Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming.

        First, citation please. Where is this prediction for two decades? Do you mean the 1992 supplement cited earlier in this thread with a prediction to 2025? That’s more than 30 years. Or something else? WHICH report?

        Second. What’s this “without warming”? Even Leake doesn’t make that mistake. The issue is 15 years with a small trend, not with no warming at all.

      • Chris, this is my last response to you. As I cited earlier

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

      • Yes, this is simply wrong.

        Judith you do yourself no credit by going along with this nonsense.

      • Markus Fitzhenry.

        “Michael | February 7, 2012 at 10:43 pm |
        Yes, this is simply wrong.
        Judith you do yourself no credit by going along with this nonsense.”

        Michael really means, you do yourself no credit because you’re not on our side anymore.

      • I haven’t reviewed all the AR4 statements, but I tend to agree with Anteros that the issue isn’t enormously important, although I think Chris Ho-Stuart is technically correct. Interdecadal variability is a well recognized climate reality. If a 0.2C/decade average out to 2050 was predicted, that wouldn’t preclude a lack of warming in some particular decade. For the separate AR4 2007 claim of 0.2C/decade for “the next two decades”, am I wrong in interpreting that to mean the interval from 2007 to 2027? Clearly, we haven’t proceeded far enough into that interval to judge that prediction.

      • Chris, this is my last response to you. As I cited earlier

        Judith, that would be very disappointing. I aim to be robust, but fair; and I’d really like to help you get a better support for the discussions here. (SkyDragon; sorry I have been slow on that!)

        I’d like you to reconsider the above please… but I won’t refrain from challenging what I think is misleading or wrong. I would hope that isn’t the problem! Take me seriously, please.

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

        Good! That’s a citation we can use, going to AR4, published in 2007.

        Problem is… that ISN’T going to help with the slow down over the last 15 years. You said earlier:

        Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20. What is the problem here?

        But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.

        Don’t just run away from that. Talk to me. I won’t pull punches where I think you are wrong, but I’m not going to just dismiss or insult you.

        I said in another comment that I expect the recent lull to change and warming to accelerate somewhat. That’s not just my prediction. That’s based in part on an expected change in conditions that impacted the earlier 15 years. TSI is likely to go up. ENSO is likely to start warming things up a bit faster. If this doesn’t happen, then I’ll seriously have to review my position.

        It’s really confusing following this when there are different reports being cited all over the place. When you spoke of predictions bearing upon the 15 years just past, I was pretty sure you must have been referring to reports from before that lull.

        The standard conclusion, as I have always understood it, is that warming is not steady, and that lulls over a decade or more are common. I expect — in line with the AR4 extract you are citing — that the recent lull is going to show up as a lull, with stronger warming before and after. A rise of 0.2 C/decade (or better, of between 0.15 and 0.3) over the next two decades (from 2007, if you like!) sounds pretty sensible to me.

        Summary.
        (1) Leake’s initial question is misleading. IPCC predictions recognize the existence of changes in the pace of warming over those time scales.
        (2) Leake’s claim of IPCC meaning “steady rise” is flatly contradicted by the actual IPCC report.
        (3) The expectation in AR4 (2007) of warming over the coming two decades is not falsified by slower warming over the period before it was published.

        Cheers — Chris

      • Chris, I think part of the problem is that the AR4 used the same scenario chart as the TAR.
        http://i122.photobucket.com/albums/o252/captdallas2/ipccscenarioschartsmilie.png

        The smilie is off a couple years, but it sure looks like about 0.4 by 2020. I think this boils down to the IPCC not updating their documents.

      • Chris:

        “Good! That’s a citation we can use, going to AR4, published in 2007.

        Problem is… that ISN’T going to help with the slow down over the last 15 years. You said earlier:

        Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20. What is the problem here?

        But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.”

        The projection of .2C per decade is based on the scenarios. The scenarios used projected forcing from at least the year 2001 going forward. The SRES were published in Nov of 2000.

        So, if we want to compare the projection of .2C to observations, 2001 is probably the most defensible starting point. Since 2001 the observations fall outside a 95% confidence interval for a .2C projection. That can happen for a variety of reasons.

        1. Some of the models run too hot. For example, the mean estimate for sensitivity is 3C; more than half of the models have sensitivity higher than this.

        2. Emissions did not track with A1B projections, or other forcings did not track with projections.

        3. Rare events happen, and on shorter time scales they are more likely to occur.

        Finally, there is no hard and fast minimum number of years required to reject the projection. There is simply a probability that one can calculate. For example, if observations ran 10C cooler after 5 years, or 10C warmer after 5 years, we would be right to conclude that something was amiss with the models.

        The problem is that there hasn’t been enough attention paid to WHY the models are not tracking observations more accurately. For example, the new NCAR model looks to be even more out of whack with observations.

        A review of AR4 projections indicates that there is a case to be made for models running too hot. That possibility hasn’t been addressed, investigated, or eliminated. Preliminary AR5 results are still running too hot in TLT, as Santer recently showed at AGU 2011.
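The consistency check described above can be sketched as follows. This is a minimal illustration: the anomaly values are invented placeholders, and the naive normal-approximation confidence interval ignores autocorrelation in the residuals, which in practice widens it:

```python
# Sketch: OLS trend since 2001 with a naive 95% confidence interval, compared
# against a 0.2 C/decade projection. The anomalies are hypothetical
# placeholders; a real test would also account for autocorrelation.
import numpy as np

def trend_with_ci(years, anomalies, z=1.96):
    """Return (trend, lo, hi) in C/decade using a normal-approximation CI."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    n = len(years)
    slope, intercept = np.polyfit(years, anomalies, 1)
    residuals = anomalies - (slope * years + intercept)
    s2 = (residuals @ residuals) / (n - 2)          # residual variance
    se = np.sqrt(s2 / ((years - years.mean()) ** 2).sum())  # SE of slope
    return 10 * slope, 10 * (slope - z * se), 10 * (slope + z * se)

years = np.arange(2001, 2012)
anomalies = np.array([0.40, 0.46, 0.42, 0.43, 0.48, 0.42,
                      0.40, 0.31, 0.44, 0.47, 0.34])  # hypothetical
trend, lo, hi = trend_with_ci(years, anomalies)
print(f"trend {trend:+.3f} C/decade, 95% CI [{lo:+.3f}, {hi:+.3f}]")
print("0.2 C/decade inside CI:", lo <= 0.2 <= hi)
```

Whether 0.2 C/decade falls inside or outside the interval depends on the data used and on how the interval is constructed, which is exactly why the choice of start date and error model is contested in this thread.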

      • Latimer Alder

        @chris ho-stuart

        Stop digging, matey. You’re in a big enough hole already without shouting and screaming to bring more attention to it.

      • steven mosher

        Fred:

        “For the separate AR4 2007 claim of 0/2C/decade for “the next two decades”, am I wrong in interpreting that to mean the interval from 2007 to 2027? Clearly, we haven’t proceeded far enough into that interval to judge that prediction.”

        The closest thing to a near term prediction is

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-3-1.html

        That means we can wait until 2030 to see if the models run in the year 2000 were correct. Until then, why trust them?

        The other point is that the model results published in 2007 actually start in 2001, so that’s the date from which you want to start your test.

        All that said, if 2012 turns out to be 10C cooler than 2001, what would you conclude about the models? Would you say that it’s too early to render a judgement? 5C cooler? 1C cooler?

        The bottom line is this. The models were run with historical forcing up to 2000 and projected forcing after that. Second, differences in projected forcing don’t change outputs until 20+ years down the line. The models clearly can’t exhibit any behavior they want in those 0-20 years and still hit the projections that close 30 years out. At all times along the process (0-20 years) we can surely note whether the models are falling above or below the observations. And we can note that the longer the models run hotter than observations, the more rapid the warming will have to be to catch up and hit the window.

      • Chris:

        “But AR4 was published in 2007! So no, there HASN’T been 15 years “without warming” (or with reduced warming) since the prediction you are citing.”

        A) See Fig SPM.5 from the reference. Model projections start 2000.

        B) See Fig SPM.5 from the reference. The trends for all of the scenarios for the period 2000-2040 are effectively linear, similar to or lower than the trend 1997-2000 that informed the model start point, and on the scale of the predicted 0.2C increase there is no variability to speak of in any of them.

        The model projections cannot be reconciled with the last 15 years of flat temps, regardless of when you start.

        To achieve the predicted 0.4C rise, we would need to see about 25 years worth of 1990’s style warming in the next five years.

        Are you a betting man?
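The arithmetic behind that claim can be checked directly. Both input numbers are assumptions for illustration: a 1990s-style warming rate of about 0.17 C/decade, and a projected 0.4 C rise still owed by 2020 given roughly flat temperatures so far:

```python
# Back-of-envelope check of the "25 years of 1990s-style warming in five
# years" claim. Both inputs are assumed values, not fitted to any dataset.
rate_1990s = 0.17 / 10   # C per year, assumed 1990s-style warming rate
remaining_rise = 0.4     # C still needed by 2020 (assumed)
years_left = 5

needed_rate = remaining_rise / years_left        # C/yr required from here on
equivalent_years = remaining_rise / rate_1990s   # years of 1990s-style warming

print(f"required rate: {needed_rate:.2f} C/yr, "
      f"i.e. about {equivalent_years:.0f} years of 1990s-style warming")
```

Under those assumptions the claim comes out at roughly 24 years of 1990s-style warming compressed into five, consistent with the "about 25 years" figure in the comment.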

      • The believers cannot stand having the tenets of their faith challenged. Chris, as we see, likes to call those who point out problems in his faith, ‘liar’.
        For AGW believers to be so allegedly obsessed with communication, it is ironic how often they retreat to simply…denying…what other people say and declaring them untruthful.

      • Steve Mosher – Ordinarily I wouldn’t add to the excessive column space already devoted to this not very important point, but since you addressed your comment to me, I’ll respond.

        Chris Ho-Stuart is correct in criticizing Leake’s claim that the warming has been less than predicted. The various IPCC curves that have been cited are all drawn with the understanding that decade to decade variation from them is something to be expected – the curves are smooth simply because there’s no way of knowing which decades will vary from the projected mean and in which direction. Not to belabor the point, but predictions of a 0.2C/decade average out to mid-century can’t be invalidated by data from the first decade of the century. Predictions of 0.2C per decade for a specified two decades (“the next two decades”) can be invalidated by data from 2007 to 2027, but not by anything that hasn’t gone beyond early 2012. The fact that projected curves were drawn starting in 2001 isn’t a test of “the next two decades”.

        You are right that if 2012 is 10 degrees colder than 2001, the models will be in trouble. And so will the rest of us. The same will be true if it’s 10 degrees hotter.

        The most important point I wanted to make was in the first sentence. This is inconsequential stuff, as is obvious if one looks at the last 100 years rather than the last 10 to 15. Quibbling about it seems to me to be more about scoring points than understanding what is going on now, or will in the future. With that in mind, I’ll try to refrain from getting caught up in the arguing if these points continue to generate further comments, and to respond only if something new and important is added to the discussion.

      • Fred,

        “Chris Ho-Stuart is correct in criticizing Leake’s claim that the warming has been less than predicted. The various IPCC curves that have been cited are all drawn with the understanding that decade to decade variation from them is something to be expected – the curves are smooth simply because there’s no way of knowing which decades will vary from the projected mean and in which direction.”

        Honestly. It’s like watching someone try to play Twister. Here’s a graph:

        http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/decadal-prediction

        Look at the confidence levels. Did they mean those? Perhaps they meant to say “don’t forget to add in all the extra extra variability that we haven’t crowbarred into our models yet.”

      • Chris Ho Stuart: The problem is that Leake said the rise would be steady, and the IPCC said it would NOT be steady.

        Did IPCC language allow for the possibility that the rate of increase might possibly be way below the predicted rate for 50 to 100 years? I put it this way because of the vagueness in the phrase “would NOT be steady”. A large number of us interpreted the IPCC language as excluding the possibility of next to no warming for a period of 15 years. Had they seriously considered what has happened as a possibility, warnings of disaster would have been less shrill.

        Thus, I think that Leake and Curry have interpreted the IPCC language fairly.

        curryja: Chris, the IPCC said 0.2C/per decade for two decades. Then there is 15 years without warming. 15 years out of 20.

        Almost for sure, had the IPCC anticipated what we have as a real possibility, they’d have written differently. For example, they might have written, “there may be 15 – 30 years of no warming before the overall warming trend resumes, and 0.2C/decade is the anticipated mean rate over the century.” But they didn’t.

        Thus, I think that Leake and Curry have interpreted the IPCC language fairly.

      • P.S.

        Here’s the updated Met Office prediction graph:
        http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc

        Keep a copy in your files.

      • James Evans – The link and graph you refer to are interesting, and I think might help readers better appreciate Chris Ho-Stuart’s conclusion that predictions can’t yet be made accurately for individual decades, which is why Leake’s criticism of the IPCC model-based curves was misguided.

        The models you refer to are very different from those cited by the IPCC. They involve the DePreSys approach to decadal climate modeling, based on the premise that better decadal forecasts will be possible if more attention is devoted to model initialization. The latter is not a major focus of the GCMs cited by the IPCC because over multiple decades, their projections converge toward the same trajectory from different initial conditions.

        The graph you linked to shows that the DePreSys attempts may be a step in the right direction, but still have a long way to go. Notice, for example, the great deviation in the hindcast due to the 1991 Pinatubo eruption, which obviously couldn’t be anticipated by better initializations.

        Basically, the point made by Chris is pretty universally understood within climate science. Average temperature anomalies can’t be expected to anticipate actual values for any single decade, and are intended to be interpreted over multiple decades. It does seem to me that a great deal of time is being wasted here arguing about that, and could be better spent on assessing the relationship between interval lengths and signal to noise ratios.
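That relationship between interval length and signal-to-noise is easy to sketch. For white noise of fixed standard deviation around a constant trend, the standard error of an OLS slope estimate falls roughly as n^(-3/2) with record length, so short records say little about the underlying trend. A toy illustration, assuming (arbitrarily) a noise level of 0.1 C:

```python
# Toy illustration: how the uncertainty of an OLS trend estimate shrinks as
# the record lengthens. Assumes white annual noise with 0.1 C standard
# deviation (an invented figure for illustration).
import numpy as np

def slope_standard_error(n_years, sigma=0.1):
    """Analytic SE of an OLS slope (in C/decade) for white noise of sd sigma."""
    years = np.arange(n_years, dtype=float)
    return 10 * sigma / np.sqrt(((years - years.mean()) ** 2).sum())

for n in (10, 15, 30, 50):
    print(f"{n:2d} years: slope SE ~ {slope_standard_error(n):.3f} C/decade")
```

With these assumptions a 10-year record has a slope uncertainty of order 0.1 C/decade, comparable to the trend being argued over, while a 30-year record pins the slope down several times more tightly; this is the quantitative core of the point about decadal predictions.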

      • Fred –

        I agree with you.

        Isn’t it noticeable that this is an observation [that a decade is harder to predict/less meaningful than longer periods] that should be made equally by people wherever they are on the climate spectrum?

        Although it is tempting, I do find it tiresome that partisans jump on the tiniest ‘trend’ as containing large amounts of ‘meaning’. It could almost be used as a test of partisanship – anybody claiming [clearly] unjustified significance could be put in a sin-bin of ‘not to be taken seriously for a month’.

        That would thin out the debate a bit.

      • The key issue here is the length of such pauses that is “allowable” by H1. The IPCC and its proponents are emphatic that the flat, cool trend from mid 1940’s to mid 1950’s is not natural variability, but anthropogenic aerosol forcing. So pauses of 10-15 years are now expected, but not pauses approaching 30 years?

      • So pauses of 10-15 years are now expected, but not pauses approaching 30 years?

        As I understand it, pauses of 30 years would be expected, albeit very rarely. Although it would certainly raise questions about the accuracy of models, the existence of such a period wouldn’t disprove AGW nor IPCC predictions of decadal averages; only those predictions that are explicit about specific 30 year time periods would be disproven.

        Why did you omit the following even though you spoke of predictions that were right next door?

        The rise will not be steady because of other factors.

      • “The IPCC and its proponents are emphatic that the flat, cool trend from mid 1940′s to mid 1950′s is not natural variability, but anthropogenic aerosol forcing.”

        Judy – I’m somewhat familiar with the IPCC reports (mostly AR4, less so for earlier ones), but I haven’t seen that claim. Could you cite the exact section and words where the IPCC emphatically attributes the temperature fluctuations between the mid-1940s and mid-1950s to anthropogenic aerosols while excluding natural variability? My own reading of the evidence is that much of the fluctuation during that interval was due to natural unforced variability from internal climate dynamics, with aerosols perhaps adding some cooling after 1950 but not necessarily a major player before 1950 nor necessarily an exclusive player from 1950 to the mid 1950s. If you could quote the exact IPCC assertion in this regard, it would be helpful.

      • There seems to be a hot-button issue with respect to whether it was implied by the IPCC and scientists that there would be no more than a “few” years of pauses, slowdowns, coolings, etc.

        I think we are conducting a proxy argument about spin that was spun in previous years, and which the original spinners, whoever they are, would now respin.

      • Fred M,

        I’ve just been reading back issues of Isaac Held’s blog and came across this gem:

        “The model results give a hint of mid-century flattening, which is typically attributed to an increase in cooling aerosols, although not as pronounced as in the GISS curve, nor exactly contemporaneous with it. Is this just part of the random variation inherent in the model runs, or can some of the flattening be attributed to changes in WMGGs?

        Reply
        Isaac Held says:
        March 29, 2011 at 9:19 am
        There is some flattening of the CO2 evolution prescribed here (between 1935-1945) which is based on Etheridge et al.”

        It adds another dimension and something I was unaware of before.

      • One gets the impression you are of the opinion that most of us lack reading skills, and therefore you feel it is your duty to tell us what is being said.

      • BillC – Thanks for quoting my question to Isaac Held. The possible CO2 “flattening” was pre-1945 and therefore largely irrelevant to the abrupt post-1945 dip before the curve flattened out between about 1950 and 1976. The reason for a reduced CO2 rate of rise was probably not due to a reduction in emission rates, but it may have reflected carbon cycle feedbacks that slightly altered the balance between atmospheric CO2 and terrestrial and oceanic sinks. Of the sinks, ENSO phenomena appear to play a significant (but transient) role in altering terrestrial CO2 uptake, but I don’t know how well that correlates with those early observations. The early 1940s were characterized by strong El Ninos that probably contributed to the spike around 1945, and whose cessation contributed to the post-1945 decline, possibly in combination with PDO changes.

      • Fred Moolten: This is inconsequential stuff, as is obvious if one looks at the last 100 years rather than the last 10 to 15.

        You ignore two important facts. (1) The model forecasts were recently made, so it is the recent record that tests the forecasts. (2) The last 10 to 15 years have the highest CO2 concentrations.

        MattsStat – Your point 1 hasn’t been ignored but has been addressed by Chris, me, and others. Please see our comments. Your point 2 hasn’t been ignored either, but doesn’t change the principle that predictions for individual decades aren’t useful at the current state of our ability to predict. The large scale interdecadal variability is apparent in the climate record of the past 100 years and is not a recent phenomenon. That too is encompassed in our earlier comments.

      • When you read through this thread of comments it seems clear that some people have blind faith in the modelling approach used by the IPCC.

        This faith is in spite of all evidence that shows the approach of averaging the results of models demonstrates that they have no particular model that can be relied upon for accuracy and the fact that none of the models outputs match observations.

        What is the justification for such faith?

        The problem here is the intentional ambiguity in the IPCC reports. They amass hundreds of pages of scientific research. They assume a basis for all this, the radiative heat absorption by CO2 (this is in their founding documents), and produce massive summaries, generally including long-term ordinary linear regression inappropriately applied to a time series, and then make a statement such as “an increase of .2 deg C/decade”. Hidden away is the caveat “The rise will not be steady because of other factors.”

        Pardon me, but you can’t hammer away at an argument that CO2 is the cause of the temperature increase for page after page and then cover your ass with a few lines here and there that “the rise will not be steady”, or the effect of clouds is poorly understood. Statements like this deserve at least as much page space as the other arguments because they point out major weaknesses that are not fully assessed.

        In this context Leake’s statement “The implication was that temperatures would rise steadily, not with 15-year gaps.” is a perfectly reasonable takeaway. That was a major implication to my mind in almost every paper I’ve read that predicted global warming. In most cases the authors seamlessly slid from solid conclusions into “this is not inconsistent with CO2 causing global warming” and some statement about rising temperatures. So don’t be too surprised when people pick up on the implied conclusions that are so forcefully expounded.
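The complaint above about ordinary linear regression being inappropriately applied to a time series can be made concrete: when the residuals are autocorrelated, as monthly temperature anomalies are, the naive OLS standard error of the trend understates the uncertainty. A minimal sketch with purely synthetic numbers (the AR(1) coefficient and noise level are illustrative assumptions, not fitted values):

```python
# Sketch: why naive OLS is a shaky tool for short temperature series.
# Autocorrelated residuals make the textbook trend standard error too small.
# All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 180                      # 15 years of monthly anomalies
t = np.arange(n) / 120.0     # time in decades

# AR(1) noise, an assumed stand-in for monthly anomaly autocorrelation
phi, noise = 0.6, np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + rng.normal(scale=0.1)

y = 0.2 * t + noise          # true underlying trend: 0.2 C / decade

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Naive OLS standard error of the slope
se_naive = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))

# Adjust via the effective sample size n * (1 - r1) / (1 + r1)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))

print(f"trend = {slope:.2f} C/decade, "
      f"naive SE = {se_naive:.3f}, adjusted SE = {se_adj:.3f}")
```

The adjusted standard error comes out noticeably larger than the naive one, which is why a 15-year trend carries far less certainty than a bare regression printout suggests.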

      • Fred Moolten: MattsStat – Your point 1 hasn’t been ignored but has been addressed by Chris, me, and others. Please see our comments. Your point 2 hasn’t been ignored either, but doesn’t change the principle that predictions for individual decades aren’t useful at the current state of our ability to predict. The large scale interdecadal variability is apparent in the climate record of the past 100 years and is not a recent phenomenon. That too is encompassed in our earlier comments.

        Those points that you restate so clearly are the particular points why looking at the last 100 years instead of focusing on the last 15 would be a misdirection. We know already that interdecadal variability is great and that predictions for individual decades are not useful. However, the prediction (scenario, hypothesis, whatever) is tested by the data that came after it was made, and the data since the prediction have diverged from the prediction more than was expected by the people who made the prediction. It is possible for the 50 year prediction to be more accurate than the 15 year prediction, but until such a potentiality has been actually demonstrated to be true, every year that the data diverge from the prediction discredits the theory on which the prediction rested.

        Your points clearly express why the sentence that I quoted was a misdirection, i.e. a bad recommendation.

      • Fred,

        What a load of blah.

        “Notice for example, the great deviation in the hindcast due to the 1991 Pinatubo eruption, which obviously couldn’t be anticipated by better initializations.”

        I can’t see anything special about that dip in temperatures. There have been many similar dips in the global temp graph. (Just look at it.) That particular dip is associated with Mt Pinatubo, because it helped dig the models out of a hole at the time. Which huge volcanic eruptions caused the other dips?

    • Oops. Sorry, located the above comment badly.

      • That’s OK. It was just what I was looking for, thank you. Ready…

        http://www.guardian.co.uk/environment/2012/feb/08/glaciers-mountains?intcmp=122

        more oops looks like.

      • Chief Hydrologist

        Is there anyone seriously trying to disprove simple radiative physics in the atmosphere? At least – no one who is taken seriously.

        So here is the 20th Century signal cleaned of the decadal signal – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=MONOTONIC.gif – it is from a Tsonis et al paper.

        The residual warming in the 50 years to 2000 was about 0.08 degree C. The IPCC is wrong – the models are wrong – because they missed this mode of internal variability without which no sense can be made of any trend.

        The whole box and dice of global warming is totally kaput – it needs a fundamental rethink. The maximum rate of warming in the 20th century was 0.08 degrees C/decade from all other factors. Can we use the 20th century natural variability to predict 21st century natural variability? I don’t think so.

        Robert I Ellison
        Chief Hydrologist

      • The report of my illness grew out of his illness. The report of my death was an exaggeration.

        Mark Twain

        It appears the report of the Himalayan glaciers’ demise has been equally exaggerated out of all proportion.

        Recent contributions of glaciers and ice caps to sea level rise Thomas Jacob, John Wahr, W. Tad Pfeffer & Sean Swenson, Nature (2012) doi:10.1038/nature10847

        The [Glaciers and ice caps] (GICs) rate for 2003–2010 is about 30 per cent smaller than the previous mass balance estimate that most closely matches our study period. The high mountains of Asia, in particular, show a mass loss of only 4 ± 20 Gt yr−1 for 2003–2010, compared with 47–55 Gt yr−1 in previously published estimates.

        Contrast 2010: Settling the science on Himalayan glaciers
        Nature Reports Climate Change doi:10.1038/climate.2010.19

        Current estimates suggest there are about 12,000 to 15,000 (glaciers) in the Himalayas and about 5,000 in the Karakoram. Of these thousands of glaciers, only 15 have been measured on the ground to see if they are gaining or losing ice overall. Despite the scarcity of data, trends are emerging. “It is pretty clear that the Himalayan glaciers have been losing mass, with markedly greater loss in the past decade than earlier,”. . .

        That was only a 1000 fold extrapolation!

        That 2012 net loss measurement of Himalayas’ glaciers is only 8% of the previous scientific evaluations. That’s > 90% error, not 90% confidence! Contrast IPCC’s claim (based on gray literature):

        “Glaciers in the Himalaya are receding faster than in any other part of the world and, if the present rate continues, the likelihood of them disappearing by the year 2035 and perhaps sooner is very high.”

        Whatever happened to IPCC as a review of the science?

        The uncertainties in the Himalayan glaciers appear to be greater than IPCC’s claimed confidence!

  2. It is simply not true that there is an “IPCC hypothesis” that is falsified by a short term (scale of a decade or so) variation that is somewhat above or below the general trend.

    Short term variations like this ARE a matter of scientific interest and hypothesis and competing ideas. They are not a matter of a clear consensus. And neither does the IPCC make strong claims or hypotheses on them — other than the statement that they ARE comparatively short term and that we expect the longer term trend to continue upwards.

    It is also a misrepresentation of what the IPCC does to speak of IPCC “hypotheses”. The IPCC is not a research body. They don’t do scientific work. They summarize it. They make statements with associated confidence levels, based on the combined work of a lot of scientists, but these are not in the form of a “hypothesis”, but a conclusion. Whether you agree with them or not, the distinction matters.

    • The post does not claim that the AGW hypothesis is falsified solely by the recent lack of warming. It merely points out two competing hypotheses. (The falsification involves other factors, in my view.) Calling it the IPCC hypothesis makes sense because their endorsement is the focus of the debate.

      • This post starts out by saying it has warmed less than the IPCC predicted. No prediction is cited. That is because it doesn’t exist.

      • IPCC AR4 prediction: 0.2C/decade during the first half of the 21st century. This prediction is cited in Leake’s article and also in my post.

      • Chris –

        If a particular prediction was required, the post could have used the prediction from the FAR that if there were few or no steps taken to limit greenhouse gases, temperatures would rise by 0.3 degrees per decade. This was predicted to mean a rise of 1 degree C by 2025.

        Of course, times have changed, but if we pretend those predictions were never made, how can we learn from them?

      • IPCC prediction: 0.2C/decade during the first half of the 21st century. This prediction is cited in Leake’s article and also in my post.

        What year is this? How many years in a century?

      • In the AR4, out to 2050

      • Judith, the prediction is not for a rise of 0.2 every decade. If you think there is a prediction, for heavens sake quote it.

        I don’t see anything which I would call a “citation” to any such prediction. A cite is more than just “the IPCC says”. You give some other cites, but not one for the alleged IPCC prediction. You AND Leake confuse the magnitude of the long term change expected with a prediction that applies to the last ten years.

        Anteros, you also. QUOTE or CITE your alleged prediction.

      • Judith –

        As an average rise per decade.

        It is legitimate to question, as the linked article does, whether:

        The implication was that temperatures would rise steadily, not with 15-year gaps.

        And if so, what the implications might be. Bring that discussion on.

        But that is a different matter, entirely, than saying that “predictions,” or even a “hypothesis” have been falsified by 12 years of 21st century data.

      • While I wrote the previous, Judith — for the first time — did give a citation. “In the AR4”. You can do better than that for such an enormous report, Judith! — but then you say “out to 2050” — which underlines the very point I am making.

        You reinforce that Leake’s original question refers to a prediction which simply does not exist.

      • I assume that the readers are familiar with the contents of the AR4 WG1 Summary for Policy Makers

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

        Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections. {1.2, 3.2}

        Model experiments show that even if all radiative forcing agents were held constant at year 2000 levels, a further warming trend would occur in the next two decades at a rate of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios. Best-estimate projections from models indicate that decadal average warming over each inhabited continent by 2030 is insensitive to the choice among SRES scenarios and is very likely to be at least twice as large as the corresponding model-estimated natural variability during the 20th century. {9.4, 10.3, 10.5, 11.2–11.7, Figure TS.29}

      • Chris –

        I vaguely assumed you would be familiar with the predictions of the IPCC FAR. They are quite memorable.

        I was quoting the numbers. BAU = 0.3 degrees per decade, leading to 1 degree by 2025. Of course, they say ‘about’ 1 degree, which is a reasonable caveat, but still, a prediction is a prediction.

        http://www.ipcc.ch/ipccreports/1992%20IPCC%20Supplement/IPCC_1990_and_1992_Assessments/English/ipcc_90_92_assessments_far_overview.pdf

      • Anteros –

        I’m sure you saw the last sentence of the paragraph in question.

        The rise will not be steady because of other factors.

        So we have “likely” for 1° with another 13 years to go, and a prediction for an unsteady rise of a certain average over a longer time frame.

        So where does that leave us?

      • Markus Fitzhenry.

        “”Joshua | February 7, 2012 at 7:47 pm |
        IPCC prediction: 0.2C/decade during the first half of the 21st century. This prediction is cited in Leake’s article and also in my post.
        What year is this? How many years in a century?””
        A1. 2012. score 100%
        A2. 100 score 100%

      • Joshua –

        Firstly I’d refer you to my comment below. I think if push comes to shove, that’s where I’ll place my vote – not enough info to say very much at all. Certainly nothing meaningful.

        However, the predictions of the FAR are dramatic, and assessing the reasonableness of dramatic predictions is very different from looking at a noisy signal over an even shorter period of time that isn’t doing very much at all, and asking what the signal is saying for itself.

        As I said, it is easier to discern trends if the signal to noise ratio is quite high. The same point is true of predictions. As the FAR predicted a very strong signal it is easier to identify the signal being other than predicted.

        Back to my initial point – I wouldn’t go anywhere so far as to say that it was a failed or wrong prediction. What I would say is that it isn’t doing very well so far. Which isn’t of course saying very much at all.

        I piped up because Chris Ho-Stuart was going on about a lack of predictions from the IPCC. Of course everybody forgets that up until 2001, they were spraying predictions around like confetti [ish..]

      • Anteros — I did not say a lack of predictions. I said no prediction for rise over the last decade. That’s because I DO know pretty well what the IPCC reports say.

        The IPCC simply does not have a prediction for short term changes like that. You’ve not shown anything other than the longer term predictions, along with explicit recognition that there are expected to be unpredictable short term variation, on the scale of decades.

        Sheesh!

      • Chris –

        I don’t think you actually read my comment.

        I was responding to this, from you –

        This post starts out by saying it has warmed less than the IPCC predicted. No prediction is cited. That is because it doesn’t exist.

        Now, I said that if a prediction was wanted the FAR could be used – as indeed it can. The last 15 years are relevant to that and I think it is entirely reasonable to say that there has been less warming than the IPCC predicted. Since 1990, or 1995 or whenever.

        It’s not a major point and I don’t think it means very much – as I say elsewhere. But it is true that there has been less warming than the IPCC predicted – I think it is unreasonable to deny it irrespective of caveats and short periods of time.

        It sounds like a desperate attempt to defend something that doesn’t need to be defended

        Why don’t you say that the IPCC changed its prediction from 0.3C per decade to 0.2C per decade in 1995 [which it did] when it realised its estimation of climate sensitivity [among other things] was too high?

      • Thanks Judith… I AM familiar with the AR4, of course; but it should still be cited properly and any predictions quoted more accurately. Leake got it wrong. Your extract confirms it.

        It is, of course, true that warming over the last decade has been less than the long term trend. There’s nothing particularly surprising about that, in the sense that we don’t have the capacity to predict at that level with any confidence.

        Understanding these short term changes in rate is an important and legitimate question. It’s a fair guess that the next few years will see a speed up in warming again. TSI is increasing and the ENSO appears to be moving back towards a push in extra heating; but that’s more of an educated guess than a strong consensus supported prediction. There are also other factors, like aerosols, which continue to be very tough to model. We’ll see.

        The main thing I wanted to underline is that Leake was distorting the nature of predictions. As is his wont, I might add.

      • I don’t see how this statement by Leake is misleading:

        “For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

        It would have been more clear to state that this projection of warming from the AR4 applied to the first two decades, such as in this statement of the IPCC

        “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.”

        but since we are talking about the period 2000-2011, there doesn’t seem to be anything misleading in Leake’s statement as far as I can see

      • Anteros, you’re still barking up the wrong tree completely. This whole thing is about a warming lull over the last decade. Leake and Judith are both using the latest IPCC reports. You should too. And you should pay attention to the recognition from all the reports that short term changes from decade to decade exist and are not predictable at present, and were not predicted back in 1992 either.

        Understanding short term variations is a good and fair open question.

      • Joshua and Chris Ho-Stuart

        The IPCC projection was fairly clear.

        In AR4(SPM) it was

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at 2000 levels, a further warming of about 0.1°C per decade would be expected.

        In the earlier TAR report IPCC was a bit less specific, projecting a range of 0.15°C to 0.3°C per decade.

        What actually happened?

        Greenhouse gases continued to rise unabated, BUT instead of a warming there was a net cooling of the globally averaged land and sea surface temperature anomaly (HadCRUT3).

        For the most recent decade, this was around -0.1°C.

        In Las Vegas, Macao, Atlantic City, Monaco or anywhere else, IPCC would have lost the bet.

        So the IPCC models made a lousy forecast.

        But the real question here is:

        If IPCC models cannot even predict the temperature of the next decade, why are we to put any confidence whatsoever in their ability to project temperatures for the next several decades – or even century?

        Answer: We should be extremely skeptical of any model-based temperature projections cited by IPCC.

        Max
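Part of the disagreement over what the last decade “did” comes down to how the decadal change is computed. Two common choices, differencing the endpoints and fitting an OLS trend, can give visibly different numbers on a noisy decade. A hedged sketch, using synthetic stand-in data rather than actual HadCRUT3 values:

```python
# Sketch: two common ways to put a number on "the change over a decade".
# The series here is synthetic (a stand-in for monthly anomalies); real
# data would be loaded from the Met Office HadCRUT files instead.
import numpy as np

rng = np.random.default_rng(1)
months = 120
t = np.arange(months) / 120.0            # one decade, in decade units
anoms = 0.05 * t + rng.normal(scale=0.12, size=months)  # weak trend + noise

# Method 1: difference between the first-year and last-year means
endpoint_change = anoms[-12:].mean() - anoms[:12].mean()

# Method 2: OLS slope, in C per decade
ols_change = np.polyfit(t, anoms, 1)[0]

print(f"endpoint change = {endpoint_change:+.2f} C, "
      f"OLS trend = {ols_change:+.2f} C/decade")
```

With real monthly anomaly data the same two computations apply unchanged; the point is only that “the change over a decade” is not a single well-defined number, so a quoted -0.1°C depends on which method (and which dataset) was used.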

      • Here are a couple of relevant changes from AR1 to AR4, I think.

        (1) Better handling of uncertainty and ranges of outcomes. AR1 mentions ranges, but expressing the prediction as a single number obscures that. AR4 does better.
        (2) CO2 forcing is less. Back in 1992, the CO2 forcing was estimated at about 6.3 W/m^2 per natural log CO2. By AR3 a more accurate value of 5.35 had been obtained, mainly from consideration of the shortwave interactions as well as longwave.
        (3) Sensitivity estimates haven’t actually changed all that much. The range has narrowed a little, but sensitivity remains something between 2 and 4.5 degrees per doubling; or about 0.5 to 1.2 degrees per W/m^2 forcing.
        (4) There are now more models. In 1992 the model results quoted probably depended overmuch on a GISS model, which then had sensitivity on the high side. (Caveat: transient response sensitivity is probably more useful than equilibrium sensitivity for looking at shorter scales and I’m not so sure of the numbers obtained in 1992.)
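The per-doubling and per-W/m² sensitivity ranges quoted in point (3) are mutually consistent once the CO2 doubling forcing is applied; a quick arithmetic check using the 5.35 W/m² per natural-log-CO2 figure from point (2):

```python
# Arithmetic check: converting climate sensitivity per CO2 doubling
# into sensitivity per W/m^2 of forcing, using F_2x = 5.35 * ln(2).
import math

F_2x = 5.35 * math.log(2)          # W/m^2 per CO2 doubling, ~3.7

low_per_doubling, high_per_doubling = 2.0, 4.5   # C per doubling
low_per_wm2 = low_per_doubling / F_2x
high_per_wm2 = high_per_doubling / F_2x

print(f"F_2x = {F_2x:.2f} W/m^2")
print(f"{low_per_wm2:.2f} to {high_per_wm2:.2f} C per W/m^2")
```

This reproduces the roughly 0.5 to 1.2 °C per W/m² range stated above.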

        Judith, Leake’s question was phrased as follows: “Why has it warmed so much less than the IPCC predicted?” That’s highly misleading, because in actual fact the IPCC did NOT predict how much it would warm over the scales Leake is considering.

        Leake also asks:

        Overall, then, the world has got slightly warmer since 1997. Perhaps the real question is: why has it warmed so much less than was predicted by the climate models?

        That also is just silly. Climate models show variations over these time scales just like the real world does. The difference is that there’s no correlation in those short term variations. One model might have a slow down from 1990 to 2000; another from 2015 to 2025. The models predict, if anything, that you are going to get unpredictable short term increases and decreases.

        Leake includes sensible quotes in his article, but he continues to make — and emphasize in his headline — the absurd implication that the models, or the IPCC, are making predictions that clash with the observed small scale slow down. That’s just flatly false.

        I repeat. Examining and explaining decadal scale changes is a perfectly good and sensible open question. The phenomenon is real. The problem is real.

        The idea that it violates an IPCC prediction is not real.

      • I don’t see how this statement by Leake is misleading: – JC

        “For the critics of climate science this is a crucial point — but why? The answer goes back to the 2001 and 2007 science reports from the Intergovernmental Panel on Climate Change that had predicted the world was likely to warm by an average of about 0.2C a decade. The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

        Yes, it’s misleading, and Leake gives the game away himself; “…The implication…”.

        Even Leake has to concede it’s only implied.

        And when did the concept of ‘average’ become so hard to understand??

      • Latimer Alder

        I advised Chris H-S earlier to stop digging as he was in a deep hole.

        On reflection, this was not good advice. For it is stupid academic squabbles like this that continue to harm the reputation of climatology and climatologists in the public mind.

        Keep on squabbling guys. Whatever the exact weight put upon appendix 7 subsection 3 caveat 7, the general public have been led to believe – by constant propaganda from climatologists and their political allies for a decade or more – that we live in a dangerously warming world and so must immediately make sacrifices and do counter-intuitive things for the good of the planet.

        I do not recall that the take-home message of AIT was that warming was going to be a sort of on/off/maybe next year phenomenon, but that it was happening now, was real, and was dangerous.

        Maybe it is different in the US, but in the UK at least we have a well-founded and deep suspicion of salesmen who get you to sign up to something for its many benefits, and only discover years later that the small print buried deep in the Appendix means that the policy doesn’t apply when you need it most.

        So Mr Chris H-S, keep on pointing out that on a close reading of subsection 7, clause 7 para 16 (as amended by subsequent resolutions as needed) means that what the IPCC said wasn’t exactly what they meant and so they have suddenly invented academic wriggle room. Shout it loud from the rooftops! Writhe and rend your raiment about how tough the press are on you and how a journalist this time hasn’t presented your case in the most favourable light.

        Then point me to all the writings in the last twenty years where you and colleagues have been equally loudly shouting that warming was only going to be intermittent, that the idea of ever-increasing warming was wrong, that Al Gore had vastly overstated the case and about how the sceptics got that right.

        When you can produce an extensive library of such documentation I’ll be happy for you to consider yourself vindicated.

        But until then, no deal.

      • steven mosher

        Chris,

        If the IPCC does not intend people to believe that they have made short term projections, then they need to change the way they draw graphs. If they truly believe that the next 10 years are a total mystery then they need to stop drawing graphs that tend to convey that message. It’s not that hard to be clear about this. They go the extra mile to make sure that some graphs are not misunderstood; they should extend this to all their presentations. If they truly have no idea what the next 10 years will hold, but are confident about 20 years from now, they should clearly say so when they present charts.

        The IPCC publishes a chart like this. Look at the care they take in the legend.

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-10-4.html

        “Figure 10.4. Multi-model means of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th-century simulation. Values beyond 2100 are for the stabilisation scenarios (see Section 10.7). Linear trends from the corresponding control runs have been removed from these time series. Lines show the multi-model means, shading denotes the ±1 standard deviation range of individual model annual means. Discontinuities between different periods have no physical meaning and are caused by the fact that the number of models that have run a given scenario is different for each period and scenario, as indicated by the coloured numbers given for each period and scenario at the bottom of the panel. For the same reason, uncertainty across scenarios should not be interpreted from this figure (see Section 10.5.4.6 for uncertainty estimates).”

        They took care in the legend to advise people NOT to interpret discontinuities in the lines. Did they take care in the legend to tell people not to take the short term lines seriously? No. If they really have no clue about the next 10 years, then they need to show that and explain that.

      • Steven mosher, I don’t see the problem with the graph you mention. It includes a shading envelope that indicates a range of possibles. It has a horizontal scale on which ten years doesn’t even show (the tick marks are 50 years apart). You look and you immediately see that the increase over 10 or 15 years is small by comparison with the shading width.

        Communication can always be improved, and I don’t want to get sucked into defending the IPCC as perfect communicators. I’m simply saying that Jonathan Leake was incorrect to speak of the IPCC making predictions that conflict with the lull in trend in the last 15 years. Inferring unstated predictions from a graph is invalid, whether the legend warns against this or not.

      • Latimer,

        The IPCC, as Chris points out, makes it clear that warming is not expected to occur in a linear fashion. This point is also made time and time again by climate scientists. If the general public doesn’t understand this then it might be in part due to poor communication by climate scientists and journalists, but what the general public might believe is not the issue here – this is a forum for people who actually take an active interest in the subject so there should be an expectation that they are rather better informed than the average man on the street, especially if they are going to make confident pronouncements about the supposed flaws in the IPCC position (and other things). So in order to argue about whether the IPCC is right or wrong on the subject it’s necessary to make a bit of effort to understand what the IPCC is actually saying. Of course if the purpose is simply to find reasons to say the IPCC is wrong then I guess it’s not so important.

      • AA,

        And communication is a two-way street…..issues at the receiving end can, and do, exist independently of the sender.

      • Michael,

        Yes, absolutely.

      • This might be some help to those agonising over how an average of 0.2C per decade over a long period might not manifest as a 0.2C increase every decade.

        http://www.agu.org/pubs/crossref/pip/2011JD016328.shtml
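The point can also be illustrated with a toy series: a fixed 0.2 °C/decade underlying trend plus modest interannual noise routinely produces individual decades that warm little or not at all, while the long-run fit still recovers the average. All numbers here are synthetic:

```python
# Sketch: a constant 0.2 C/decade trend plus year-to-year noise still
# yields widely varying decade-by-decade trends. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(50)
temps = 0.02 * years + rng.normal(scale=0.1, size=50)  # 0.2 C/decade + noise

# OLS trend within each successive decade, in C per decade
decade_trends = [10 * np.polyfit(np.arange(10), temps[i:i + 10], 1)[0]
                 for i in range(0, 50, 10)]
overall = 10 * np.polyfit(years, temps, 1)[0]

print("decadal trends:", [f"{d:+.2f}" for d in decade_trends])
print(f"50-year trend: {overall:+.2f} C/decade")
```

The five decadal trends scatter widely around the 50-year trend, which is the sense in which an “average of 0.2 °C per decade” need not manifest in any particular decade.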

      • Why don’t you say that the IPCC changed its prediction from 0.3C per decade to 0.2C per decade in 1995 [which it did] when it realised its estimation of climate sensitivity [among other things] was too high?

        It would be completely incorrect to say that, so there’s a good reason ;) The best estimate and range of climate sensitivities used in the SAR are exactly the same as in the FAR. The difference in the projections is due partly to lower emissions scenarios (less CO2, methane and CFCs in particular), and partly to the introduction of aerosols into the scenarios.

      • @andrew adams

        ‘The IPCC, as Chris points out, makes it clear that warming is not expected to occur in a linear fashion. This point is also made time and time again by climate scientists.’

        I’ll look forward to seeing the evidence to support your assertion that the point is made ‘time and time again’ by climatologists. Because it certainly doesn’t gel with my memory. Seems to me that the emphasis on this point has only recently been remembered once it has become ever clearer that the temperatures are stubbornly failing to do what they are told.

        So I’m sorry, but without further confirmation of an extensive library of quotations and presentations where this point has been drummed home, I have to provisionally put the ‘time and time again’ assertion in the same pigeon hole as Tony Blair’s ‘as I have said many times before and made my position absolutely clear’.

        This, of course, was pure Blair speak for ‘f..k me I’ve never thought of that before and need a second or two to dream up something plausible’.

        And you might want to reflect on this helpful remark from Richard Betts of the Met Office and the IPCC, commenting today at Bishop Hill

        http://bishophill.squarespace.com/blog/2012/2/7/information-commissioner-on-academic-data.html?lastPage=true#comment16812437

        He says:

        ‘I think the problem is that an own-goal has been scored in the communication of climate model projections. They are normally presented with the results smoothed over time, or as decadal means, for clarity of presentation. This has meant that it was not at all obvious that natural variability is important in the shorter term’

        which is a polite (as ever from Richard) and face-saving way of saying ‘Its a fair cop, guv. You’ve got me bang to rights’

      • Paul S [8th@8.26am] –

        Fair point.

        I can’t argue with that so I’d better take it on the chin.

        Thanks for the clarification.

      • Chris Ho-Stuart: Communication can always be improved, and I don’t want to get sucked into defending the IPCC as perfect communicators.

        Taken on the whole, that’s what you have done. Not that IPCC was perfect, but that it was so clear that it was nearly perfect.

      • Chris hijacked my post, but the point remains. Multiple hypotheses are in play. If the AGW hypothesis advocates claim it will take decades to resolve this issue, then so be it. Suspend all action in the interim. How is 2040 for a decision date? Sorry, but the absurdity is showing.

      • Latimer,

        Are you serious? Read Chris’s arguments above – are you really saying that you have never seen them made before? Maybe you should try actually listening to what scientists say and giving their arguments proper consideration rather than automatically dismissing them out of hand, then they might come as less of a surprise next time.

      • Latimer Alder

        @andrew adams

        I have read chris h-s’s arguments. And pretty thin they are too

        1. They seem primarily to rely on appealing to the ‘disclaimers’ that the IPCC published…like those boring bits on TV ads that say things like ‘your results may differ’, ‘applications subject to status’, ‘no guarantee express or implied’, and most importantly

        ‘this advert is probably a load of s**t but we hope you won’t notice’.

        Any organisation that has to rely on such get-out clauses to counter charges that their product or service does not remotely perform as advertised merely shows that they are disreputable and not to be trusted. More at the shoddy end of e-Bay rather than the John Lewis of scientific organisations.

        2. He then proceeds via some devious route to try to pull more wool over our eyes, arguing that, legalistically, the IPCC is only the summariser of information provided by others. So any mistakes in the basic data aren’t the IPCC’s fault and it is, therefore, completely blameless for anything published under its name.

        Which might just possibly have some traction if the group of people doing the summarising were completely independent of those doing the basic research. But they aren’t. They are the same people. By definition. The IPCC makes a point of picking its authors from exactly the same community as those doing the research.

        So when, for example, researcher PJ (as we may call him) writes that MM’s paper on some old teleconnection shit is the hottest thing since sliced bread and conclusively proves that Thermageddon is expected two weeks come next Michaelmas, he is reviewing his long-standing research buddy’s work. No wonder that he gives a seal of approval. And, surprise, surprise, we will find MM saying nice things about PJ also.

        The IPCC can’t just wriggle away from its responsibility by pleading that the review was written by an ‘independent’ guy, and any consequences are nothing to do with them.

        If this is really news to you, please study Donna LaFramboise’s lovely tome ‘The Delinquent Teenager’. Great stuff.

        3. Timing

        Even with all the deficiencies above, I’d have a bit more sympathy if people like Chris H-S had a demonstrable track record of loudly shouting that the trend would be intermittent going back some years.

        And they haven’t.

        Instead we’ve had such lunacies as the Mad Woman from the Met Office going on national TV to assure us that last year’s cold and snowy winter in the UK was a direct consequence of global warming. I don’t recollect H-S – or indeed yourself – protesting that she was wrong and that this merely showed that warming had paused.

        We had the man in 2000 telling us that because of global warming, snow in UK would be a thing of the past. I don’t recall his immediate rebuttal that he had been misquoted and that really it might be a whole generation before any such effects came into play.

        You claimed earlier that climate scientists have made this ‘intermittency’ point ‘time and time again’. And I asked you to provide some documented verification that they have indeed regularly and firmly stressed this point.

        So far you do not seem to have been able to. Maybe it is still in preparation?

        But hey, what do I care? Shenanigans like this word-chopping a la Ho-Stuart merely serve to reduce the climatologists’ credibility yet further. As Betts from the Met Office so nicely put it:

        ‘It’s been clear to me for a while that the field of climate science has a lot of work to do in regaining trust’

        and stuff like this makes that task ever harder.

      • Latimer,

        I don’t consider myself to be an expert by any means, but in the few years I have been taking an interest in the subject of climate change I have tried to educate myself as much as possible about the various scientific arguments surrounding it, and one thing that has constantly been impressed upon my mind is that when there is a long-term trend caused by increasing GHG levels there will be periods when it is masked (or accentuated) by short-term natural variability. This is reflected in individual model runs, but because the timing of events such as El Nino/La Nina, volcanic eruptions etc. is unpredictable, when projections are based on ensemble runs these fluctuations tend to average out and the projection shows a fairly steady trend.
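The averaging-out effect described above can be illustrated with a toy simulation. Everything here (trend, noise level, AR(1) persistence, number of runs) is an invented illustration, not output from any actual climate model:

```python
import random
import statistics

random.seed(7)

def model_run(years=30, trend=0.02, noise_sd=0.15):
    """One hypothetical model run: linear trend plus AR(1)-like interannual noise."""
    temp, prev = [], 0.0
    for t in range(years):
        prev = 0.6 * prev + random.gauss(0, noise_sd)  # persistent natural variability
        temp.append(trend * t + prev)
    return temp

runs = [model_run() for _ in range(50)]
ensemble_mean = [statistics.mean(r[t] for r in runs) for t in range(30)]

def detrended_sd(series, trend=0.02):
    """Scatter left after removing the known underlying trend."""
    return statistics.stdev(x - trend * t for t, x in enumerate(series))

# Large in any single run, much smaller in the ensemble mean
print(f"single run scatter:    {detrended_sd(runs[0]):.3f}")
print(f"ensemble mean scatter: {detrended_sd(ensemble_mean):.3f}")
```

Each individual run wanders above and below the trend line for years at a time, but averaging fifty runs washes those excursions out, which is why an ensemble projection looks smoother than any single realization.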

        Now funnily enough I don’t bookmark every interesting web page I visit or every informative blog comment I see, but here are a couple of examples

        http://thingsbreak.wordpress.com/2009/09/10/of-moles-and-whacking-climate-models-didnt-predict-this-lack-of-warming/

        http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/

        (there are loads more examples at RC)

        but it’s a point that has come up countless times in discussions on the various climate blogs I visit, including this one. I simply can’t believe you are unfamiliar with this argument. And to point it out isn’t to seek some kind of disclaimer or get-out clause; it is an absolutely valid point and an essential part of the argument about how the IPCC projections compare to actual observations. As I said above, even if (and I’m not saying this is the case) scientists have been poor at communicating this point to the public, there is no excuse for anyone who takes an interest in the subject to the point where they feel competent to make confident pronouncements on the state of climate science and the reality or otherwise of (C)AGW not to be aware of it. If you want to pass judgement on an issue and have your opinion taken seriously, you have a responsibility to actually make an effort to understand it.

        Regarding David Viner’s comment, I think that if he was quoted correctly then it was a silly thing to say, although the kind of winters we have had in the last couple of years have certainly been less common than they used to be. I’m not familiar with the particular comment from his female colleague which you refer to, but the notion that global warming and its side effects, such as the large reduction in the extent and volume of Arctic sea ice, could have a significant impact on atmospheric circulation patterns is one I have seen raised on occasion and doesn’t seem inherently implausible. It is interesting that during the last couple of winters, when we in the UK were suffering unusually (by recent standards anyway) severe conditions, other parts of the NH such as Greenland and eastern Canada were enjoying unusually warm weather. I find it interesting that you automatically assume her to be incorrect; I wonder what you base this assumption on.

      • Chris

        The point of the matter is quite simple.

        If IPCC cannot even project temperatures for one decade into the future, there is no reason to believe that they can do so for several decades or even centuries into the future.

        Figure 10.4 goes to year 2300! How totally absurd!

        The problem is the “slope of the line”, Chris (i.e. the model-derived 2xCO2 climate sensitivity used for making the projection).

        If this were corrected downward to around 1.0 to 1.5C, instead of 3.2C on average, the projections would fit fairly well.

        Incidentally, this range is also the observed long-term CO2 temperature response since 1850.

        Max

      • max:

        If IPCC cannot even project temperatures for one decade into the future, there is no reason to believe that they can do so for several decades or even centuries into the future.

        Questions about the implications of temperature trends to the validity of IPCC predictions, and questions about the utility of wide error-ranges aside, the illogic of that statement is just stunning.

        Even if I can’t predict how many times you will post comments at Climate Etc. in the next hour, I can predict that in the next week you will post somewhere between 1 and 1,000,000 comments.

      • @andrew adams

        @chris ho-stuart

        AR4 summary for policy makers.

        ‘For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected’ (p12)

        In a big box highlighted by a tasteful background colour to stand out from the rest. The first and most important box on the topic ‘Projections of future changes in climate’.

        This is the message that the ‘climatology consensus’ wanted the politicians and the public and the press to take away with them. This was their projection. Even if they read nothing else at all about climate change, this is what they wanted Blair and Bush and their officials to know. This is what they expected to be included in the front-page articles in the London Times and the NYT and the WSJ. This was what they wanted to be the discussion point on the TV and the radio.

        No caveats. No ifs or buts. No ‘your results may differ’. No cautionary notes. A definite, unequivocal statement. The summation of 20-odd years’ work by thousands of people and tens of billions of dollars of public money.

        If the IPCC had wanted to give a different message, there are many other ways it could have phrased this paragraph. But it didn’t.

        My case rests

      • Latimer,

        Leaving aside for now the question of how effectively the IPCC communicated the expected nature of future rising temps, what would be your expectation based on your understanding of the science?

        Assuming for argument’s sake that the IPCC’s calculation of the long term trend was broadly correct, would you expect temps to rise in a more or less linear fashion or would you expect there to be periods when temps were flat or even falling?

    • Steven Mosher

      correct. the ipcc does not do science. its documents are not scientific. I think that citing AR4 as a source is no better than citing wikipedia. further if we want to evaluate ipcc documents the standards we need to apply are standards like

      accuracy
      openness
      transparency
      traceability

      • Steven, Dr. Curry, there is a point to what Fred and Chris have been on about. But I don’t think they understand what it means in terms of the discussion here. From the figure you and Steven posted and the write-up in AR4, the IPCC claim is that by 2030 it will be such and such, and that the anthropogenic influence will be twice the natural variation. It is true that the scenarios are from 2000, so using 2000 is a good choice. At 2030, the anomaly is expected to be about 0.7 C, but starts at 0.2 for year 2000. One third of that is 0.167 C. That offset is the maximum that natural variability can account for at 2030. The estimate for 2011 is about 0.5 on the graph. So, accounting for all the natural variance for the first 11 years since 2000, the anomaly has to be about 0.333 C. But the trend from 2000 is showing us at about 0.28 C at this time on the same baseline, unless I have been wrongfooted.

        So maybe what Leake said is not exact, but it is accurate, indicating the models are running high by about 0.016 C per year. This agrees with other posters such as Lucia, in general and approximately. So, taking into account what was actually said at 10.3 and 10.4 in AR4, it cannot be claimed at present that the models have been “falsified.” And as pointed out, over such a short number of years this would be expected, and the final trend could actually be higher than what AR4 stated. It is too early to claim victory, but not too early to point out that the odds are that the models are running high compared to what the IPCC actually stated.

    • steven mosher

      “(4) There are now more models. In 1992 the model results quoted probably depended overmuch on a GISS model, which then had sensitivity on the high side. (Caveat: transient response sensitivity is probably more useful than equilibrium sensitivity for looking at shorter scales and I’m not so sure of the numbers obtained in 1992.)”

      hmm. ModelE has a sensitivity of 2.7. Not sure what you are referring to.

      • This model, which turned out to have a lower sensitivity, was developed around 1997, I think. If you read one of the Hansen 1997 papers he talks about using a new model with a lower sensitivity. Before that it was something like 4–4.5ºC.

        However, the projections in the FAR were not very dependent on the spread of model sensitivities – the ‘best estimate’ was produced by comparing model experiments with observations and scaling to infer a climate sensitivity of 2.5ºC (2.1ºC if compared to the current 2xCO2 RF formulation).

      • There are many trends found in nature and the works of man that have the characteristics described for temperature. They are sine waves of varying amplitudes with sawtooth irregularities.

        Should the IPCC have used a better description of the waveform their models create? Yes, but I’ll bet they didn’t really expect general readership to get involved.

        If the frequency of the temperature trend is the long term forcings (mostly from things that happened to the ocean 800 years ago) and the sawtooth irregularities are what we can measure with satellites, ARGOs and the occasional thermometer at the airport, this waveform looks like a gazillion (pardon the technical description) others.

        If you are convinced that you have separated the sawtooth irregularities from the actual signal, then you can set them aside when doing major calculations.

        If on the other hand there is some uncertainty as to what forms part of the sawtooth variation and what is part of the underlying signal, you need to pay pretty close attention to all components of the information you receive.

        Which I think is a good description of where we are at the moment.

    • Chris is now deploying the AGW whack-a-mole defense: When things are going the way believers want, the IPCC is the paragon of climate science and those who dispute that are denialist scum. When the IPCC gets in trouble, the same believers claim it never even makes a prediction, and those who claim otherwise are liars.

    • Chris Ho-Stuart: It is also a misrepresentation of what the IPCC does to speak of IPCC “hypotheses”. The IPCC is not a research body. They don’t do scientific work. They summarize it. They make statements with associated confidence levels, based on the combined work of a lot of scientists, but these are not in the form of a “hypothesis”, but a conclusion. Whether you agree with them or not, the distinction matters.

      That is nonsense. The hypothesis is there whether you want to assign it to the IPCC or not.

      Short term variations like this ARE a matter of scientific interest and hypothesis and competing ideas. They are not a matter of a clear consensus. And neither does the IPCC make strong claims or hypotheses on them — other than the statement that they ARE comparatively short term and that we expect the longer term trend to continue upwards.

      That is a nice clear statement of the hypothesis. If the short-term lower-than-predicted temp records continues, then the hypothesis will be discredited. Right now, all we can say is that the prediction based on the hypothesis does not have a demonstrated record of accuracy.

      • Matt –

        Right now, all we can say is that the prediction based on the hypothesis does not have a demonstrated record of accuracy.

        If there was no prediction for the first 12 years of the 21st century, then it seems a bit misleading, in 2012, to say that we do not have a demonstrated record of accuracy for predictions of average temperature change for the 21st century, or through 2050, or even through 2025.

        Of course, I would expect that if temperatures during these 12 years had increased at a rate consistent with the predictions of average increase through longer time periods, some would say it was evidence that the predictions were accurate.

        The predictions are information. The record of temperature trends over the past 15 years is information.

        The implications of the predictions that were made is worthy of discussion. And discussion of the meaning of the trends and predictions, and implications, without a mention of the caveats that were made:

        “The rise will not be steady because of other factors.”

        is not particularly useful. I would rate it at about the same order of meaning as a discussion of the hypothesis without discussing the temperature trends subsequent to the predictions that were made.

        So the question I would have is why didn’t Judith or Leake mention the caveat the IPCC put in right there along side the predictions they (Judith and Leake) spoke of?

      • Joshua: If there was no prediction for the first 12 years of the 21st century, then it seems a bit misleading, in 2012, to say that we do not have a demonstrated record of accuracy for predictions of average temperature change for the 21st century, or through 2050, or even through 2025.

        Whatever you wish to call them, they have no demonstrated record of accuracy. They were presented to the public as though they were accurate descriptions of what would happen imminently and persistently without immediate action. Only after it was clear that they were wrong was there the increased “clarity of communication” that they did not really rule out 12 years of nearly no increase in mean temp, and that they were not intended to tell us what would really happen without action.

        What’s more, it’s still extremely important to act now (we have been warned) because the “non-prediction” now is that the non-warming can’t last, though the warming may not be “steady” by some post-hoc redefinition of steady.

        Chris Ho-Stuart has been attempting to define “non – steady” in a way that no one took it when the IPCC report was written.

    • Latimer Alder

      @chris h-s

      In the eyes of the general public you are making a distinction without a difference.

      We/they do not really give a toss about whether the declaration is made by the IPCC in its capacity as the IPCC or by the individual members of the IPCC in their individual capacities, and then summarised by those self-same members in their capacities as members of the IPCC. It doesn’t matter one jot which hat they are wearing at the time. They are all climatologists… and (dare I say) members of the consensus.

      The crucial point that you are all dancing around and refusing to confront is that the idea of ‘Trust us, we’re climate scientists’ has taken another huge battering in the public mind. The last two years – since the Blessed Liberation of the Climategate 1000 and the Gods dumping snow and other s**t all over Copenhagen – have seen endless further revelations that the theories are ‘incomplete’ (at best) and that climatologists are no more trustworthy than the average Joe Sixpack… and in some cases quite considerably less so.

      And – in Europe at least – several unusually harsh winters, explained away by the faithful as yet more evidence that global warming is real and that we’re all freezing because the planet is getting dangerously warm (?), are in danger of turning you all from the high status of ‘trusted advisers’ a few years back into laughing stocks.

      You should be very worried by this, because, government funded as you all are, when hard financial times come – as they have – the easiest way to make cuts is to take away money from the softest targets. And climatology is now one of those.

      I am just amazed that you collectively have no response other than to reassert that you are right and deserve to be trusted. And now to start legalistically rewriting history when your predictions don’t turn out right. I don’t remember there being any such uncertainties when you were proclaiming ‘The Science is Settled’ and other such BS.

      And it is not good enough to say individually ‘it wasn’t me guv’. ‘Al Gore misrepresented my views’, ‘We always knew that the warming would stop’ and all that. Until a few weeks ago you were all proudly boasting about how much of a consensus there was. It was (occasionally) your ‘killer punch’. 97% of you all agreed. The flipside of 97% agreement is that 97% of you also have to take the rap.

      So keep on arguing about the exact wording of who said what to whom and when – and whether they were acting in their individual capacity or collectively or as members of the consensus. It really doesn’t matter any more. The general public will look on with amused bewilderment as you try to argue that black is white, that hot is cold and that you deserve our trust.

      From Hero to Zero is but a short downhill slide. And you guys are starting your descent and accelerating like the slope of a hokey stick.

      • Markus Fitzhenry

        “The general public will look on with amused bewilderment as you try to argue that black is white, that hot is cold and that you deserve our trust.”

        I think you are absolutely right, Latimer. They will wonder why climatology measured average temperature at the surface yet solar fluctuations in W/m2 at the top of the atmosphere in the mismatch. Peaches and plums.

        If they had only measured incoming solar energy percentage of distribution through an atmosphere with a mixed albedo.

        Anyway, Bob Fernley-Jones reckons it works something like that.

  3. They make statements with associated confidence levels,…

    Would that Judith would do likewise on a more consistent basis.

  4. In complete contrast to the point I want to make, RSS have published in the last 24 hours their data showing precisely no warming at all since 1997.

    They also, for those who have been waiting to celebrate for a long while [genuine alarmists who want to be proven wrong] show that for the last 15 years – since the beginning of February 1997, there has been global cooling.

    Perhaps this isn’t in contrast to my point after all, which is that such things are essentially meaningless. The globally averaged temperature anomaly has a tiny modicum of virtue solely because there is precious little else. To quote thousandths of a degree is insanity, whereas a tenth or two is just a basic misunderstanding of noise, averages, chaotic systems and the vaguest of measuring coverage.

    It seems to me to make some sense to say that the 20th century saw a rise in temperature of approximately three quarters of a degree. But that seemingly included three 30-year periods that were different from what came before and after. And even then, these observations are barely discernible from a realistic distance.

    It strikes me as a little irrational – though very human – to attempt to extract genuine meaning from 15 years of data. A third of a century? Possibly, maybe, just about – depending on the strength of the signal, but tempting as it is, I think staring hard at messy little bits of noise (from less than half of that time) hoping to see signs and wonders is a little too much to ask.

    Check this out from Richard Lindzen – not because it is partisan [it isn’t, in this context] but because he uses visual means to make the very same point as I have tried to do.

    • Thanks, Anteros, for the link to the excellent presentation by MIT’s Professor Richard Lindzen!

    • Anteros

      Professor Lindzen’s brief presentation puts it all nicely into perspective.

      I hope Joshua and Chris take the time to see it.

      Max

      • It sounds like they all had a laugh on us too, at 2:13… who said scientists don’t have a good sense of humor?

    • Vaughan Pratt

      Very compelling visuals in the Lindzen video, Anteros, thank you for posting that.

      However one has to wonder whether there was anyone in that audience both competent in statistics and willing to challenge Lindzen on the following omission from his presentation.

      If each point in the right slide is obtained as the average of 100 more or less normally distributed points in the left slide, the error bars shrink by a factor of sqrt(100) = 10. Lindzen did not mention this.

      I took Lindzen to be implying, both by this omission and his subsequent remarks, that in fact they don’t shrink, and that it is therefore misleading to zoom in on the right by a factor of sqrt(n) (n the number of points on the left producing one point on the right) without also increasing the length of the error bars in proportion.
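The sqrt(100) shrinkage is easy to verify numerically. A minimal sketch with synthetic Gaussian noise (illustrative numbers only, not the data behind Lindzen’s slides):

```python
import random
import statistics

random.seed(42)

# Each "right slide" point is the mean of 100 "left slide" points
n_points, group = 500, 100
raw = [random.gauss(0.0, 1.0) for _ in range(n_points * group)]
means = [statistics.mean(raw[i * group:(i + 1) * group]) for i in range(n_points)]

sd_raw = statistics.stdev(raw)      # about 1.0 by construction
sd_means = statistics.stdev(means)  # about 1.0 / sqrt(100) = 0.1

print(f"SD of raw points:   {sd_raw:.3f}")
print(f"SD of 100-pt means: {sd_means:.3f}")
print(f"shrinkage factor:   {sd_raw / sd_means:.1f}")
```

The shrinkage factor comes out close to 10, matching the sqrt(n) rule for independent, identically distributed points; autocorrelated data would shrink more slowly.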

      Now imagine that McIntyre was in the audience. Would he have raised this point with Lindzen at question time, or would he have passed over it in silence?

      Now further imagine that the speaker had been Mann instead of Lindzen, with the exact same talk, slides, and emphases, and ask again what would McIntyre have done.

      It would be a very interesting poll to see who believes McIntyre would be just as likely to have raised this point with Lindzen as with Mann, and who believes otherwise. Especially if McIntyre himself were among those polled.

      As the one who posted this video, Anteros, what do you think?

  5. In a better world you would think that slide would settle it.

    This isn’t a better world, it’s a declining one at the moment. AGW advocacy is evidence of decline.

  6. Interesting post Dr Curry.

    Non-linear and non-ergodic systems do not produce data that can be used for prediction. It may be possible, however, to separate some of the Earth systems which interact to produce climate and climate change, and some of these systems may well prove to be ergodic, while any non-linearities may yield to discretisation techniques.

    It may well be the case that only the systems that produce forcings, e.g. volcanic eruptions, solar winds, sunspot activity and the like, could be non-ergodic but still capable of yielding scenarios for climate modelling purposes.

  7. I need someone much smarter at statistics than I am to explain the real differences between II and III in the opening post

  8. “IMO, the standard 1D energy balance model of the Earth’s climate system will provide little in the way of further insights; rather we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models, and explore the complexity of coupled nonlinear climate system characterized by spatiotemporal chaos.”

    No doubt about that; even a basic 3D model would be years ahead of the game.

  9. Lack of predictability is not a challenge of H3, it is an intrinsic feature of chaos. This limit is what we should be trying to establish, not H1 sensitivity.

  10. Markus Fitzhenry.

    The whole of Climate Chaotic Instability: Statistical Determination and Theoretical Background assumes a partial argument. Rather religiously, actually. It offers three hypotheses that explain 20th century climate variability and change, but nothing about 21st century hypotheses. It finds that only the three investigated scenarios have merit in the scientific debate about the atmosphere.

    What is its purpose, if only to reflect on a possibly invalid theory? The consensus is very much alive. No mention at all of any contrarian perspective.

    It allows only three scenarios in the mix. What about macro-climatology? As there are no verses in the book about it, scientists don’t have to consider it and can remain in a bliss of rhetoric.

    Is climate science such a religion that it must issue fatwas against scepticism and logic? I can easily understand how difficult it was for Galileo to shift an incorrect scientific paradigm.

    Climate scientists can fall off the end of the earth while reason will remain firmly planted on this beautiful mother of an Earth.

  11. If we have a predicted rate and we have real data, can we not work out the minimum degree of ‘noise’ in the system? If the model states 0.2 degrees per decade, then true minus modelled gives us the current noise. From this noise we can work out the likelihood of different temperature swings, again based on true minus modelled. We can then see if we can fit 1900–2012 without a slope and see the probability of it occurring at random.
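One hedged sketch of the test proposed here, assuming white (uncorrelated) Gaussian noise and purely illustrative values for the noise level and the observed trend; real climate noise is autocorrelated, which would widen the no-slope null distribution considerably:

```python
import random

def ols_slope(y):
    """Ordinary least-squares slope of y against the index 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

random.seed(0)
n_years = 113     # 1900-2012
noise_sd = 0.1    # assumed interannual noise, deg C
observed = 0.007  # assumed observed trend, deg C per year (~0.7 C per century)

# How often does pure noise, with no slope at all, produce a trend this large?
trials = 5000
hits = sum(ols_slope([random.gauss(0, noise_sd) for _ in range(n_years)]) >= observed
           for _ in range(trials))
print(f"P(noise-only trend >= {observed} C/yr) ~ {hits / trials:.4f}")
```

With these assumptions the noise-only probability is essentially zero over 113 years; the interesting question is how much that changes once realistic persistence (red noise) is put into the null model.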

    • Markus Fitzhenry.

      “a predicted rate – If the model – then true-modeled – again based on true-model – see if we can fit”

      The only models worth consideration that I know, are the blonde ones.

    • A very sensible idea.

      And all scientists must always be open to alternative hypotheses and explanations of the data, as this post illustrates. The more time goes on, the less confident we are in the main IPCC hypotheses.

      Doing the data analysis you suggest would help further the science tremendously. Hopefully someone is doing it somewhere.

  12. In order to derive a temperature shenomaly dT, one needs to know an initial temperature T1 and an end temperature T2.

    So I ask, what was the global temperature T1 in the year 1750, 1850, 1900 (you choose)? How was it arrived at, and what a shmuck you are if you think I’m going to accept an answer to hundredths of a degree.

    AGW stands on the pillars of a SHMUCK SHENOMALY.

    Highly educated doctors and PhDs, but not an ounce of common sense amongst them. It’s bloody well embarrassing.

  13. Obviously they should have put error bars on the 0.2 degrees per decade. If you use two years a decade apart, and the interannual variability is several tenths of a degree, you are not going to get a 0.2 degree trend very accurately. I don’t know how people are using 1997 to compute a trend, but they should average at least a few years on each end to reduce the error bars to something where 0.2 degrees would be detectable. I always recommend at least a 10-year average, which gets rid of solar cycles too (hint: not good starting at a solar max and ending in a min 15 years later). Using decadal averages, the last decade was 0.15 degrees warmer than the previous one, and the error bars are actually smaller than the trend.

    • Jim D

      I agree with you that IPCC should have put error bars on the 0.2 degrees per decade.

      Since the most recent 10-year period shows cooling of -0.1 degrees per decade, the error bars should have been +0.2+/-0.3 degC per decade.

      Or, more accurately, cooling of -0.2+/-0.1 degC per decade.

      Right?

      Max

      • Statisticians, and I’m not one, should realize that when the detrended standard deviation is 0.1 degrees (which it is close to), you can’t get an accurate trend of tenths of a degree per decade from two years separated by ten years. A zero trend is just as likely as a 0.2 degree trend if the real trend is 0.1. Averaging more years reduces the standard deviation by the square root of the number of years, so by the time you average ten years the standard deviation for a decade is down to 0.03 degrees. Now you can get a trend from two decades with much smaller error bars, and a trend of 0.1 degrees would be more likely to be seen.
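A toy check of the square-root argument (all numbers here are illustrative assumptions): estimate a per-year trend first from two single years ten years apart, then from the difference of two decadal means:

```python
import random
import statistics

random.seed(1)
true_trend = 0.01  # assumed underlying trend, C per year (0.1 C per decade)
noise_sd = 0.1     # detrended interannual SD, as in the comment

def simulate(years=20):
    """Twenty years of trend plus white interannual noise."""
    return [true_trend * t + random.gauss(0, noise_sd) for t in range(years)]

endpoint, decadal = [], []
for _ in range(2000):
    y = simulate()
    # (a) difference of two single years ten years apart
    endpoint.append((y[19] - y[9]) / 10)
    # (b) difference of two decadal means, centres ten years apart
    decadal.append((statistics.mean(y[10:20]) - statistics.mean(y[0:10])) / 10)

print(f"endpoint method: SD = {statistics.stdev(endpoint):.4f} C/yr")
print(f"decadal method:  SD = {statistics.stdev(decadal):.4f} C/yr")
```

Under these assumptions the endpoint estimator scatters more widely than the 0.01 C/yr trend it is trying to measure, while the decadal-mean estimator's scatter is several times smaller, which is the sqrt(10) gain being described.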

      • Jim, averaging more years does not reduce the SD. You are confusing years with samples of the same thing.

    • Smoothing? Why not just examine the actual data using a method designed to detect anomalies that does not reduce the information content?

      Cusum is your friend

      http://books.google.com/books?id=cTwwtyBX7PAC&pg=PA192#v=onepage&q&f=false

      http://www.cs.tamu.edu/academics/tr/tamu-cs-tr-2007-1-2
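For readers who haven’t met it, here is a minimal one-sided CUSUM sketch; the parameter choices are illustrative and not taken from the linked references:

```python
def cusum(series, target=None, k=0.5):
    """One-sided CUSUM statistics for detecting a shift in level.

    target: reference level (defaults to the mean of the series)
    k: slack allowance, in data units, absorbed at each step
    Returns the upper and lower cumulative sums at each point.
    """
    if target is None:
        target = sum(series) / len(series)
    hi = lo = 0.0
    hi_path, lo_path = [], []
    for x in series:
        hi = max(0.0, hi + (x - target - k))  # accumulates upward excursions
        lo = min(0.0, lo + (x - target + k))  # accumulates downward excursions
        hi_path.append(hi)
        lo_path.append(lo)
    return hi_path, lo_path

# Toy series: flat at zero, then a step change of +1 at index 5
data = [0.0] * 5 + [1.0] * 5
hi, lo = cusum(data, target=0.0, k=0.25)
print(hi)  # stays at 0 before the step, then climbs by 0.75 per point
```

A change point is declared when the cumulative sum crosses a chosen decision threshold; unlike smoothing, no observations are averaged away in the process.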

      • What is special about an annual average? A decadal average is just as useful and has smaller error bars in addition to removing sunspot cycles quite well. Maybe you prefer the raw daily or hourly data?

      • Averaging always loses information. A decadal average loses ten times as much as an annual. And a decadal average is worthless if you are interested in the behavior within that decade.

  14. Models, shmodels.

    While there may be some die-hards who haven’t gotten the word yet, we know now that our climate cannot be successfully modeled.

    Too many uncertainties

    Too many unknowns

    Too much chaos.

    For a good treatise on WHY model predictions – especially those covering longer time periods – do not work, and why “experts” have a worse chance of predicting something correctly than “non-experts”, read Nassim Taleb’s The Black Swan.

    Max

    • Manacker, you convey a mix of Luddite, Malthusian, and Cornucopian perspectives, depending on exactly what you would like to see. This opinion of yours is on the Luddite side.

      Taleb’s book is not a dire warning of hopelessness. It is in fact a motivating influence and call-to-arms for engineers, scientists, mathematicians, and statisticians to get their act together. No one should be constrained to using Normal or Gaussian statistics any longer for environmental models. The natural world contains many fat-tail behaviors that were previously ignored because Normal thin-tail statistics were what we were taught.

      That’s what Taleb was saying. Uncertainties are bigger than we think, but that has no relevance to actually making predictions. You just have to use the correct fat-tail statistics.

      Unfortunately, as Taleb wanted to sell books, he didn’t put a lot of math details into The Black Swan. It was left to the astute readers to figure this out. I use Taleb’s ideas heavily and they should be part of any uncertainty analysis toolbox.

      • “Cornucopian”? Copernican?

      • WHT

        Classifying my statement into some arbitrary categories you have picked out does not change the fact that the models cited by IPCC were unable to correctly project the temperature for the first decade of this century, as I stated (and as Jonathan Leake points out).

        That is the issue here, Web.

        Climate models cannot predict climate, because they fixate too myopically on human forcing while ignoring or simply not understanding everything else.

        My question to you (and it’s a serious question, Web):

        Since climate models have demonstrated that they are unable to forecast our climate one decade into the future, should we have any confidence in their ability to project climate changes over several decades?

        A simple YES/NO answer is OK, followed by a one sentence reasoning for why you chose this answer.

        Max

      • WHT

        Congratulations on having read Taleb’s book.

        So did I.

        It was not about climate change at all – it was simply about the utter futility and absurdity of trying to make long-term predictions in chaotic systems with more unknowns than knowns.

        No. There is not a lot of “math” in the book (Taleb is not a “nerd” – and the book wasn’t written for “nerds”).

        But there is a whole lot of “common sense”, which (unfortunately) is missing in the projections of future climate change being sold by IPCC, starting with the failed projection of 0.2 degC warming for the first decade of the century (topic of this thread) and going on to the forecasts of 1.8 degC to 4.0 degC warming by the end of this century.

        Max

      • Latimer Alder

        Just a simple question

        If climate models are no good at predicting the future even ten years out, what use are they at all? Why do we spend any money at all on funding them and their staffs?

        It may be that there are some good reasons out there…but they haven’t yet come to my notice.

      • If climate models are no good at predicting the future even ten years out, what use are they at all?

        I think the word ‘even’ here suggests you are leading yourself astray in a belief that short-term prediction should be easier than long-term prediction. Analogously, you could ask why quantum mechanics is useful if it can’t predict the outcome of a single experiment.

        Short-term prediction carries a couple of major complications I can think of right now:
        1) dependence on initial conditions to define evolution of internal variability mechanisms. Similarly to weather forecasting, efforts can be made to set up a model to match initial conditions at a certain point in time but they are likely to break down pretty quickly because we lack the quantity and quality of data to be precise enough in the setup (and possibly because the chosen model does not accurately produce variability similar to that observed on Earth).
        2) possibility of influence from unpredictable factors (volcanic eruptions, solar variability).

        These factors carry less importance on multi-decadal timescales, in a probabilistic sense at least. A string of very large volcanic eruptions or a long period of extremely low solar activity would carry some significance but are not likely occurrences within the frame of, say, 50 years. I can’t remember where but I’ve seen it discussed that a ‘sweet spot’ for climate projections would be about 30-50 years. Shorter than that, unpredictable factors can have a considerable effect. Longer than that, the particular scenario (i.e. what humans will do) becomes an important factor and there is also the potential for dynamic ‘surprises’.

        On a more general point, projections of the future are just one possibility for use of GCMs. They are also indispensable tools for exploring factors which affect climate.

        There’s a useful discussion of decadal model predictions here.

      • Latimer Alder

        @paul s

        ‘you could ask why quantum mechanics is useful if it can’t predict the outcome of a single experiment’

        That you phrase the question that way shows that you don’t know much about QM.

        But my question about models still stands. Let me phrase it another way.

        We have spent something like $100 billion on climatology in the last 25 years. The purpose has been for us to understand better what will happen in the future wrt climate. And the final outcome of all that $100 billion is the climate models. Everything else is just part of the ‘scaffolding’ that goes into the construction of those models.

        And it is apparent that they don’t work very well – if at all – on the stuff we want them to do. It is even beginning to seem likely that they can never be made to do the things we would like them to do. That the nature of the climate system means that it is as insoluble a problem as is the behaviour of an individual wave/particle in the QM world.

        So, before we write off our $100 billion as just money wasted, I wonder if there are any side benefits of these models that we can point to and say ‘well at least we got ……’. Much like some think that going to the moon was a total waste of money but that we got teflon saucepans as a spin off.

        So – are there spin offs from climate modelling? Have we found out (by accident perhaps) anything useful from them?

      • The Quantum Mechanics analogy is apt.

        You can not predict the path of a single photon in the dual slit experiment, but you can predict the emergence of a diffraction pattern.

      • And the final outcome of all that $100 billion is the climate models. Everything else is just part of the ‘scaffolding’ that goes into the construction of those models.

        I’m now wondering what your understanding is of what climate models are, how they are built and how they work. Your characterisation simply doesn’t make sense to me – in many ways climate models are the starting point for research into the climate system.

        Climate models are built using quite simple (or in some cases, not quite so simple) rules such as the law of gravitation, the Planck function, the ideal gas law, pressure gradients etc. More recent models incorporate atmospheric chemistry, e.g. methane oxidising to CO2. The large scale complexity that emerges from these rules is due to the number of different objects/forces which are interacting according to them.

        I was listening to a Feynman lecture the other day and he was talking about a particular theorem related to Quantum Mechanics which had been known for about twenty years but never tested (this was in the 60s). The mathematics described by this theorem when applied to a real situation became so complicated that, at the time, the theoretical consequences couldn’t be calculated. Without an ability to model the consequences of a theory it can’t be tested. This is what climate models, GCMs in particular, offer – the ability to explore the consequences of physical laws that we think are having an effect on climatic systems so that they can be compared with observations. For example, if you ran a GCM without simulating the rotation of the planet there would be a huge difference in weather and climatic patterns.

        Where modelled consequences clearly don’t match observations the differences can be used to explore what’s missing or not quite right – perhaps the modelled elevation of land in certain areas is not quite right, causing a difference in the flow of wind currents, or maybe the grid resolution of the model is too coarse for certain features to properly resolve.

        Depending on what the problem is found to be the model can be improved or the error in the model can be quantified and taken into account in any analysis involving it.

        And it is apparent that they don’t work very well – if at all – on the stuff we want them to do.

        They seem to produce a generally good approximation of Earth’s climate. Not sure what you want them to do.

        So, before we write off our $100 billion as just money wasted, I wonder if there are any side benefits of these models that we can point to and say ‘well at least we got ……’

        Firstly, even if it is the case that $100bn has been spent on climate research very little of that would have gone on climate model development. Probably the largest expense in modelling would be purchasing and upkeep of the supercomputers used to run them.

        Regarding the side benefits, well, the end game for climate science would be the potential for geoengineering, of our own planet or perhaps another one in the distant future. The various space programs may eventually be able to get us to another planet, but the chances of encountering a planet habitable to humans would be greatly improved if we can cause it to be habitable. Along the way climate research has aided in vast improvements to forecasting of weather, El Nino and the monsoons.

      • Latimer Alder

        @paul s

        Thanks, I think I understand – and have always understood – how models are constructed. Many years ago I was briefly involved in similar efforts, so I am not a complete newbie.

        But you misunderstand the ‘we’ that I am using. ‘We’ in this case are the taxpayers. The people who ultimately pay your grants and bills and expenses and all that. And we (in that sense) only really fund climatology because we’d like an answer to the question about whether the climate is really changing in ways that might be detrimental to humanity, and if so when it will happen and how big it will be. And maybe to help give some ideas about what (if anything) we can/need to do about it.

        We ask you to find this out on our behalf and give you a pot of money, expecting you to come back with the answers.

        And you haven’t.

        Instead you’ve constructed a load of models (do we really need more than 20?) that can’t even tell us about the climate a few years out. They may be extremely intellectually interesting, crafted by the finest minds (though everything I read tells me that they are more thrown together like a heap of junk, and that it’s a miracle if they can be run twice without major realtime surgery because the coding and methods are so archaic) and beautiful in their elegance.

        But all those things are irrelevant. They do not do the job we have paid for them to do. They do not fulfil your side of the contract. You have had $100 billion, we have got nothing. They are junk. If you were a commercial organisation you’d be so deep in lawsuits as to be drowning.

        As to your belief that only a small part of the $100 billion went directly on climate modelling, that is about as daft as saying that the cost of going to the Moon was only the cost of the Lunar Module, since that was the only bit that actually got there. All of climatology – satellites, paleo, philosophy, datasets – whatever it may be, is spent in the end to support the models, to help you guys make better models and to answer the questions posed above. Splitting out one particular area of specialisation and saying ‘well, we didn’t get all the cash, it must be Joe down the hall who did’ is a cop out. It is all money for climatology, whatever its precise allocation within the system.

        And your spin offs onto other planets are so far into the future as to be little more than a wishlist. That we can now forecast some weather events better is indeed good news. But could we not have got the same result more quickly and more cheaply by just improving weather forecasting?

        Economic times are harder. Budgets are under pressure. In all government expenditure there is increasing pressure to deliver excellent value for money. This is as true of ‘research’ as it is of welfare or the military or anywhere else in public service.

        Seems to me that you guys have been left alone with your sandpit for far too long developing whatever caught your fancy and have taken your eye off the big picture. We don’t give you all this money to write papers, or to Kill the Deniers or to go to conferences. We give it to you to solve a particular supposed problem. And you haven’t done so.

        Time to start either getting the effort back on the right track, or to admit defeat and resign yourselves to the fact that it simply can’t be done. Your choice, but one you will have to make soon.

  15. “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period.”

    I have my doubts that scientists can measure global average temperature to within tenths of a degree at any given time. Does anyone seriously believe that “we” actually know the trend over 15 years to within 5 hundredths of a degree? Seriously?

    When did we develop such precise and reliable instrumentation?
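The doubt is quantifiable. A back-of-envelope sketch of the least-squares trend uncertainty for a 15-year annual series, assuming independent residuals with a standard deviation of 0.1 °C (a generous assumption: autocorrelation makes the real uncertainty larger still):

```python
import math

# OLS slope uncertainty for 15 annual anomalies with residual SD 0.1 deg C.
n = 15
sd_resid = 0.1
years = list(range(n))
t_mean = sum(years) / n
sxx = sum((t - t_mean) ** 2 for t in years)   # = 280 for years 0..14
se_slope = sd_resid / math.sqrt(sxx)          # deg C per year

# Two-sigma uncertainty on the total change over the 15-year span
uncert_total = 2 * se_slope * (n - 1)
print(round(uncert_total, 2))   # ~0.17 deg C, against the quoted 0.051 change
```

On these assumptions the two-sigma band on the 15-year change is about three times the 0.051 °C figure itself, which is why quoting it to three decimal places invites exactly this reaction.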

    • GaryM,
      Shhhhh! Pointing out the incredibly dubious nature of what AGW is built on is not fair.

    • Latimer Alder

      I believe the instruments in question are tree rings. And they work by a mystical process called ‘teleconnection’. Somehow they are able to give a precise record of temperatures hundreds of miles from their location.

      The theory of teleconnections has been written up by Drs C. H. Arlatan and S. H. Yster and is often cited in the climatology literature. Usually just after the horoscope page.

  16. Free at last, free at last; thank God Almighty hypothesis III is free at last. We can actually talk about it. We can discuss the math of Tomas Milanovic and the physics of Robert Ellison. The Tsonis paper can now be read and reread as a construct that is an alternative to the trace gas radiative transfer model. Is this really the promised land?
    Now really, I do have some tidbits that I have harbored, wondered about, and seem to fit with the implications of Hypothesis III:
    VS on Bart Verheggen’s blog in March 2010 demonstrated that for the time series of 1880 to 2008, temperature fell within natural variation; clarifying for me the falsehood of “unprecedented warming” in the late 20th Century. The other tidbit was an observation: global temperatures responded in a homeostatic way to perturbations by the volcanic eruption of Mt Pinatubo and the El Nino of 1998. Each time the temperature was forced up or down, the global temperatures returned to their previous baseline. Homeostatic mechanisms, as applied to climate change, would mean that climate sensitivity is very low, i.e., near zero. Therefore arguments and calculations of climate sensitivity, particularly at the 3.5 C guesstimate of IPCC, didn’t make sense to me. Now the climate scientists, other than the Team of course, can pursue identifying the precursors of abrupt climate changes and eventually be able to predict them. Pursuing this line of research is more likely than not to lead to better decadal weather and climate change forecasts. Can we put the jibber-jabbing aside for a while and concentrate on some science?

    • I agree that a lot of work has to be done on H III but don’t think it can be done on this blog.

      I think that a wiki should be set up and only invited contributors from many different disciplines use it.

      I rather doubt that any mainstream climate scientist would be a suitable candidate for this wiki. Too much linear/rational thinking to be undone methinks :)

    • the volcanic eruption of Mt Pinatubo and the El Nino of 1998. Each time the temperature was forced up or down, the global temperatures returned to their previous baseline.

      Gravity applied to airborne particulates will do this.

      • Gravity influences El Nino swings?

      • “Gravity influences El Nino swings?”

        Mt. Pinatubo generated particulates which eventually fell to the ground due to gravity.

        I view most skepticism as a lack of understanding.

      • So you are only half correct.

        Which pretty much applies to climate science. The part about CO2 is correct, but the part about positive forcings is yet to be proven correct and the part about it all leading to disaster is most certainly overblown to the point of being science fiction rather than science fact.

      • Web, we view your view as a lack of understanding. Evidence based.

    • What is more fun is that starting in 1998 or 2000 is no longer cherry picking, it is comparative analysis :) woohoo! its got a fancy name and everything!

    • Mt. Pinatubo generated particulates which eventually fell to the ground due to gravity.

      Yes, I think the conceptual error can be described by analogy to the basic laws of motion. The OP is making an assumption that volcanic eruptions apply a force to planetary temperature, which is then free to do as it likes within the reference frame of the planetary climate system, and that something else appears to be causing it to flip back into place.

      The reality is that the volcanic eruption applies a force, through the release of reflective aerosols into the stratosphere, moving the planetary temperature, but then a force of equal magnitude is applied as the aerosols are scrubbed out of the atmosphere. There are no clear homeostatic implications – flipping back into place is simply an expected consequence of the sum of forces.
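The point can be made concrete with a one-box energy balance model, C dT/dt = F(t) − λT, forced by a decaying aerosol pulse. All numbers below are illustrative, not fitted to Pinatubo:

```python
import math

# One-box energy balance, C dT/dt = F(t) - lam*T, forced by a volcanic
# aerosol pulse that decays with a ~1-year e-folding time.
C = 8.0       # effective heat capacity, W yr m^-2 K^-1
F0 = -3.0     # peak aerosol forcing, W m^-2
TAU = 1.0     # aerosol e-folding time, years

def run(lam, years=20.0, dt=0.01):
    """Integrate the one-box model; return (deepest dip, final T)."""
    T, t, Tmin = 0.0, 0.0, 0.0
    while t < years:
        F = F0 * math.exp(-t / TAU)
        T += dt * (F - lam * T) / C
        Tmin = min(Tmin, T)
        t += dt
    return Tmin, T

# Weak net feedback (lam = 0.8, high sensitivity) and strong net feedback
# (lam = 2.0, low sensitivity) both dip and then recover toward baseline.
results = {}
for lam in (0.8, 2.0):
    results[lam] = run(lam)
    print(lam, [round(x, 3) for x in results[lam]])
```

Both runs return most of the way to baseline once the forcing has decayed, so the recovery after Pinatubo by itself says little about climate sensitivity; only the depth and timing of the dip do.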

  17. “Note: hypothesis III is consistent with Sneyers’ arguments re change-point analysis.”

    Fundamentally incorrect.

  18. Chief Hydrologist

    The Times articles – I saw it in The Australian – had temperature graphs that show monthly data. It all peaked in early 1998 as a result of the 1997/98 ENSO dragon-king. A dragon-king – I have said before – is an extreme event associated with a chaotic bifurcation. An extreme ENSO event happened at the 1976/1977 ‘Great Pacific Climate Shift’. So we have a couple of examples in the record that are associated with climate shifts at decadal scales. They are linked to oceanographic and global hydrological shifts that are fundamentally important to human societies and the natural world. Oceanographers and hydrologists have been researching these things for decades. My own journey began in 1990 when I read an article on Flood Dominated and Drought Dominated Regimes in north-east Australia. The article (Erskine, W.D. and Warner, R.F., 1988, Geomorphic effects of alternating flood- and drought- dominated regimes on NSW coastal rivers. In R.F. Warner ed.) was inspired by an observation that rivers changed form in the late 1970’s from a high energy braided form to a low energy meandering form.

    ENSO determines 80% of temperature variability in the tropics (McLean et al, 2009) and 70% globally. So the record is complex to start with. There is decadal variability to ENSO and the Pacific more generally and they are associated with the trends of cooling and warming seen in the 20th century. Here is a graph from which this background variability has been removed.

    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=MONOTONIC.gif

    It shows moderate warming of 0.08 degrees C/decade in the 50 years to 2000. One of the things to keep in mind is that the background is considerably more variable than we have seen in the 20th Century. This can be seen in, for instance, the 11,000 ENSO proxy.
    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif

    There is complex and dynamic behaviour at all scales – but in one sense it is very simple. It is all about energy – energy in less energy out equals the change in energy stored in Earth’s climate system. The problem is the data – SORCE, CERES and ARGO all start post 2000 – so miss the critical shift around the end of the last century. Here is the RSS plot – if you start from the big La Niña in 2000 there is a temperature rise for the decade. Extra warmth is found in CERES and in the ARGO deep ocean data. The problem still is the shortness of the record and how much useful information it contains about even short term events.

    There is data at ISCCP-FD and at the Earthshine project that shows the 1990 shift. My feeling is that the cloud related shift to a cooling influence will intensify as La Niña increase in intensity and frequency for the next decade or 3.

    http://science.nasa.gov/science-news/science-at-nasa/2002/12apr_earthshine/

    http://www.bbso.njit.edu/Research/EarthShine/

    There are also intriguing suggestions that UV changes (not TSI) at the poles are implicated in mid latitude climate change (Lockwood et al 2010). Could this be the missing factor in little ice age dynamics regardless of whether the Thames freezes?

    Robert I Ellison

  19. Economic actions over purported climate trends are causing an increasing political/financial row:
    Airline industry split widens over EU carbon ‘tax’ row

    A day after China barred its airlines from complying with what many consider a tax, the head of the International Air Transport Association (IATA) warned that several nations view the EU scheme as an “attack on sovereignty”.

    “Non-European governments see this extra terrestrial tax as an attack on their sovereignty,” International Air Transport Association (IATA) director general Tony Tyler said in a speech to the European Aviation Club.

    • This issue has been canvassed previously. From my memory, several posters from the EU (or maybe England) supported the new EU tax on the basis that it only added about $30 to a trans-Atlantic flight.

      I pointed out that this tax covered non-EU land and oceans on long haul flights. I now have an answer on the additional cost of a return Sydney-EU-Sydney flight: about AUD$700 (Qantas flight fares, February 7).

      I should also point out that the Euro is a cot case (and the British pound is closer to this than is comfortable), so AUD$700 is considerable.

      I really hope that retaliation occurs. Perhaps we may see the Chinese buying old European castles at knockdown prices and turning them into fried rice outlets :)

      • A per capita “landing tax” of €500 for all EU citizens arriving at Chinese airports might be a good start.

        And this could easily be extended with a €1,000 “departure tax” for those wanting to leave.

        Max

      • And the Chinese have told the EU to go and do one, over this tax.

  20. Not a single reference to Earth Orientation Parameters in the article & comments. Every day it’s looking more & more like no one or almost no one who participates in this forum is serious about understanding natural climate variations. Particularly concerning is the apparently popular notion that hypercomplexity (in the mathematical sense) cannot be simple.

    Regards.

    • Chief Hydrologist

      Ride ’em cowboy. Gets so the irregularities in the Earth’s rotation are as hard to ride as a bucking bull from the western plains. No wait – that’s the bourbon whiskey. Maybe you should give up drinking. As for hypercomplexity – all we need is good ole Max Ent and a power distribution – gives us a fat head or a fat tail. I just keep getting the 2 mixed up. We don’t need no stinkin’ dynamical complexity, high falutin’ butterfly talk, bifurcated phase space and a whole lot of city slicker glop about autocorrelation and dragon-kings. A cowboy just needs a fat head (fat tail?) to express the untold vicissitudes of the soul. Dang nab city slickers will fall for it every time.

      • You have misinterpreted the term hypercomplexity. It is meant in the sense of hypercomplex numbers.

      • Chief Hydrologist

        It’s a shibboleth. What part of mad theory didn’t you understand? What the hell have hypercomplex numbers to do with anything in the real world? Oh for Christ’s sake. Either say something sensible or amusing.

      • Markus Fitzhenry

        How’s that.

        Now I know where that missing heat went. To the deep pressured depths of the oceans.

        “”The pressure of the atmosphere and bodies of water, has the general effect to render the distribution of heat more uniform. In the ocean and in the lakes, the coldest particles, or rather those whose density is the greatest, are continually tending downwards, and the motion of heat depending on this cause is much more rapid than that which takes place in solid masses in consequence of their connecting power. The mathematical examination of this effect would require exact and numerous observations. These would enable us to understand how this internal motion prevents the internal heat of the globe from becoming sensible in deep waters.

        General Remarks on the Temperature of the Terrestrial Globe and the Planetary Spaces; by Baron Fourier.””

      • Not joking about complex numbers Chief. Quite the contrary. This is one of the most serious problems in the whole climate discussion. Ignorance won’t make it go away, even if the ignorance comes from “experts”.

  21. Tsonis et al. (2007) is little consolation for skeptics. His definition of a major climate shift in 1976 occurred after a lull such as this. However his shifts and lulls have an amplitude of 0.1 degrees, and therefore are washed out in a longer term warming trend. Anyway, according to Tsonis, we would be due for another shift that will increase the temperature rapidly.

    • Chief Hydrologist

      I linked to a graph of Tsonis ‘washing out’ the natural variability in the 20th century – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/ – the residual is about 0.08 degrees C/decade and I am a little tired of repeating myself for people who are psychologically unable to process this fact.

      There are 2 relevant papers – amongst a plethora of excellent work.

      A new dynamical mechanism for major climate shifts
      Anastasios A. Tsonis, Kyle Swanson, and Sergey Kravtsov

      Has the climate recently shifted?
      Kyle L. Swanson and Anastasios A. Tsonis

      ‘This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’

      The periods last 20 to 40 years in the proxy record – and I don’t think that the 20th Century is an adequate handle on the limits of natural variability. You’re a cowboy that gets dragged kickin’ and screamin’ to the dance, Jim.

      Robert I Ellison
      Chief Hydrologist

  22. Big news on the USA Today website.

    http://www.usatoday.com/weather/climate/story/2012-02-07/warm-january-climate-report/52999508/1

    The really big news? Nary a mention of global warming or climate change.

  23. Sunday Times: So, Do We Freeze Or Fry?

    why has it warmed so much less than was predicted by the climate models?

    Here are the data:

    IPCC Climate Model Prediction => http://bit.ly/zA0a2j

    HADCRUT3 Observation => http://bit.ly/w2dh2R

    Comparison of model with observation => http://bit.ly/wVWllY

    Here is the discussion regarding the above question in the climate emails:


    1) … to argue that the observed global mean temperature anomalies of the past decade falsifies the model projections of global mean temperature change, as contrarians have been fond of claiming, is clearly wrong. but that doesn’t mean we can explain exactly what’s going on.

    2) Here are some of the issues as I see them: Saying it is natural variability is not an explanation. What are the physical processes? Where did the heat go? We know there is a build up of ocean heat prior to El Nino, and a discharge (and sfc T warming) during late stages of El Nino, but is the observing system sufficient to track it? Quite aside from the changes in the ocean, we know there are major changes in the storm tracks and teleconnections with ENSO, and there is a LOT more rain on land during La Nina (more drought in El Nino), so how does the albedo change overall (changes in cloud)? At the very least the extra rain on land means a lot more heat goes into evaporation rather than raising temperatures, and so that keeps land temps down: and should generate cloud. But the resulting evaporative cooling means the heat goes into atmosphere and should be radiated to space: so we should be able to track it with CERES data. The CERES data are unfortunately wanting and so too are the cloud data. The ocean data are also lacking although some of that may be related to the ocean current changes and burying heat at depth where it is not picked up.

    3) we can easily account for the observed surface cooling in terms of the natural variability seen in the CMIP3 ensemble (i.e. the observed cold dip falls well within it). So in that sense, we can “explain” it. But this raises the interesting question, is there something going on here w/ the energy & radiation budget which is inconsistent with the modes of internal variability that leads to similar temporary cooling periods within the models.

    http://foia2011.org/index.php?id=198

    JC, thanks for the link to the excellent Sunday Times article.

  24. II. Multi-decadal oscillations plus trend hypothesis: 20th century climate variability/change is explained by the large multidecadal oscillations (e.g NAO, PDO, AMO) with a superimposed trend of external forcing (AGW warming). The implications for temperature change in the 21st century is relatively constant temperatures for the next several decades, or possible cooling associated with solar. Challenges: separating forced from unforced changes in the observed time series, lack of predictability of the multidecadal oscillations.

    Based on the data, the oscillation is predictable for the whole temperature record as shown:

    http://bit.ly/Aei4Nd

    1880 to 1910 => Cooling

    1910 to 1940 => Warming

    1940 to 1970 => Cooling

    1970 to 2000 => Warming

    2000 to 2030 [?] => Cooling
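Girma’s alternating ~30-year legs amount to fitting a linear trend plus a roughly 60-year oscillation. That decomposition can be sketched as an ordinary linear regression, here on synthetic data (all numbers invented; writing the oscillation as sin and cos terms keeps the phase estimate linear):

```python
import numpy as np

# Synthetic "record": 0.05 deg C/decade trend plus a 0.1 deg C amplitude
# 60-year oscillation, with noise. Purely illustrative numbers.
rng = np.random.default_rng(42)
years = np.arange(1880, 2011)
t = years - years[0]
true = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 60.0 + 1.0)
anom = true + rng.normal(0.0, 0.08, size=t.size)

# Design matrix: intercept, trend, sin and cos of the 60-year cycle.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t,
    np.sin(2 * np.pi * t / 60.0),
    np.cos(2 * np.pi * t / 60.0),
])
coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
trend_per_decade = coef[1] * 10
amplitude = float(np.hypot(coef[2], coef[3]))
print(round(trend_per_decade, 3), round(amplitude, 2))
```

The fit recovers the planted trend and amplitude here, but note the challenge stated in the quoted hypothesis: on the real record nothing guarantees the oscillation is stationary, so the extrapolation to 2000-2030 is an assumption, not a result.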

  25. Heh. One of the jibes of warmists about skeptics is that they appeal to utter unpredictability and chaos and uncertainty to dismiss all “scientific consensus” re climate. Now, backs to the wall, it is that same “unpredictability” to which they now must have recourse to justify the utter lack of scientific “falsification testing” validation of their Great Cause.

    If the Foo Bird Sh**s …

    • I don’t know what exactly is chaotic about the oceans sinking a significant portion of the excess heat due to the energy imbalance.

      I and II are the same and III is a huge cop-out.

      • WHT

        I and II are not quite “the same”, but they are also not mutually exclusive, i.e. they can be combined: (let’s say 25 parts I and 75 parts II).

        Even III can be combined into a I and II combination.

        And we have no notion how these three theories interact, or whether there is a fourth one out there, lurking as a “black swan” we cannot even see.

        That’s why myopically concentrating only on anthropogenic greenhouse forcing is so silly – but, what the hell, it has been IPCC’s charter and raison d’être from its inception.

        Max

      • Chief Hydrologist

        Yeah – what’s so freakin’ complex about the coupled atmosphere/ocean system?

        This is getting funnier by the day.

      • The only way that the earth can exchange energy with the external system is through radiative energy transfer.

      • Web said, “The only way that the earth can exchange energy with the external system is through radiative energy transfer.”

        Very true. You are looking into the diffusion rate into the deep oceans; at what point does that rate decrease to 80 milliWatts/m^2?

        A doubling of CO2 increases the conductivity of the atmosphere by approximately 80 mW/m^2. Small potatoes, right?

      • WebHubTelescope: The only way that the earth can exchange energy with the external system is through radiative energy transfer.

        Is there a point to that?

      • Is there a point to that?

        The approach I use in applying physics is to remind myself of the fundamental laws as often as possible. This helps to rule out all sorts of impossible scenarios. The basic law of energy transfer establishes the long-term trend.

      • The long term trend (for the short term of millions of years) is dominated by ice ages. Does that help?

    • Markus Fitzhenry

      “Yeah – what’s so freakin’ complex about the coupled atmosphere/ocean system?”

      It’s hard to tie wealth redistribution into it.

  26. Girma | February 8, 2012 at 2:00 am

    Challenges: separating forced from unforced changes in the observed time series, lack of predictability of the multidecadal oscillations.

    I have long considered that the “forced/unforced” terminology is suspect. In a complex recursive system, the distinction is rather arbitrary. It’s pretty hard to sustain in the face of the long-range observation that a dominantly CO2 atmosphere in deep pre-history was steadily and gradually turned into rock and hydrocarbons (mostly by life), and is now trickling back into play, at VERY slow relative rates.

  27. typo: “long considered that ….”

  28. “rather we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models ….”

    Theorem: The steady-state dissipation of a thermodynamic system due to an energy flux between two isothermal surfaces equals the maximum rate of work possible for a Carnot engine operating between these same temperatures given the same energy input.

    from which it directly follows that the maximum temperature change possible for a 3.7W/m2 forcing is 1.44K. This is merely a limit, not a solution. Unfortunately, the theorem’s derivation is mathematical, not rhetorical.

    pdq

    • quondam said, “from which it directly follows that the maximum temperature change possible for a 3.7W/m2 forcing is 1.44K. ”

      Yep, a scientist has gotta know his limitations :)

  29. Dr Curry: “I don’t see how this statement by Leake is misleading”

    Jonathan Leake: “The implication was that temperatures would rise steadily, not with 15-year gaps”

    IPCC: “The rise will not be steady because of other factors.”

    • Here’s a reliable 30 year (multi-decadal) satellite record:

      http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_January_2012.png

      So what’s your timeslice?

        Ian, how about looking at a time slice of the last 15 years? I calculate from the data (UAH data here for lower troposphere) a linear regression trend of 0.085 C/decade, with a 95% confidence range of 0.023 to 0.147 C/decade. (Simple white noise model calculated with Excel; a more sophisticated model taking autocorrelation into account would give wider error bars.)

        Or the 30 year trend? 0.165 C/decade, with a 95% confidence range of 0.144 to 0.187.

        So that data suggests that the short term 15 year trend is indeed slower than the longer 30 year trend, though the support of that inference is not particularly strong. It also confirms that the trend is still for warming, whether taken over either 15 or 30 years.
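
        A minimal version of this calculation can be sketched as follows (synthetic monthly anomalies, not the actual UAH series; as noted above, a white-noise error model understates the uncertainty when residuals are autocorrelated):

```python
import numpy as np

def trend_with_ci(t_years, y, z=1.96):
    """OLS trend in deg C per decade, with an approximate 95% confidence
    interval under a white-noise error model (autocorrelated residuals
    would require wider error bars)."""
    t = np.asarray(t_years, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # standard error of the slope for independent, identically distributed residuals
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean()) ** 2))
    per_decade, half_width = 10 * slope, z * 10 * se
    return per_decade, per_decade - half_width, per_decade + half_width

# 15 years of synthetic monthly anomalies: a 0.1 C/decade trend plus noise
rng = np.random.default_rng(0)
t = np.arange(180) / 12.0  # months expressed in years
y = 0.01 * t + rng.normal(0.0, 0.15, size=180)
trend, lo, hi = trend_with_ci(t, y)
print(f"trend {trend:.3f} C/decade, 95% CI [{lo:.3f}, {hi:.3f}]")
```

        Whether a short trend really differs from a longer one then comes down to comparing the two confidence intervals, which is the comparison being made here.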

    • Latimer Alder

      @louise

      Headline:

      Global warming is bringing Thermageddon upon us. Models show doom is inevitable. Humankind is releasing deadly carbon dioxide. Ice caps are melting, polar bears drowning. Our children will face a much hotter future. Repent now!

      Small print:

      Subsection 14, para 3, clause 71 (unless amended by the IPCC politburo at their annual congress) in a 2000 page document

      ‘The rise will not be steady because of other factors’

      See the difference? Or do you sell dodgy insurance policies door-to-door?

  30. Meanwhile James Delingpole reports about a German ex-minister of the environment, who also observes that “Seit 12 Jahren ist die Erd-Erwärmung gestoppt!” (“Global warming has been stopped for 12 years!”)

    Which elicited Benny Peiser to remark:

    “Imagine if George Monbiot were suddenly to declare himself a climate sceptic. That’s how massive this story is!”

      • He found hundreds of errors. When he pointed them out, IPCC officials simply brushed them aside. Stunned, he asked himself, “Is this the way they approached the climate assessment reports?”

        Vahrenholt decided to do some digging. His colleague Dr. Lüning also gave him a copy of Andrew Montford’s The Hockey Stick Illusion. He was horrified by the sloppiness and deception he found.

        “Stunned.” “Horrified.” Yes, if you read this book you will be appalled, astounded, dumbfounded, horror-struck, overwhelmed, shocked, etc. at the lies, fraud, deceit, dishonesty, inaccuracy, misrepresentation, etc.

        Well, maybe not “shocked”, since that might bring to mind Captain Renault’s “I’m shocked, shocked” in Casablanca.

        Someone should write a computer program to crank out reviews in this vein of books and articles critical of climate scientists like Santer, Jones, and Mann, in case that’s not what’s already happening here. The thesaurus offers plenty of good words, and there’s a vast range of suitable phrases, with “should be incarcerated for wasting billions of dollars of taxpayer money” and “the biggest fraud since Piltdown man” barely scratching the surface.

        We’ve been here before, plenty of times by now. Those who regard this sort of thing as a contribution to our deeper understanding of the climate have a way of resolving scientific differences that is rarely practiced in serious scientific circles.

      • Vaughan Pratt: We’ve been here before, plenty of times by now. Those who regard this sort of thing as a contribution to our deeper understanding of the climate have a way of resolving scientific differences that is rarely practiced in serious scientific circles.

        Ah. but those who regard this sort of thing as a contribution to our deeper understanding of climate-related political debate are making a serious point that should not be ignored.

      • @MattStat Ah. but those who regard this sort of thing as a contribution to our deeper understanding of climate-related political debate are making a serious point that should not be ignored.

        Matt, what are you saying here? Apologies in advance if I’ve misunderstood you.

        I will grant you that political debaters should not ignore what to them is a serious point. But how about scientists who don’t feel they have anything to offer the political debate? Are they expected to understand the bitterness of the global warming pill, or explain the consequences of not taking it?

        Wouldn’t science be better off if those who had never come near making it to the debate team, but who had gotten science grades good enough to get them into a good or even great school, were allowed to continue what they find themselves good or great at, and let those good or great at political debate focus their talents on the climate debate?

        In a fair fight, those who don’t accept AGW could go about proving it false by finding competent gladiators for their side and challenging the other side to match them with their best gladiators. Courts of law are organized around that principle.

        But that’s not how the AGW protesters have been going about it. Instead of issuing a challenge to the other side they’ve created their own kangaroo court by kidnapping those scientists they figured would be most hapless when out of their element, arming them with the same weapons their gladiators were trained on, ridiculing them, and then demanding before a massed crowd of onlookers that they show themselves undeserving of that ridicule.

        The crowd that loved the Roman circus 2000 years ago is just like the crowd today that screams their approval when they see blood drawn in the arena, literally then but figuratively today. It’s like throwing Christians to the lions, with the climate scientists as the Christians and the faux scientists as the lions. Lions may not be competent scientists, but they’re far from dumb animals.

        Today’s crowd can’t see the blood, but they can still smell it, and they love it!

        Matt, would you call this a fair fight?

      • Vaughan Pratt: Matt, what are you saying here? Apologies in advance if I’ve misunderstood you.

        Oops. I think we’re stuck in the mud here. You wrote something indirect, I think, with your Thesaurus reference, and my reply was indirect. Two wrong turns don’t make a right turn.

      • Sorry about that, Matt. For my 2013 New Year’s Resolution I think I’ll resolve to replace every pronominal reference in what I write with its referent — oops, I mean the referent of that reference. ;)

    • Latimer Alder

      ‘For you George, the warm is over’

  31. Vaughan Pratt

    Just to be completely contrarian about recent temperatures, I don’t buy the connection with ocean oscillations since it seems to me they’ve been pretty flat since 1990.

    To quote Mug Wump on the long-running Amazon discussion group Global warming is nothing but a hoax and a scare tactic, “It’s the Sun, stupid”, which he repeats ad nauseam.

    We’re just now coming out of an odd-numbered solar cycle, namely 23, and embarking on 24. When exiting the even ones the temperature doesn’t go down much (no idea why, but it seems to be correlated with the magnetic alignment of the solar wind—this phenomenon has been going on at least as long as the 162-year HADCRUT3 record, eight evenly spaced instances). Also the exit from 21 was quite weak. 19 was stronger, but the last exit comparable to 23 was 17, which from 1940 to 1950 went down an impressive 0.2 °C after factoring out all other thermal impacts. In comparison cycle 23 only went down about 0.16 °C, not as strong as cycle 17 but enough to almost exactly cancel the CO2-induced warming, while the ocean oscillations stayed out of the picture as noted above.

    Meanwhile the latter hasn’t gotten any weaker, and adding in the likely rise for cycle 24 should produce an impressive amount of warming during the decade 2010-2020! Also I don’t expect the ocean oscillations to remain flat for much longer, but to start going up after an extremely cool spell in the early 1970s, which will add yet further to the temperature in 2020.

    Just my two cents (three if you count the ocean oscillations). Climate skeptics should feel free to chime in with their customary “expect an impressive amount of cooling during 2010-2020.” Check back here in a decade to see who was right.

    • Chief Hydrologist

      You’re a bug eyed loony bin. The only thing you’ll be checking into in a decade is a rehab clinic. If you want to provoke ding bat arguments for your amusement in response to your own deliberately ding bat arguments, you’ve come to the wrong place. This is such a shallow facade that it shows your utter contempt for lesser mortals who don’t share and can’t possibly appreciate your inestimable worth. I think you’re a ding bat with pretensions of idiocy. You are not even an idiot – you pretend to be an idiot when in reality you are just a brain fried chimpanzee. You think you can play these games for your amusement and the bamboozlement of the herd. Everyone else is too polite or too disinterested in your crapola to call you on it – but I will tell you what a crapulous piece of work you are with no dissembling at all.

      • Chief Hydrologist

        Oh have I got you wrong? You really are that stupid? I don’t think so. You have been playing this stupid game all over the threads. But – hey – sorry if you really are that stupid.

      • Hey Chief,

        Calm down.
        Currently scientists figure all their laws and theories are absolutely correct (even though their models are crashing and burning).
        What I find amazing is that as soon as some scientists come out with a calculation, a model is built around it and it is used instead of our real planet’s parameters.

        Do you know what 48 degrees latitude is?
        It is actually a very important number!
        This is where velocities and centrifugal force separate, as the angle of the planet in rotation is too steep and too slow to pull water south.
        If you were to actually generate an orb and rotate it slowly while pouring water, the water would always want to go to the poles. Rotate it quickly and water flies off at the equatorial region, or anyplace at a 90 degree angle to the axis of rotation.

      • Chief Hydrologist

        It is a hoax guys – he is playing you for fools to draw out all the mad theories around the place. It is a deliberate game to discredit – in what passes for his mind – the blog.

    • Vaughan,

      I don’t buy into the ocean oscillations either.
      Now, if you were instead looking at the salt density changes of the ocean…
      They do have an effect on the evaporation and precipitation on a rotating planet that has many different velocities.
      http://www-pord.ucsd.edu/~ltalley/sio219/curryetal_nature2003.pdf

      These theories are all temperature data related and do not include any parameters in motion or with physical changes. Strictly temperature data.
      The models are huge mistakes… here is an example:
      “Scientists say that the planet’s axis has shifted based on their models.”
      Okay, let’s look at this with the actual planet. The core and axis are deep in the planet and incredibly dense. The crust floats on a magma cushion.
      The actual event is that the crust has shifted and NOT the axis.

      Science has generated many idiocies such as this, from not reviewing science theories as technology changes to the fact that 95% of science is still considered unexplored. Yet no one wants to look at anything which may affect the current consensus and the funding generated.

      • incandecentbulb

        It is interesting he would quote Mug Wump – ‘AGW is nothing but a hoax,’ thread on Amazon because that thread is immortalized on Board Reader as more emblematic of censorship.

        For example, you see on Board Reader–e.g., “As Dr. Pielke, Senior has said, in a period when the oceans are cooling there is no global warming during that period. The oceans have been cooling according to the same methodology that the global warming alarmists would presume to use to elevate their conjecture of man-caused global warming from superstition to …”

        But when you click on the link to the Amazon thread you see the above post was deleted by Amazon based on complaints from the Amazon community.

      • When the ocean is warm and the Arctic is open, it snows more, moving water mass from the oceans and adding ice mass on land, and the axis does shift.

        It is interesting he would quote Mug Wump – ‘AGW is nothing but a hoax,’ thread on Amazon because that thread is immortalized on Board Reader as more emblematic of censorship.

        Was this anything more than some disgruntled commenter complaining at some point on Board Reader that Amazon kept deleting their comments? You portray it as a universally accepted fact, which is news to me. “Emblematic of censorship” is unsupportable libel.

        What amazes (and sometimes even annoys) me about that long-running thread (heading towards 40,000 comments?) is just how few comments Amazon deletes from it. Incandecentbulb, can you name even one climate blog that deletes fewer comments? Do you really believe Climate Etc. deletes fewer objectionable comments than Amazon? (Judith, have you made that comparison?)

        On Tamino’s Open Mind for example I’ve tried posting what I (like almost everyone who posts anywhere) had considered to be detailed and irrefutable scientific facts. This was not to make a nuisance of myself there, mind you—I hadn’t even heard of that blog before—but only to defend myself against attacks on me that had been posted there, that weeks later had been brought to my attention, and that were clearly unsupportable. The moderator apparently mistook me for a “zombie” of the kind Steve Sullivan is allergic to, whose presumed goal in life is to make the lives of AGW alarmists miserable, and deleted what I wrote saying that he “wasn’t going to argue with me” while declining to retract his attacks.

        That Tamino thinks “Open Mind” is an appropriate name for his blog reflects poorly on his understanding of the concept.

        Both sides of the climate debate maintain equally closed-minded blogs, such as WUWT, Greenfyre’s, Bishop Hill, RealClimate, JoNova, ScienceOfDoom, etc. On all these blogs, if you disagree with their basic premises, then no matter how logically consistent the basis for your disagreement there’s an automatic presumption of guilt until proven innocent. While I have less experience with Lucia’s The Blackboard and Tamsin Edwards’ AllModelsAreWrong, they seem better in that regard.

        But I digress. My main point was just to defend Amazon against what seemed to me a particularly unjust criticism. (Disclaimer: through no fault but my own I have no interest, vesting or vested, in any part of Amazon itself.)

    • Vaughan Pratt: Check back here in a decade to see who was right.

      I expect everyone will be doing a lot of checking back.

  32. Color me firmly in Hypothesis II territory, which few seem to have staked out. Well, we will see – it is going to be interesting. The IPCC types are certainly resisting shifting toward this ground, though it would seem a natural shift in the face of future lack of warming (it describes my thinking to some extent). Meanwhile, the chaos guys may yet rule the day – but I’m betting against them. My money’s on flatness for the rest of this decade. Check in with you all in 2020.

    • billc, you are such a rebel :) I was thinking a comparison of II to III would be a good approach. You are still going to end up with a range of about 0.52 to 5.2 by latitude with a mean of about 1.48 C.

      That’s just my estimate of course, I should leave the cipherin’ to the real mathematicians :)

      • Capt and others – in all seriousness, what are the high-level physical phenomena by which the difference between hypothesis II and III would manifest itself?

        Oceanic regime changes leading to large shifts in cloud cover might be a candidate?

        billc, the way I see it, II is like the old farmer’s almanac. The better the past data and the longer the records, the better it is for making predictions. III is geared toward determining the changes and the causes of the changes. Both have limits, volcanoes and stuff like that.

        As long as both don’t have the same limits, comparing the two should highlight anomalies, the volcanoes and stuff, to improve the efficiency of each.

        There will never be a perfect solution with either method, but comparison should improve the degree of confidence and point out the more significant unknowns.

      • There will never be a perfect solution with either method

        Oh, such a pessimist… ;)

  33. Judith.

    Atmospheric circulation is an oxymoron to temperature data as it is the movement of our planetary gases.
    Velocities and the centrifugal force of our planet have generated a very fascinating phenomenon of circular motion. From the creation of snowflakes to the creation of tornadoes, all require circular motion.

  34. I have read the lead into the thread, and most of the comments, and I confess I am unimpressed. It seems to me that there are two vital issues, neither of which seemed to have been looked at.

    The first issue is, has anyone detected a CO2 signal amongst the noise of the temperature/time graph? I have seen no evidence of ANY CO2 signal. If it is there, then where is it?

    The second issue to me is the utility of any hypothesis. The reason for hypotheses is to be able to predict what will happen in the future. This then provides a basis for determining which hypothesis is likely to be correct. So given that there are three hypotheses, what do these predict will happen to global temperatures into the future? Then we can get the future data and compare prediction with actuality.

    It seems to me that the most likely hypothesis is that there is no CO2 signal; increasing CO2 levels have a negligible effect on global temperatures. What we are witnessing is temperature governed by a series of phenomena which affect temperature, most of which we simply don’t understand.

  35. incandecentbulb

    The end point is where we are now: a continuous and relentless increase in the government-education complex has led to a runaway power grab and war on the productive.

  36. incandecentbulb

    Every movement and way of thinking and acting that takes on enough gravity to be named will ultimately be analyzed based on ‘trends, change points & hypotheses’ but only after-the-fact by dispassionate chroniclers of the past. We can only guess about the future but my guess is that years from now AGW theory will be seen as the Chevy Volt of science.

  37. Vaughan Pratt


    Check back here in a decade to see who was right.

    How about having a bet with me?

    I say the current global mean temperature record for 1998 for hadcrut3 will not be exceeded in the next three years (2012, 2013 & 2014).

    • wtf.

      that’s not very daring.

      I say the current global mean temperature record for 1998 for UAH will not be exceeded before 2020.

      why UAH? because HadCrut3 is not going to be updated. it’s moved to #4 or whatever. It may be adjusted upwards and get a closer match to GISS. Spencer has indicated a small downward adjustment for very recent temps in the new version of UAH to come soon.

    • I’m young.

      Since it seems unlikely that here in the US we are going to change much in the way of policy, AGW or not, for quite a while, I can wait until 2020 or beyond.

      Sure I’ll take on shorter term bets (in quatloos of course) but they are less important. I guess it’s like all the little races that happen before the Derby, versus betting on the Derby itself, not that I’ve ever been.

    • Chief Hydrologist

      Girma,

      This post, like many others from Pratt, is simply a hoax. It is something he made up for his own amusement at the simple minded bloggers who respond.

      Do you want to give him the satisfaction?

      CH

    • Vaughan Pratt

      @Girma How about having a bet with me?

      Although I’m not into online betting, Girma, I don’t mind competing with you in the climate futures market based on our respective forecasting skills.

      However I don’t understand why you would want to bet with me on this question. Are you very confident you would win the bet, or do you have some other reason?

  38. A question for hypothesis IIIers,

    External forcing (AGW, solar) will have more or less impact on trends depending on the regime, but how external forcing materializes in terms of surface temperature in the context of spatiotemporal chaos is not known.

    How is that statement reconciled with the trend of significant warming over the entire 20th century?

      With a prolonged cooling regime, negative PDO, AMO and reduced solar, small, long term impacts increase in relative significance. Conductive cooling, for example, decreases less than radiant forcing, since one is a 4th power function and the other is nearly linear. The conductive impact may only be 1/20 of the radiant, but over 20 times the period it balances the radiant reduction.

      It is the total energy transfer over each time scale that is the issue.
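
      The “4th power versus nearly linear” contrast can be made concrete with the Stefan-Boltzmann law; the two temperatures below are illustrative values only (roughly the effective emission temperature and the mean surface temperature):

```python
# Radiant emission follows sigma * T^4, so its response per kelvin,
# d(sigma*T^4)/dT = 4*sigma*T^3, grows steeply with temperature,
# whereas a conductive flux k * (T1 - T2) responds linearly.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiant_response_per_kelvin(T):
    """Derivative of blackbody flux with respect to temperature at T kelvin."""
    return 4 * SIGMA * T**3

for T in (255.0, 288.0):
    print(f"T = {T:.0f} K: dF/dT = {radiant_response_per_kelvin(T):.2f} W/m^2 per K")
# roughly 3.8 W/m^2 per K at 255 K versus 5.4 W/m^2 per K at 288 K
```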

      • How is that statement reconciled with the trend of significant warming over the entire 20th century?

        Joshua, CO2 increased in the last 100 years rapidly with respect to the multi-century feedbacks. The longer term, multi-century feedbacks are trying to catch up. If we continue producing more CO2 at the same rate, the 20th century trend would hold for the next century, with the same natural variation imposed on the trend, Girma’s plots.

        Problems are, the starting point for the last century, was that at the natural variation mean? At what point are we really on the CO2 forcing curve? And what longer term natural variables of significance exist?

        If the 1900 to 2000 mean is the true global temperature average, then there has been about 0.4 C of warming per century, or 0.04 C per decade. That does not sound as frightening as 0.2 C per decade, does it?

      • cap’n –

        That does not sound as frightening as 0.2 C per decade, does it?

        That is a separate issue. Don’t let your partisan focus drive your analysis.

        The longer term, multi-century feedbacks are trying to catch up.

        When I think of something “trying to catch up” I think of a person trying to make a convincing argument or my dog trying to get me to take her for a walk (as she’s doing right now, in fact).

        I have a harder time understanding how multi-century feedbacks can “try” to do anything.

        Problems are, the starting point for the last century, was that at the natural variation mean?

        Even if the impact of ACO2 is stronger than the most concerned predict, human impact on the environment will not affect the mean temperature of the planet for a very, very, very long time. The mean will most significantly be determined by the temperature of the planet for billions of years prior to human existence.

        What I’m trying to understand with some level of specificity (I know it’s tough to be specific without going over my head) is how Hypothesis III is reconciled against the longer-term trend of temperature increase over the 20th century.

        If you prefer comments from those who are scientifically literate, I’ll quote from Steve’s comment below:

        But even if correct it does not mean that the long term underlying trend is not forcing dependent.

      • sorry….”…longer term trend of temperature increase over the 20th century…”

        Joshua, there is nothing partisan about that comment. 0.2 C is a worst case, 0.04 is a best case; the range is likely between the two without some unexpected event.

        “Catching up” is a fact of life in a dynamic system. When you hit the brakes on your car, you have to allow for stopping distance that changes with your velocity, road conditions, tire conditions and whether the girl on the side of the road is cute or not :) (reaction time).

        Using the car for another analogy: if you start it once a week, it starts; if you ignore it for a month, it may start; ignore it for a year and you need a mechanic; ignore it for 5 years and you really need a mechanic. Entropy is a bitch!

        Tsonis said that the CO2 signal is superimposed on a longer term trend, which is correct. There is an impact due to CO2. There is also an impact due to agriculture, development, deforestation, black carbon etc. I have no clue how much is due to each, but some, like agriculture, seem to have improved conditions and some, like development, seem to have made things worse. I just want a better feel for the impact of each before I decide the best place to spend the money. Right now, it looks like dealing with black carbon and land use provides the bigger bang for the buck.

      • cap’n –

        0.2 C is a worst case, 0.04 is a best case, the range is likely between the two without some unexpected event.

        Except if “the CO2 signal is superimposed on a longer term trend,” and we can’t really determine what is causing that longer term trend (and thus can’t determine at what point that imposition of the ACO2 signal will be swamped by long term trends), and we have growth in ACO2 emissions, wouldn’t we expect that the magnitude of the impact of the ACO2 signal will increase?

        In the long run, we’ll all be dead. Looking at long term trends, (say, over billions of years), the signal of ACO2 would, I imagine, not be detectable. That doesn’t really speak to the importance of short term trends to people who live lives orders of magnitude shorter than billions of years.

        Anyway, I’ll read over your responses again and see if I can manage to understand the hypothesis.

        What the actual mean is, is a big deal.

        http://redneckphysics.blogspot.com/2012/02/fruit-cocktail.html

        That is a comparison of the Central England Temperature to Siberian tree rings on the Taymyr peninsula. There is a new study out suggesting that the Little Ice Age was caused by tropical volcanoes. England is influenced by the Gulf Stream current, so it is a fairly decent indication of ocean heat content, not great, but not too bad. The Taymyr tree rings are a fairly good indication of growing conditions near the Arctic. The only time that both seem to jibe is after 1814 or so.

        http://redneckphysics.blogspot.com/2012/01/history-of-modern-agriculture-and.html

        The industrial revolution was the birth of the Agricultural revolution. Lots of land was cleared for wheat following the inventions of the steel plow and wheat combine. Right now, 1% of the total surface of the Earth is planted in Wheat, Rice and Corn, the big three grain crops. A doubling of CO2 will cause about a 1% change in forcing.

        Do you think it is possible that agricultural land use could be responsible for 50% of the warming since 1814?

    • What trends? Post hoc data fitting will not be much help if the data observations are the result of non-ergodic influences and cannot be used for prediction.

      External forcings can only be measured and recorded and checks made to detect anomalies in the temperature and other data sets which may result from these forcings.

      Given that the 20th century is a relatively small segment of the temporal continuum of Earth’s climate over the past 2-3 million years, how significant indeed is the warming trend that you assert?

      • Peter –

        Given that the 20th century is a relatively small segment of the temporal continuum of Earth’s climate over the past 2-3 million years, how significant indeed is the warming trend that you assert?

        See my comment above to cap’n.

        http://judithcurry.com/2012/02/07/trends-change-points-hypotheses/#comment-166854

        In the context of 2-3 million years, it is obviously completely insignificant in a mathematical sense. But that doesn’t mean that it is completely insignificant w/r/t the impact of climate in how people live their lives.

        BTW – why limit yourself to 2-3 million years? Why not consider billions of years?

        Why not consider billions of years? I was really thinking of the period when humans evolved, but complex life forms certainly existed much earlier. The paleontological evidence also indicates that the Earth was much warmer than it is today and, of course, much colder at times as well.

        This seems to support the hypothesis that there are strong negative feedbacks at work keeping the Earth’s climate within certain bounds conducive to maintaining life. This could just be an accident, but many believe that it is not.

    • John Costigane

      Joshua,

      A better question is:

      Can all the warming, cooling phases in the 20th Century, be explained by ENSO?

      A possible answer is:

      The varying number and intensity of El Niño and La Niña events in each sequence could be the explanation.

      • John –

        Can all the warming, cooling phases in the 20th Century, be explained by ENSO?

        Wouldn’t that still leave the question of what would explain the longer-term trend over the entire century?

      • John Costigane

        Joshua,

        I understand your question, but you are assuming co2 is the answer. Better to be objective in questions about science.

        CO2 is possibly part of the answer. I hope Tamsin can bring some light to the controversy, noting that ENSO is part of some climate models.

      • John –

        I understand your question, but you are assuming co2 is the answer.

        Actually, that’s not quite accurate. I think that within a range of probabilities, ACO2 might be an answer. As I understand it, the “consensus” opinion is based on quantified probabilities of that theory of cause-and-effect.

        Now I can understand why you, as an individual, might think I’m making assumptions that I’m not making. But when it happens in an often repeated pattern in these pages, I have to question why that happens.

        I would imagine that there is some combination of factors in play – but considering probabilities, I’d have to guess that one of them, at least sometimes, is a willful intent on the part of some “skeptics” to impose certainty onto statements I make that don’t express certainty. I see it happen often when some “skeptics” misrepresent the “certainty” of the “consensus” perspective on AGW. You know, the whole “They said that the ‘science is settled’ kind of meme.”

        Again – I’m not saying that I put you into that category. I’m just wondering why a mistaken assumption on your part (about what I assume) is something I find so frequently.

      • Joshua, I don’t think it’s willful; I think people project onto you from others perceived to be on your team, like Robert, Andrew Adams, etc. Heck, on Collide-a-scape you told Michael Tobis you were on his team. Liar ;)

      • billc –

        I think that maybe sometimes it happens because I set people up (so I can nail them on making false assumptions).

        But either way, it doesn’t reflect very well on “skeptics” as a group. Either they are suckers easily set up by someone of inferior intelligence, or they are prone to false generalizations rooted in inattention to detail.

        Tobis is on my tribe in some ways, and not in other ways. Kind of depends on how you define tribe.

        And of course, there’s always the “I wouldn’t want to be a member of any group that would have me as a member” line of thinking.

      • Naw you’re much more of a humanitarian than Tobis, snarky as you may be.

      • Latimer Alder

        @joshua

        If you want to keep talking about yourself and analysing people’s reactions to you, I believe that there are plenty of practitioners called ‘therapists’, ‘shrinks’ or ‘loony doctors’.

        But they normally charge a fee for their consultations. I don’t believe that this blog is the right place to come instead for free advice.

        The clue is in the title. It is ‘Climate Etc.’, not ‘Joshua Etc.’

      • Latimer –

        If you want to keep talking about yourself and analysing people’s reactions to you,…

        When people consistently make incorrect assumptions about what I assume, I comment on it. I think that the pattern is instructive. When people address comments to me about my assumptions or motivations – as you have done in the post above, I respond.

        Here’s a little logic question for you. If you think that I shouldn’t be responding to comments that people address to me, what is the single most effective thing that you can do in response?

        Ponder that a bit and get back to me with an answer. I’ll tell you if you’re right.

      • @joshy

        When the subject is yourself, I can only rarely be arsed to reply. It does not sufficiently engage my interest.

        The polite word for ‘shrink’ is psychotherapist. I’m sure Yellow Pages will have a section for your community. Or perhaps ‘advanced psychotherapists’ would be more appropriate for serious disorders.

      • Latimer –

        Your response proves that you failed the quiz.

        But don’t let that get in the way of insulting me. Don hasn’t been around much lately, and I miss his brand of comments.

      • @jos….

        Zzzzzzz……….

    • Joshua, a single century only represents 5% of the last two millennia. Relax, then take a deep breath. Don’t forget to exhale now. Feeling better? Good.

    • John Costigane

      Joshua,

      I explained that ENSO could have caused all the ups and downs, including the peak of 1998, which answers your ‘biased’ question.

      • John –

        My question is whether ENSO (or other short-term variables) could explain all the (individual) ups and downs and still not explain the longer-term trend.

    • John Costigane

      Joshua,

      Could the last upslope, before the levelling-off, be unique? That is an unknowable at present, since our knowledge of natural variability is incomplete.

    • Reconciling
      “but how external forcing materializes in terms of surface temperature in the context of spatiotemporal chaos is not known.”

      Point is, it may not be known or knowable.

  39. Steve Milesworthy

    “High frequency ‘noise'” plus uncertainty in the data can plausibly account for the dip around 1910 and the bump around 1940. Even if it only accounts for a small part of these two “features” (say 0.2C for 4-5 years) the perceived excessive “warming 1910-1940” and “the flat trend between mid 1940′s and mid 1970′s” look far less significant.

    And “explaining the flat trend for the past 15 years” is easier because the trend isn’t flat and the period is very short.

    Hypothesis 3 is currently at the level of numerology. But even if correct it does not mean that the long term underlying trend is not forcing dependent.
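[Editor's note] The point that "the period is very short" can be made concrete: the standard error of an ordinary least-squares trend fitted to 15 years of noisy monthly anomalies is of the same order as the small 1997-2011 trends quoted in the article. A minimal sketch on synthetic data (the trend and noise levels here are illustrative assumptions, not any actual dataset):

```python
import math
import random

def trend_with_se(x, y):
    """OLS slope and its standard error (white-noise assumption; real
    monthly anomalies are autocorrelated, which widens the error)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    rss = sum((yi - ybar - slope * (xi - xbar)) ** 2 for xi, yi in zip(x, y))
    return slope, math.sqrt(rss / (n - 2) / sxx)

random.seed(1)
t = [m / 12.0 for m in range(15 * 12)]        # 15 years of monthly time steps
true_trend = 0.0034                           # ~0.051 C over 15 years, in C/yr
series = [true_trend * ti + random.gauss(0.0, 0.1) for ti in t]

slope, se = trend_with_se(t, series)
print(f"fitted trend: {slope:+.4f} +/- {se:.4f} C/yr")
```

With 0.1 C of monthly noise the standard error works out to roughly 0.0017 C/yr, the same order as the fitted trend itself, and serial correlation in real data would widen it further.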

    • The problem with talking about recent trends, i.e. the 20th century, without putting them in the context of the warming coming out of the Little Ice Age (e.g. http://www.scirp.org/journal/PaperInformation.aspx?paperID=3217 ) is that it attributes the results of a long-term trend to short-term variations such as 1970-2000.

      What we are discussing is the power of CO2 to overwhelm all other factors. The power of CO2 is suggested to be a homogenizing influence. If CO2 is the overarching factor, then any variation will have little and diminishing impact over time. Longish periods where temperature does not follow the CO2 curve call CO2’s influence into question. That is what is essentially being argued. You may spin as many hypotheses as you wish to explain it, but the result of the data is to suggest that the power, i.e. the climate sensitivity to CO2, is less than proposed by many.

      Perhaps a more interesting question, one that attacks the heart of the question of homogeneity, is the large and growing divergence between temperatures in the Northern Hemisphere and the Southern: http://www.woodfortrees.org/plot/hadcrut3vnh/from:1970/to:2011/plot/hadcrut3vsh/from:1970/to:2011 . Interestingly, we see this divergence accelerating from 1990 to 2011 ( http://www.woodfortrees.org/plot/hadcrut3vnh/from:1990/to:2011/trend/plot/hadcrut3vsh/from:1990/to:2011/trend ), and since 2000 the southern trend line goes negative ( http://www.woodfortrees.org/plot/hadcrut3vnh/from:2000/to:2011/trend/plot/hadcrut3vsh/from:2000/to:2011/trend ).

      What is going on? Do the models not assume that CO2 is a global phenomenon? Where is the imposed homogenization? This seems an even greater blow than the failure of the global temperature to follow the models’ trend lines projected from the warming of 1970 to 1998.
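[Editor's note] The woodfortrees comparisons above are just pairs of least-squares trends computed over shared windows. A sketch of how one might reproduce them offline, assuming the NH and SH annual anomalies have already been parsed into year → anomaly dicts (the demo values below are invented to exercise the code):

```python
def ols_trend(pairs):
    """OLS slope in C/yr for a list of (year, anomaly) pairs."""
    n = len(pairs)
    xbar = sum(x for x, _ in pairs) / n
    ybar = sum(y for _, y in pairs) / n
    sxx = sum((x - xbar) ** 2 for x, _ in pairs)
    return sum((x - xbar) * (y - ybar) for x, y in pairs) / sxx

def hemisphere_trends(nh, sh, starts=(1970, 1990, 2000), end=2011):
    """Print NH and SH trends over successively shorter windows."""
    for start in starts:
        years = range(start, end + 1)
        tn = ols_trend([(y, nh[y]) for y in years])
        ts = ols_trend([(y, sh[y]) for y in years])
        print(f"{start}-{end}: NH {tn:+.4f} C/yr, SH {ts:+.4f} C/yr, "
              f"divergence {tn - ts:+.4f} C/yr")

# Invented anomalies with the NH warming faster than the SH.
nh_demo = {y: 0.020 * (y - 1970) for y in range(1970, 2012)}
sh_demo = {y: 0.008 * (y - 1970) for y in range(1970, 2012)}
hemisphere_trends(nh_demo, sh_demo)
```

For the invented series every window shows the same 0.012 C/yr gap; with real data the point of interest is how that gap changes as the window shortens.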

    • Vaughan Pratt

      “High frequency ‘noise’” plus uncertainty in the data can plausibly account for the dip around 1910 and the bump around 1940

      Is that really specific enough to qualify as an “account?” How do you get such precise dates out of such vague priors?

      The dates of the solar cycles are very precisely known, allowing dates like these to be obtained with at least an order of magnitude better accuracy.

      The declines after the odd-numbered solar cycles 13, 15, and 17 were all strong. Furthermore the bottom of the first of these, in 1908, was within 5 years of the deepest trough of the ocean oscillations since 1850, which was in 1913, which is why 1910 was so cold. And the highest peak of the ocean oscillations during the same 162 years was in 1938, a mere 25 years later, and three years after that, 1941, saw the hottest peak of the solar cycles in that period, namely number 17 (number 13 having been the second hottest back in 1899), whence the 1940 bump. (Note that I’m treating the so-called “ocean oscillations” as a single organic whole rather than as independent AMO, PDO, etc.)

      The second hottest ocean peak was 1877, which in conjunction with the fairly hot solar cycle number 11 peak accounts for the high temperatures around 1880.

      There is less to long-term global climate than meets the eye; in fact it is disturbingly simple, given our collective prejudices about the complexity of the climate. As one of the students pointed out at our weekly lunch meeting today, this is not so surprising when you consider that the global behavior of the Sun’s output is very simple compared with the very complex local behavior of individual sunspots. Climate is not just an incomprehensible muddle; it is a mix of simple big things and complicated little things, kind of like Gulliver strapped to the ground with lots of Lilliputians climbing over him.

      Regrettably I can’t say that shorter term phenomena like ENSO are simple: they’re well beyond my ken, and I’m happy for now to agree with anyone who wants to claim they’re complex. However I don’t see them as having any significant influence on phenomena of duration much more than a decade, and I therefore discount them as having little relevance to long term climate forecasts.
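[Editor's note] For readers who want to experiment with the "simple big things" idea (a slow trend plus a multidecadal oscillation), here is a minimal sketch that removes a linear trend and projects the residual onto a fixed-period sinusoid. This is only an approximate Fourier projection with an assumed 60-year period, not Dr. Pratt's actual analysis:

```python
import math

def detrend(t, y):
    """Return y with its OLS linear trend removed."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return [yi - ybar - slope * (ti - tbar) for ti, yi in zip(t, y)]

def oscillation_amplitude(t, y, period=60.0):
    """Approximate amplitude of a sinusoid of the given period in y(t),
    via projection onto sin/cos (exact only for whole numbers of cycles)."""
    w = 2.0 * math.pi / period
    r = detrend(t, y)
    n = len(t)
    c = 2.0 / n * sum(ri * math.sin(w * ti) for ti, ri in zip(t, r))
    d = 2.0 / n * sum(ri * math.cos(w * ti) for ti, ri in zip(t, r))
    return math.hypot(c, d)

# Synthetic record: 0.005 C/yr trend plus a 0.1 C-amplitude 60-year cycle.
years = list(range(1850, 2012))
temps = [0.005 * (y - 1850) + 0.1 * math.sin(2 * math.pi * (y - 1850) / 60.0)
         for y in years]
print(f"recovered amplitude: {oscillation_amplitude(years, temps):.3f} C")
```

On this synthetic series the recovered amplitude comes out a few percent below the true 0.1 C, the price of fitting a non-integer number of cycles; a proper joint least-squares fit of trend plus sinusoid would do better.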

      • Steve Milesworthy

        Attention has been drawn by Judith to three “features” of the temperature record without any formal assessment of whether the features are real or are just a figment of the mind being drawn to two cherries – one at around 1910 and another at around 1940.

        These features are being attributed to some sort of non-specific “regime change” that forces climate into a different state.

        But two of the features are accentuated only by two short-term periods of what could be “high frequency ‘noise'” which is apparently explainable by “Hypothesis I” – i.e. a short term period when some of the “natural” cooling or warming influences converge.

        If you look at datasets with different coverage (e.g. land only, northern or southern hemisphere only, US only) then these “features” look very different suggesting that they would be sensitive to changes in coverage of the datasets.

        None of this, though, takes anything away from ongoing warming driven by ongoing increases in forcing as Hypothesis 3 is only concerned with shifts away from the longer term forced trend. It doesn’t displace “AGW theory”.

      • Latimer Alder

        @steve milesworthy

        Talk me through the argument once again, please:

        Problem:

        We have two periods of apparent warming: 1910-1940 and 1970-2000. They have about the same slope and are of about the same length. They are both pronounced enough to be considered a trend.

        You then say that the first ‘could be attributed to short term periods of high frequency noise’, but are adamant that the second is due to some other cause (I guess CO2 emissions).

        Maybe Occam’s Razor is no longer fashionable, but when you have two very similar phenomena occurring within the same system, isn’t it a wise idea to have as your starting point the idea that they are the same thing happening twice? (I fully accept that your investigation may in fact show them to be different, but such examples will be the exception rather than the rule.)

        Because it seems to me that to really demonstrate a proper understanding of climate you have to be able to explain both periods with exactly the same rigour and within a comprehensive theory.

        You can’t really say …’We’ve done oodles of work on the recent stuff and have totally convinced ourselves that the only possible cause is CO2′ and then just dismiss the earlier period as ‘could be high frequency noise’. It will not take an Hercule Poirot to smell a rat and to conclude that your homework really hasn’t been done, nor your comprehensive (It’s all down to CO2, stupid) theory submitted to any sort of real test.

        So please explain once more how you come to two very different conclusions about the two periods in question. You may also recall that – as a one-time chemist – I just love to see experimental evidence rather than vague generalised hand-waving.

        Thanks.

      • Steve Milesworthy

        Latimer,

        The 1910 to 1940 warming was preceded by 0.2C of cooling and followed by 0.2C of cooling.

        Therefore, if the 1910 minimum were high frequency Hypothesis 1 cooling applied to a warmer background and the 1940 peak were high frequency warming on a cooler background then what you are left with is almost monotonic (forced?) warming from 1890 to 2011.

        Pre-war temperatures also have the potential to be noisier, because the sampling covered less of the globe and so is more prone to confluences: situations where natural variations drive cooling/warming in the observed locations and the opposite in the non-observed locations (cf. the difference between HadCRUT3 and GISTEMP).

        Post-war we simply have an ongoing trend of warming that is slower at the start of the period, then is fast in the 1980s and 1990s and is then slower – that’s 60 years.

        The two periods are not the same because the second period is longer, has a warmer base-line, is better observed and appears more consistently and more clearly in, say, the land-only data. So a naive application of Occam doesn’t seem appropriate.

        Of the two interpretations I prefer mine as being less speculative than assuming a “climate shift” of no known cause. And, again, the climate shift idea does not deal with the longer term warming trend.

      • …then these “features” look very different suggesting that they would be sensitive to changes in coverage of the datasets.

        Or sensitive to something else. Like urban stations growing (warming), then moving to rural airports (looks like a cooling), and then build-up around the airports over time (another warming).

        Here’s an example:

        Trend is 2.0C/century at the city station, but only 0.1C/century at the airport, which is moderated by the flat temperatures until the mid 1970s.

      • @steve m

        Thanks for the more detailed explanation.

        But your discussion of the 1910-1940 period still relies on two ‘if’s, with no supporting evidence, a general assertion about observational techniques a while back and then on your own preference for your explanation compared with any other.

        Fair enough, and you are, of course, entitled to your opinion just as anybody else is. But it’s not what I would call a rigorous proof. It is a plausible hypothesis, but unless I have missed an essential point it is no more than that.

        And this is deeply worrying. For we are told that the explanation of the more recent period of warming has been unequivocally shown to be primarily caused by the increase in CO2 concentration in the atmosphere. And from this proof follow all sorts of other scientific, political and economic consequences.

        I also opined that to show that the climate is really well understood, you need to be able to demonstrate that you have a numerically based explanation of both periods of warming that use the same conceptual model.

        But if the best explanation you can advance for Period 1 is so woolly and vague, how certain can we be of the answer for Period 2? Surely it should be almost trivially simple: you take the model you have for Period 2 (the unequivocal one with all the bits about CO2), rewind it back to 1910 and lo! If the model is right, out pops the curve for 1910-1940, the cooling after that, and so on.

        This is not difficult to do and the ‘correct’ answer would be powerful and persuasive evidence that you really have got a theory that effectively covers periods when CO2 could not be a strong influence. By implication it would also mean that your understanding of all the other possible influences would be pretty much on track.

        This is so obvious a thing to do that I am pretty bemused as to why it doesn’t seem to have occurred to anyone. You have all the tools and techniques, all the equipment, all the staff, all the expertise … and a success in that demonstration would indeed be a triumph for your theory.

        So why not? What is holding you back? Or is it one of those pieces of work that never get published for fear of ridicule? That maybe somebody somewhere did try it but couldn’t get the required answers? And rather than risk the wrath of the elders decided that the best thing was just to stay schtumm?

        As ever, value your views…..

      • Steve Milesworthy

        Latimer,

        Of course there are two perhaps unresolvable “ifs”. We cannot go back in time to see in more detail what happened in 1910 and 1940.

        However, we cannot therefore attribute it to regime change either.

        My point is, really, that the “ifs” are not in the slightest bit implausible. As Chris Ho-Stuart has pointed out below the divergences between the various temperature datasets in the past decade or so are as large as the divergence in the SST dataset that would be required to explain the two features that have given rise to the apparently similar length warming trend of 1910-1940.

        Until there is better evidence for “regime change” then theory 1 seems sufficient to me. I think Occam would agree.

        Evidence required for H3 would be what? A prediction of a change in some as yet unobserved feature of the climate? Observation (I use that word casually) of regime change events in high-end climate model simulations?

      • Latimer Alder

        @steve m

        (In haste as I have to go out to work tonight, brrrrr).

        I made a suggestion as to exactly how you could resolve the issue in a way that potentially gives you a win-win triumph for your theory. Easy enough to do, and you have all the resources available. If you can run a model for 1980-2000, you ought to be able to run it for 1910-1940 … and, if it is any good, you will be able to show good agreement with the old observations. If not……

        But if, for some reason unfathomable to those of us who pay the bills for your organisation, you find yourselves unable to do so, then yours is just one explanation among a multitude of others.

        I do not think it is implausible … indeed I’d probably lay a bob or two that it may be at least partly on the right lines. But the point I have been labouring to make is that having a plausible explanation only gets you to the starting line. It does not get you onto the podium. To do that requires a lot more work.

        Thinking back to my university viva days and the terrifying array of gowns and professorships there arrayed, I could just imagine the dialogue:

        Difficult examiner #1: Mr Alder, we understand that you have a theory about climate. Please describe it to us.
        LA: Well sir, I have done a lot of work on the period 1980-2000 and I can conclusively show, to my satisfaction, that it’s all due to the evil gas carbon dioxide. There are lots and lots of computer programmes that demonstrate this.
        DE#2: Indeed. We shall want to look at these very carefully. No doubt they are available to public scrutiny?
        LA: Mostly, sir.
        DE#3: Turning now to the similar period between 1910 and 1940 for comparison … what caused that warming according to your programmes?
        LA: Please sir, I don’t know, sir. I’ve never tried them.
        DE#1: Why not, young man?
        LA: Never occurred to me, sir, but it’s all down to the noise, isn’t it? Anyway, it was a dead long time ago and nobody cares.
        DE#2: Not very scientific, Mr Alder. We will adjourn to consider our verdict.

        (sorry gotta go…maybe more later)

      • Steve Milesworthy

        Latimer,

        Such model runs have already been done and are reported in the IPCC report. But they don’t help with short term variability because the natural variability of even two identical planets would cause temperatures to diverge.

    • Latimer Alder

      @steve milesworthy

      Is just ‘being plausible’ a high enough standard of proof? Because I’m sure that we could all come up with plausible explanations. Surely science – and especially experimentation – is the technique we have learnt to use to distinguish between explanations that are merely plausible and those that are actually true.

      Have you any better evidence than ‘plausibility’?

      • Steve, Latimer is making a similar argument to mine. You talk about “plausibility,” but without a concrete definition of “high” in “high frequency” and a concrete algorithm for converting priors into posteriors, the dates of highs and lows you point to are just handwaving based on direct observation of those lows and not an inference from correlated observations.

        Certainly one can say an observation is self-explanatory, for example the rising of the Sun each morning. However it is not customary to consider self-explanation as scientific explanation. For the latter one tends to look for a non-vacuous and quantifiable correlation with observations of other phenomena. Correlation with high frequency is non-vacuous but you haven’t quantified it.

    • The link to the image didn’t show in my previous comment. Here’s one more try:
      http://imageshack.us/photo/my-images/338/abileneraw1.gif/

  40. NOAA data clearly shows that when earth is warm, it snows more. More snow stops warming, every time. Look at the data for the past ten thousand years. Every warm period is followed by a cool period.

  41. “The implication was that temperatures would rise steadily, not with 15-year gaps. The existence of such gaps, the critics argue, implies the climate models themselves are too flawed to be relied on.”

    It always snows more when the oceans are warm and the arctic is open.
    They don’t have this in their models correctly.

  42. I’m surprised that Judith raises the question of whether or not global warming has stopped with presenting a graph like this.

    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.gif

    I’m not a climate scientist, but I think I can recognise a peak when I see one. And the temperature in this graph hasn’t peaked! Certainly I’d be very happy if my share investment graph looked like this too!

    PS haven’t we had this conversation before?

    • Sorry. Should be “without presenting a graph like this”

    • May not have peaked, but looks like a small plateau to me. Whether this is the summit or not remains to be seen. But there sure hasn’t been any pronounced upward trend in the last few years.

      Which raises the interesting question of what happened to
      a. cause the warming period of 1910-1940 and
      b. stop it.

      Because without bomb proof numeric explanations of those events – using exactly the same theories as are used to explain today’s events – it is clear that we do not have a solid understanding of climate, however much those professionally involved would wish us to believe that they have.

      • Vaughan Pratt

        I fully agree with Latimer on all these points.

      • I agree also. A trend change doesn’t have to be a change in sign; just a change in slope does nicely.

        Funny about the 1910-1940 period. I brought that up on RealClimate some time ago. I was not particularly impressed with the answer. It seems you can cherry-pick solar reconstructions that show a couple of tenths of a degree of warming, but not a couple of tenths of a degree of cooling: Lean 2000 for warming, Lean 2008 for no warming. Picking which one applies seems a bit complicated to me.

        Of course, climate science is a developing field. Polar amplification actually means mono-polar amplification. Climate science though, is definitely bi-polar :)

    • PS haven’t we had this conversation before?

      Heh.

      By what % would blog comments decrease if everyone were restricted from repeating conversations?

      • Just think of all the money the nation would save. Free, for nothink.

      • Latimer’s response is the best so far to your initial question, Joshua. I was going to try to respond but, since I declared my support for Hypothesis II (HII, etc., because I’m tired of typing that word), I didn’t. However, thinking about Captain’s response to my initial comment, I realize that I think HIII is informative and don’t wish to dismiss it, except to restate my expectation that the chaotic change points will not be dominant.

        Joshua, I think you should rephrase, because the problem with significant warming over the 20th century is the mid-century rise and fall, such that even the IPCC does not attribute early-century warming to ACO2.

        I do think that over the long term, ACO2 will increase its influence. But I am skeptical that we have proof of its significance yet. It seems likely that coming decades will show less warming than 1970-2000, IMO. The smoking gun for at least SOME contribution from CO2 is the magnitude of the 1970-2000 warming and our understanding of the relative strengths of forcings to date.

        I do allow for the possibility that chaos-based shifts will alter this picture. What I don’t have is the mathematical understanding to use this in any way, and it seems to me even the most math-literate here (eg Tomas Milanovic) don’t profess to be able to use it to make predictions.

        I do hope someone can help me understand what I referred to earlier as “high level physical manifestations” of these chaotic shifts. Again, ocean regime changes affecting cloudiness and therefore albedo, seem the most obvious, but I imagine there are others.

      • You cannot change a person’s mind by telling them something just once. You must tell them more than once and point them toward data that supports your position. Even that doesn’t work if they don’t read and think.

      • Boy, you can say that again.

    • The arctic is open, the snows have started and temperature is at or near the peak. Earth is about as warm as it usually gets in a modern warm period.
      http://popesclimatetheory.com/page27.html

    • gistemp…(snicker), gistemp…(guffaw), gistemp…(LOL), gistemp…(ROFLMAO)

      sorry, I tried but I just can’t say it with a straight face.

  43. Hi Judy- Excellent summary of this issue. I also agree hypothesis III is the most likely. In our paper

    Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38.

    we wrote

    “The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual,…It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate..”

    David Douglass is a leader on this subject, and I look forward to his next paper that will illuminate this issue further.

    • This is a really poor explanation for the extremely stable temperature cycle that has dominated for ten thousand years.
      Actually, it is no explanation at all.

      • Chief Hydrologist

        ‘Large, abrupt climate changes have affected hemispheric to global regions repeatedly, as shown by numerous paleoclimate records (Broecker, 1995, 1997). Changes of up to 16°C and a factor of 2 in precipitation have occurred in some places in periods as short as decades to years (Alley and Clark, 1999; Lang et al., 1999). ‘
        http://www.nap.edu/openbook.php?record_id=10136&page=10

        The explanation for abrupt changes in the last 10,000 years is that they have been smaller and less persistent than paleoclimatic abrupt changes. But they are there – and they are evident even within the extreme limitations of the instrument record.

        You have but to open your eyes to see.

        Robert I Ellison
        Chief Hydrologist

      • Vaughan Pratt

        You have but to open your eyes to see.

        Chief, I closed my eyes and right away I could see what you meant. ;)

        The first epigram here bears on this, as does Circa Survive here:

      • Chief Hydrologist

        I don’t know, Vaughan, old buddy – I quote the NAS and you link to a blogger called Chairman Pratt. There is a bit of a pointless asymmetry here.

        Robert I Ellison
        Chief Hydrologist

      • I was expecting the title of the song, “Close your eyes to see”, to pop up but it didn’t, you have to click on the “watch on YouTube” button to see it.

    • @rpielke
      “It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate..”

      Take out anything to do with human influence and I agree 100%. Human influence may well be a factor but a more holistic approach should be taken with respect to climate research.

      There is a strong need to better understand the underlying physics of the Earth systems in play and to explore the many dynamic relationships that the available data may show.

      • Peter said, “There is a strong need to better understand the underlying physics of the Earth systems in play and to explore the many dynamic relationships that the available data may show.”

        It is a touch wishful to model the impact of a 0.028% change in a mixed gas composition when you are not all that sure what the other 99.972% does. Course, that might just be a little simple minded on my part.

      • Vaughan Pratt

        Take out anything to do with human influence

        What do you mean by “take it out?” Do you mean that we should only study natural influences on climate and ignore anything humans might be doing?

      • VP..to clarify… To take out “human influence” I was merely referring to Roger’s sentence, so thus I could agree 100% with the sentence. Not to eliminate it altogether from further study! It certainly is a factor to be examined but not to the exclusion of numerous other (more important, IMO) factors.

      • Thanks for clarifying that, Peter.

        (FWIW, IMO the human one is more important, but I’m well aware that’s a distinctly minority view on this blog.)

        @cd: It is a touch wishful to model the impact of a 0.028% change in a mixed gas composition when you are not all that sure what the other 99.972% does.

        As far as trapping IR goes, which is the concern about CO2, we have an extremely accurate idea of what the other 99.972% does, thanks to the comprehensive HITRAN line spectra tables and our understanding of the current rates of growth of all the surface-temperature-relevant gases.

        But you knew that, so perhaps you had some other point in mind that I’ve overlooked.

  44. I. IPCC AGW hypothesis:
    This one will be proven wrong in the next decade(s). Since the annual atmospheric CO2 growth will decrease with the cooling, the consensus will be forced to rethink the “all of the CO2 increase is caused by anthropogenic emissions” hypothesis. If the decadal average for the 2010s is lower than the 2000s average (~2 ppm/year), the paradigm will have to shift. I assume the BAU emissions scenario. I predict lower than 1.5 ppm/year decadal average for the 2010s.

    Independently of the CO2 attribution, global cooling will disagree with the CO2GW hypothesis and the “climate sensitivity estimates”. Zero or infinitesimal or even negative is not ruled out. Not to forget nonsensical.

    II. Multi-decadal oscillations plus trend hypothesis:
    This is a bit undefined and fuzzy. I disagree with:
    – the large multidecadal oscillations (e.g. NAO, PDO, AMO) being unforced. There is some good evidence of the oscillations correlating with solar oscillations. Some of it may be system response, but it can’t be separated at this point.
    – the trend is linear and unchanging. The trend itself is another oscillation.

    III: Climate shifts hypothesis:
    I don’t really understand this one.

    My hypothesis:
    Most of the global climatic change is caused by solar oscillations. These are coupled with orbital oscillations (solar, Earth’s and other planets’). We know too little of the mechanisms and modulations of energy transfer between Earth and space (solar system mainly). We can NOT calculate the balance. It would be like taking the Drake equation seriously. Meanwhile, we can recognise the basic patterns and maybe even predict (multi)-decadal global climate changes.

    • I predict lower than 1.5 ppm/year decadal average for the 2010s.

      Ok, but now let’s use the CDIAC historical data for the past few decades for all anthropogenic CO2 contributions to the atmosphere (mainly fossil fuel consumption, flaring, cement, and land use), to work out how different the 2010s would look from the past five decades if your prediction came true.

      Since 1958 the steady rise in atmospheric CO2 comes very close to an equally steady 40% of what the CDIAC says the total human emissions of CO2 come to.

      You didn’t say anything about emissions decreasing, so let’s assume business as usual for emissions. If those don’t depart significantly from the curve, then at the 40% rate the average annual increase in atmospheric CO2 between 2010 and 2020 will be 2.6 ppm/year. (Currently it’s 2.35 ppm/year and by 2020 it will be 2.9 ppm/year.)

      For your forecast of a decrease from an average of 2.6 to an average of 1.5 ppm/year to come true, that 40% rate is going to have to drop to 40*1.5/2.6 = 23%.

      That is, you’re predicting that the 40% rate of retention of our emissions, which has remained very steady since we started measuring atmospheric CO2 carefully in 1958, will within the space of a single decade drop to 23%.

      Do you see the problem?
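Vaughan's arithmetic is easy to reproduce. Here is a back-of-envelope sketch; the 6.5 ppm-equivalent/year emissions figure is an assumed round number chosen for illustration, not an actual CDIAC value:

```python
# Back-of-envelope check of the airborne-fraction argument above.
# Illustrative numbers only, NOT actual CDIAC values.

def implied_airborne_fraction(atm_growth_ppm, emissions_ppm_equiv):
    """Fraction of emitted CO2 that must stay airborne to produce
    the given atmospheric growth rate."""
    return atm_growth_ppm / emissions_ppm_equiv

# Assume business-as-usual emissions of ~6.5 ppm-equivalent per year
# (picked so that 40% retention gives roughly 2.6 ppm/year).
bau_emissions = 6.5

historical = implied_airborne_fraction(2.6, bau_emissions)  # ~0.40
predicted = implied_airborne_fraction(1.5, bau_emissions)   # ~0.23

print(round(historical, 2), round(predicted, 2))
```

On these assumed numbers, with emissions held at business as usual, a 1.5 ppm/year growth rate would indeed require the retention fraction to drop from about 40% to about 23%.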

  45. AR4 was published in 2007 but can anyone identify when the prediction of 0.2º/decade was inserted into the document? That insertion date starts the clock, not the publication date.

    • Vaughan Pratt

      Are you thinking of 0.2 ºC/decade as any of unreasonably low, or unreasonably high, or about right?

      The last 18 years of the annualized BEST data, starting shortly after Pinatubo, show a steady increase of 0.362 ºC/decade that doesn’t contain any obvious indication of letting up. (Note that this is not over a mere decade but rather over Santer’s proposed minimum of 17 years.)

      Of course the BEST data didn’t appear until 2011, and it’s for land (which is where over 99% of humans live so it’s more relevant to us than sea temperature), but if it’s at all reliable it would appear to be showing that 0.2 ºC/decade is way too low by nearly a factor of two!

      Anyone who thinks it couldn’t possibly be increasing at 0.36 ºC/decade should be screaming bloody murder about the BEST data being totally fraudulent. Right? ;)

      • Vaughan Pratt

        (Sorry, I meant last 17 years, not 18 — 1993-2010 means Jan. 1993 to Jan. 2010 in Paul Clark’s WoodForTrees system.)
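A decadal trend figure like the 0.36 ºC/decade quoted above is just an ordinary least-squares slope over annual values, scaled by ten. A minimal sketch on synthetic data (a clean ramp, not the BEST series):

```python
# Ordinary least-squares trend from annual anomalies -- a minimal sketch.
# The data here are synthetic (an exact ramp), NOT the BEST series.

def ols_slope(years, temps):
    """Least-squares slope in deg C per year."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(temps) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps))
    den = sum((x - xbar) ** 2 for x in years)
    return num / den

years = list(range(1993, 2011))              # annual points, 1993-2010
temps = [0.036 * (y - 1993) for y in years]  # exactly 0.36 C/decade, no noise

trend_per_decade = ols_slope(years, temps) * 10
print(round(trend_per_decade, 2))  # 0.36
```

With real data the residual noise makes the estimated slope uncertain, which is why a minimum window (Santer's 17 years) matters.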

  46. Journal of Climate, Feb 2012, Surface Water Vapour & Temperature Trends (Isaac & van Wijngaarden, York University, Toronto)

    Conclusion:

    A reduction in relative humidity can occur even though water vapor pressure is increasing if temperature is warming sufficiently. Hence, decreases in relative humidity occur at stations experiencing the largest temperature increases in winter and spring as shown in Fig. 7. The strong correlation between increasing temperature and decreasing relative humidity trends agrees with that found by Vincent et al. (2007)

    Oooooops, so tell me again, how many GCMs have physics which match the observed reduction in relative humidity with rising temperature and the consequent negative water vapour feedback?

    • dude I don’t think the observed reduction in relative humidity with rising temperature implies a negative water vapor feedback, just a less strongly positive water vapor feedback than has been otherwise postulated.
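The distinction is easy to show numerically: relative humidity is vapour pressure divided by its saturation value, and the saturation value grows roughly exponentially with temperature. A sketch using the standard Magnus approximation (the numbers are illustrative, not station data):

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(e_hpa, t_c):
    """Relative humidity (%) from vapour pressure and temperature."""
    return 100.0 * e_hpa / saturation_vapor_pressure(t_c)

# Vapour pressure rises (10.0 -> 10.4 hPa) while temperature warms 10 -> 12 C:
rh_before = relative_humidity(10.0, 10.0)
rh_after = relative_humidity(10.4, 12.0)
print(round(rh_before, 1), round(rh_after, 1))  # RH falls despite more vapour
```

So more absolute moisture with falling relative humidity is entirely consistent: the feedback is still positive, just weaker than if relative humidity stayed constant.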

  47. “By what % would blog comments decrease if everyone was restricted to not repeating conversations?”

    Hah! And yet one might as well ask, by what percent would *all* human conversation decrease if no repetition were allowed?

  48. ceteris non paribus

    I think this linear vs. nonlinear change distinction is perhaps a false dilemma.

    It’s rather like the theory of ‘punctuated equilibria’ in evolutionary biology. The fossil record shows periods of relative stasis punctuated by periods of relatively rapid change. Some have taken this fact to undermine the theory of natural selection. But it is all a question of scale. When you look closely at the periods of rapid change, they too are consistent with natural selection. The theory relies upon ordinary speciation, and thus the mechanism proposed is a form of evolutionary gradualism, in spite of the name.

    Do we need a new non-linear paradigm of the climate?

    I don’t know the answer to that question, but I am curious as to what such a paradigm would even look like.

    For example – How could the climate paradigm change without a fundamental change in our understanding of the relevant physics? That won’t happen just because we ‘decide’ to adopt a new paradigm – It would require convincing evidence that our current physical theories are false. There is a huge difference between false and uncertain.

    • i still keep thinking the place to start is explaining how it might affect the cloud response. the absolute magnitude of that is so large. maybe i should keep on banging this drum. i’m sure CH thinks he’s answered it but us dummies can’t figure out what he’s really saying.

      • Chief Hydrologist

        This is what the new paradigm looks like – http://www.nap.edu/openbook.php?record_id=10136&page=12

        At the top of the atmosphere it is all very simple. Energy in less energy out equals the change in energy stored in the Earth system over any period.

        In the maelstrom of the planet things change abruptly from one state to another, very much like any deterministically chaotic system, from electrical circuits to ecologies, economies, populations and climate models.

        ‘AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties (Fig. 1). In the context of weather prediction, the generic property of sensitive dependence is well understood (4, 5). For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability.’
        http://www.pnas.org/content/104/21/8709.full Warning – this paper is a very hard slog.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=sensitivedependence.gif
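The “sensitive dependence” property in the quoted passage can be demonstrated with the simplest chaotic system. This is an illustration of the generic behaviour only, not a climate model: two logistic-map orbits starting a millionth apart diverge until the gap saturates at the width of the attractor, just as the paper describes.

```python
# Sensitive dependence on initial conditions in the simplest chaotic map.
# An illustration of the generic property only, not a climate model.

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)   # initial states differ by one part in a million
gaps = [abs(x - y) for x, y in zip(a, b)]

# The gap grows roughly exponentially, then saturates at the attractor's width.
print(gaps[0], round(max(gaps), 2))
```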

        What is important is what changes in the energy dynamic – and clouds are a big part of this in the short term. Largely because – I think – of changing sea surface temperature both in the Pacific and Atlantic. It makes sense to me that clouds form over cold seas and dissipate over warm. The observations at both the surface and from satellites seem to support that – see some references here for instance – http://judithcurry.com/2011/02/09/decadal-variability-of-clouds/

        As a concept – climate is a single global system with tremendous energies cascading through powerful systems. It comprises hydrosphere, atmosphere, cryosphere, ecosphere, heliosphere and pedosphere. The reductionist error is to consider parts in isolation. The new paradigm is to consider the whole as a complex dynamical system. How that is done is another question – but the fact that it is difficult doesn’t matter to God or the universe.

      • ceteris non paribus

        That’s CH’s problem, not ours. Burden of proof.

        Never assume that you are a dummy just because someone else fails to make any sense to you. There is a second possibility to consider.

        I think it’s very interesting that we are now considering the possibility of rapid “episodic and abrupt” climate change, and yet we are constantly and confidently told that climate sensitivity is low.

        It can be difficult to keep up – The climate times, they are a-changin’! :-)

      • Chief Hydrologist

        Cet – old Buddy – you want it both ways do you? Not that there is anything wrong with that – it explains the identity crisis at least.

        Chaotic systems are extremely sensitive in the region of a bifurcation but insensitive elsewhere. A small push tips the system over into a new state entirely – so rising temperature could for instance push the system into abrupt cooling in as little as a decade.

        ‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade. And abrupt climate change may be more likely in the future.’
        http://www.whoi.edu/page.do?pid=12455

        But will the politics ever survive random cooling over 30 years? No one will ever believe it again. And there will be you and me on our own – Cet old buddy – and the Woods Hole Oceanographic Institution – saying but wait, climate is hypersensitive in the region of a chaotic bifurcation. What’s not to understand? I guess lonesome is just part of the iconic nature of being a cowboy.

      • Ceteris said:

        “I think it’s very interesting that we are now considering the possibility of rapid “episodic and abrupt” climate change, and yet we are constantly and confidently told that climate sensitivity is low.”

        Agree that this is the big gotcha for all the “natural variation” skeptical views. They want to ascribe changing temperatures to internal stimulation, which however implies a real touchiness, or sensitivity, to the strength of the stimulation. The more sensitive the climate, the less internal stimulus is needed to get it to adjust.

        Now take that sensitivity to external radiative forcing. That same sensitivity is still there, but now it is responding to the external stimulus instead of an internal source. I mentioned this in a top-level post to this blog a couple of months ago.
        http://judithcurry.com/2011/11/29/wht-on-schmittner-et-al-on-climate-sensitivity/
        The physical analogy is doing a random walk in a shallow energy well. If the climate doesn’t have a steep well wall, it can wander.

        All that said, Hypothesis III is a cop-out, and the term “chaotic attractor” at its root is a fancy euphemism for an energy well.
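The “shallow energy well” analogy can be made concrete with an overdamped random walk in a quadratic well (an Ornstein-Uhlenbeck-type process). This is a sketch of the analogy only, with arbitrary parameters:

```python
import random

def simulate_well(stiffness, steps=20000, noise=0.1, seed=42):
    """Overdamped random walk in a quadratic 'energy well'.
    Smaller stiffness means a shallower well and wider wandering."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        x += -stiffness * x + rng.gauss(0.0, noise)
        xs.append(x)
    return xs

def spread(xs):
    """Standard deviation of the trajectory about its mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

shallow = spread(simulate_well(0.01))   # weak restoring force
steep = spread(simulate_well(0.5))      # strong restoring force
print(round(shallow, 2), round(steep, 2))
```

The weaker the restoring force, the wider the system wanders for the same noise, which is the sense in which large internal variability implies sensitivity to stimulation.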

      • Chief Hydrologist

        Some of it is fast response – http://www.osdpd.noaa.gov/data/sst/anomaly/2012/anomnight.2.6.2012.gif – the big blue V in the eastern and central Pacific in a La Nina and warm water in the same vast expanse in an El Nino.

        Some of it is slow response – http://isccp.giss.nasa.gov/zFD/an9090_SWup_toa.gif – cloud feedbacks from the shifting ocean temperatures.

        Can’t figure out what I’m saying? It’s a shibboleth. Don’t know what it means either but it feels good just to say it.

      • Chief Hydrologist

        I was moseying along in a random walk in a shallow well and hurt my pony’s nose. Thanks Webby.

    • How could the climate paradigm change without a fundamental change in our understanding of the relevant physics?

      http://www.engineeringtoolbox.com/laminar-transitional-turbulent-flow-d_577.html

      To improve the heat transfer between fluid boundaries you can increase the turbulent flow, which increases both the molecular contact rate and the rate of diffusion in the fluid. Simple right?

      ENSO is a product of changing turbulent flow rates between the ocean and the atmosphere. The ENSO is defined as the change in temperature for a few boxes in the Pacific ocean. There are a bunch of boxes and a bunch of ocean. All you have to do is figure out the thermal impact of the change in relative velocities for each and every box, then you have a fair start.

      Then there is the long term laminar flow puzzle. Heat exchanged in the polar regions cools the sea water to approximately 4 degrees C. That water sinks in a slow, more viscous flow, exchanging very little heat and producing a thermal boundary in the deep oceans. The rate of thermal diffusion into and out of that layer is extremely slow. Once you have the turbulent flow problem solved, jump on the laminar flow problem.

      It should be a piece of cake to solve the transitional flow problems. Oh wait? I think there is a prize of some kind to solve that problem.

      Let’s see, convection in the atmosphere can be laminar, transitional and/or turbulent, it must impact the radiant transfer as well, since there is horizontal convection which can be laminar, transitional and/or turbulent. That horizontal convection can be below, at or above the average radiant boundary layer.

      I am sure a simple up/down model can figure out all that is going on.

      So there is nothing new about non-linear issues in fluid dynamics, it is still a difficult problem. Climate science has yet to scratch the surface.
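The standard yardstick behind the laminar/transitional/turbulent distinction in the comment above is the Reynolds number. A minimal sketch using the conventional pipe-flow thresholds (the thresholds are rules of thumb, and the viscosity is an assumed textbook value for water near 20 °C):

```python
def reynolds_number(velocity_m_s, length_m, kinematic_viscosity_m2_s):
    """Re = v * L / nu (dimensionless)."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

def pipe_flow_regime(re):
    """Conventional pipe-flow thresholds; rules of thumb, not sharp limits."""
    if re < 2300:
        return "laminar"
    if re <= 4000:
        return "transitional"
    return "turbulent"

# Water near 20 C has nu of roughly 1.0e-6 m^2/s (assumed textbook value).
print(pipe_flow_regime(reynolds_number(0.01, 0.05, 1.0e-6)))  # Re = 500 -> laminar
print(pipe_flow_regime(reynolds_number(1.0, 0.05, 1.0e-6)))   # Re = 50000 -> turbulent
```

The hard part, as the comment says, is precisely the transitional band between the two clean regimes.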

  49. Speaking of trends, see:
    Surface Water Vapor Pressure and Temperature Trends in North America during 1948-2010, Journal of Climate 2012 ; e-View
    doi: http://dx.doi.org/10.1175/JCLI-D-11-00003.1
    V. Isaac and W. A. van Wijngaarden

    Over 1/4 billion hourly values of temperature and relative humidity observed at 309 stations located across North America during 1948-2010 were studied. The water vapor pressure was determined and seasonal averages were computed. Data were first examined for inhomogeneities using a statistical test to determine whether the data was fit better to a straight line or a straight line plus an abrupt step which may arise from changes in instruments and/or procedure. Trends were then found for data not having discontinuities. Statistically significant warming trends affecting the Midwestern U.S., Canadian prairies and the western Arctic are evident in winter and to a lesser extent in spring while statistically significant increases in water vapor pressure occur primarily in summer for some stations in the eastern half of the U.S. The temperature (water vapor pressure) trends averaged over all stations were 0.30 (0.07), 0.24 (0.06), 0.13 (0.11), 0.11 (0.07) C/decade (hPa/decade) in the winter, spring, summer and autumn seasons, respectively. The averages of these seasonal trends are 0.20 C/decade and 0.07 hPa/decade which correspond to a specific humidity increase of 0.04 g/kg per decade and a relative humidity reduction of 0.5%/decade.

    Support for the saturated greenhouse effect leaves the likelihood of AGW tipping points in the cold
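The inhomogeneity screening described in the abstract, whether a record fits better as a straight line or as a line plus an abrupt step, can be sketched as a residual-sum-of-squares comparison. This is a toy version on synthetic data, not the paper's actual statistical test:

```python
# Toy version of the inhomogeneity screening described in the abstract:
# does a series fit better as a straight line, or as a line plus an
# abrupt step? Synthetic data; the paper's actual test will differ.

def rss_line(xs, ys):
    """Residual sum of squares of a least-squares straight-line fit."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def rss_line_plus_step(xs, ys):
    """Best combined RSS over all candidate step positions, fitting each
    segment separately (a simplification of a common-slope-plus-offset fit)."""
    best = float("inf")
    for k in range(3, len(xs) - 3):  # keep a few points in each segment
        best = min(best, rss_line(xs[:k], ys[:k]) + rss_line(xs[k:], ys[k:]))
    return best

xs = list(range(40))
ys = [0.02 * x + (0.5 if x >= 20 else 0.0) for x in xs]  # trend plus a step

# The step model fits (essentially) perfectly; the single line does not.
print(round(rss_line(xs, ys), 3), round(rss_line_plus_step(xs, ys), 3))
```

When the step model wins decisively, the station record is flagged as inhomogeneous (e.g. an instrument change) and excluded from the trend analysis.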

    • There are several interesting theories and I read many comments from posters who have “studied the climate” and “deeply understand” various aspects of the climate and are quite sure what drives the climate and the relative importance of CO2.

      Simple question– does anyone’s theory or relative understanding really matter until they can demonstrate it is accurate by having it modeled and then showing that the model matches observations over a reasonable period? Longer periods of matching observed conditions will lead to higher confidence in future predictions being accurate.

      Trusting models that do not match observed conditions is called faith. (that something will change in the future to make them accurate)

      • Vaughan Pratt

        I agree 100% with Rob on this point. It’s amazing the faith people put in inaccurate models.

    • Bottom line is that humidity increased. Relative humidity is not the absolute humidity.

    • WebHubTelescope
      Bottom line is that the SURFACE specific humidity increased while the relative humidity decreased.
      That says nothing about the upper atmosphere.
      The historic radiosonde data showed more strongly declining upper atmosphere specific humidity vs increasing lower atmosphere specific humidity. e.g. Paltridge et al. Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data

      It is accepted that radiosonde-derived humidity data must be treated with great caution, particularly at altitudes above the 500 hPa pressure level. With that caveat, the face-value 35-year trend in zonal-average annual-average specific humidity q is significantly negative at all altitudes above 850 hPa (roughly the top of the convective boundary layer) in the tropics and southern midlatitudes and at altitudes above 600 hPa in the northern midlatitudes. It is significantly positive below 850 hPa in all three zones, as might be expected in a mixed layer with rising temperatures over a moist surface.

      There are a number of objections from the global warming side that the data must be wrong, attributing the differences to slow sensor response etc.

  50. Apropos to trends, change points, and hypotheses:

    http://www.yaleclimatemediaforum.org/2012/02/jpls-josh-willis-looks-ahead-to-continuing-sea-level-rise/

    The combination of the video and the exchange in the comments is quite interesting, IMO.

    • when the arctic is open, much of that water falls as snow and becomes long term ice.
      when the arctic is open the sea level falls, it don’t rise again until earth cools and the snow stops falling.

    • interestingly, and as discussed in some of the comments on the page you link to, it appears the trend in SLR is slowing, even if you try to avoid cherry picking by truncating the curve in 2010. Just by a little. It’s been discussed before. It’s interesting to think of what the combination of a flattening in SLR and simultaneously land-atmosphere GW might mean. Maybe nothing. Maybe the missing water is in space!

  51. People worry about one molecule of manmade CO2 per ten thousand molecules of other stuff, including the three molecules of natural CO2.
    An asteroid hit the earth and killed the dinosaurs and other species, changed the orbit and spin axis and earth survived. We have caused some changes, but our changes are tiny compared to natural changes.

    • Latimer Alder

      @HAP

      But Carbon Dioxide is The Death Gas!!!

      Too much CO2 will cause the planet to be overrun with those dreadful plant things growing everywhere. Which is not at all what we mean by ‘going green’ :-)

      • The Day of the Triffids was a portent. Give them too much grow-juice and the blighters will try to take over.

        The Green Peril Looms,
        We’re Doomed, we’re doomed
        We’re DOOMED! :)

  52. The furniture on the deck of the Titanic needs to be re-examined in detail, plans carefully made and redone, and rearranged again and again until we get this furniture finally right once and for all. But then again we shall have to rearrange it because my salary depends on this furniture!

    “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” – Upton Sinclair

  53. sorry to be off topic – looks like a number of Book reviews are in for Michael Mann’s new book..

    Scot Mandia’s is worth a perusal (laugh)
    http://www.amazon.com/gp/cdp/member-reviews/A1U43LM86A5W5P/ref=cm_cr_pr_auth_rev/192-3584096-0290965?ie=UTF8&sort_by=MostRecentReview

    like Peter he didn’t apparently like Donna’s book much..
    I notice John Cook and Dana have also given 5* reviews …

    • Barry –

      Your link didn’t get me to a Scot Mandia review of Michael Mann’s new book (for which I might be grateful). My Amazon link throws up two reviews – neither of which is by Dana or John Crook!

      Am I looking in the wrong place?

  54. Climate models are very complicated. There are sections of code from many different disciplines. No one of them understands all the disciplines. Whoever puts this all together does not understand all the different disciplines. No one in the different disciplines understands how this all works together. Since all the disciplines had an input, they all accept the output as absolute truth. This is the Gospel of Climate Science.

    • Vaughan Pratt

      Climate models are very complicated.

      Ok, I understand the complaint. But if someone came up with a simple and accurate model would your objection to climate models then be that they’re too simplistic?

  55. I liked this quote from the Sneyers abstract:

    it seems justified to conclude that there are severe limits to climate predictability at all scales

    I think anyone who has an interest in following the weather could have told you that. Which maybe explains why so many meteorologists seem to have doubts about the threat of climate change.

    • Vaughan Pratt

      I think anyone who has an interest in following the weather could have told you that.

      Not unless you have a different definition of weather from Mark Twain: “Climate is what you expect. Weather is what you get.”

      There do indeed appear to be severe limits to climate predictability at all weather-relevant scales. But how does that extend to all scales? How would you go about showing that we have no hope of predicting the global temperature averaged over the period 2075-2100 to within, say, a degree?

      • Chief Hydrologist

        There are 2 answers to that – hombre – and they both involve model ‘plausibility’.

        ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ Irreducible imprecision in atmospheric and oceanic simulations – James McWilliams

        The first plausibility criterion requires billions of dollars and thousands of times more computing power.

        ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION by James Hurrell, Gerald A. Meehl, David Bader, Thomas L. Delworth, Ben Kirtman, and Bruce Wielicki

        The second plausibility criterion is much simpler to implement.

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’

        As running simulation ensembles across systematically designed model families would require billions of dollars and thousands of times more computing power – we simply decide subjectively what a plausible solution looks like after the fact.

        As for how accurate this is – seriously – who really gives a rat’s arse.

      • Ombre de hombre, wallet be ballot, all of it fall out.
        ===============

      • Chief Hydrologist

        Yee hah – yippee-ki-aye – I’m the climate cowbot.
        Dude – get lost – we want a shiny billion $ superbot,
        Not a lonesome, iconic, laconic, ironic dusty cowbot.
        Shibboleth – it’s just a game I play for fun – it’s just a…

      • Vaughan,

        I think Mr Clemens has basically said the same thing I did – just better expressed.

        And the quote I referenced said ” severe limits”, not impossibilities. I’m not taking the position that modeling is worthless. I do believe that modelling should not be the primary basis for policy, as it isn’t yet good enough to tell us what is happening.

      • I do believe that modelling should not be the primary basis for policy, as it isn’t yet good enough to tell us what is happening.

        Agreed. I further believe that the current approach of modeling the equilibrium states and treating the transitions between them as only of secondary interest is doomed to failure, because for a long time now and for the foreseeable future we’re in a transition with no equilibrium state within a century of today’s date.

      • Chief Hydrologist

        ‘I haven’t lost my temper in 40 years; but, Pilgrim, you caused a lot of trouble this morning; might have got somebody killed; and somebody oughta belt you in the mouth. But I won’t. I won’t. The hell I won’t!’

        The bushwhacking SOB is reduced to making snide comments from the sideline. They are surprisingly free of any science at all for one with such a limited capacity for wit.

      • Don’t they give away the game when they use “ensembles” of model runs to get the best fit for hindcasting?

        http://www.pnas.org/content/107/5/1833.full

      • Chief Hydrologist

        ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION by James Hurrell, Gerald A. Meehl, David Bader, Thomas L. Delworth, Ben Kirtman, and Bruce Wielicki

        The weather and climate quote is from Heinlein not Twain.

  56. Stephen Wilde

    “Meanwhile, we can recognise the basic patterns and maybe even predict (multi)-decadal global climate changes.”

    Quite so.

    Poleward and/or more zonal jetstreams mean that the troposphere is a bit warmer as the rate of energy flow through the troposphere increases.

    Equatorward and/or more meridional jetstreams mean that the troposphere is a bit cooler as the rate of energy flow through the troposphere declines.

    But either way the total system energy content fails to change because BOTH are NEGATIVE system responses to anything that tries to take the system away from the energy content dictated by atmospheric surface pressure and solar input.

    Add to that the 60 year oceanic cycling and the millennial solar cycling from MWP to LIA to date and that gives a pretty good idea for predictions.

    We are currently going into a cooling oceanic phase for 20 to 30 years but it is a bit early to be at the solar cycle peak, though the current solar quietness suggests that possibly the peak arrived early and we are now on the way down. Could still be a few active solar cycles to come though.

    Meanwhile the jets are more meridional/equatorward than they were, so the troposphere is currently cooling, and more clouds from wavier jets are reducing the amount of solar energy getting into the oceans, which will intensify La Nina dominance for as long as the sun stays quiet.

    I suspect that that will turn out to be a better guide to the future than the current GCM output.

    Bookmark this post and review in 5 years.

    • Markus Fitzhenry

      Thanks for that Mr Wilde.
      You’ve given me inspiration to write a book for children, titled:

      “How Our Climate Works, Understanding Macroclimatology.”

      Titans, such as yourself, have made it so easy to understand it won’t take me long at all. See you when I’m finished.

      Cheers for now,

      • Markus, is that sarcasm or not ?

        I don’t see any intrinsic problem in boiling a highly complex system down to the fundamentals. Whether it is right or wrong can be readily checked against ongoing events.

      • ‘The Pet Climate’, cartoons by Josh.
        ==========================

      • As Craig Loehle said:

        “One of the ways people deal with too much data/information is to make a simplifying story. A good simplifying story would be Newton’s theory of gravity, which in its simplest form ignores friction and other minor factors”

      • Markus Fitzhenry

        No sarcasm, I meant it as a compliment.

      • Chief Hydrologist

        Did Craig say that? My engineer’s heart shrivels into dust – and I stand vindicated. Friction is routinely included in the calculations involving the laws of motion. Really – the only thing that is not included in the Newtonian laws of motion is relativistic effects – and that is entirely forgivable as – when ambling along the trail on my pony – relativity only really matters when we are being chased by cougars.

        Robert I Ellison
        Chief Hydrologist

      • Chief,

        That’s what the gun is for. The cougars that is.

      • Friction is routinely included in the calculations involving the laws of motion.

        What “laws of motion?” Certainly not Newton’s three laws of motion, which don’t breathe a word about friction.

        Perhaps you’re referring to the laws of motion as taught to hydrologists, where dynamic viscosity, kinematic viscosity, and fluidity are important.

      • Chief Hydrologist

        Don’t worry ‘bout bitin’ off more than you can chew, Pilgrim, cause your mouth is bigger’n you think.

        ‘Third law: The mutual forces of action and reaction between two bodies are equal, opposite and collinear.’

        Perhaps you didn’t get past the first two. Seriously – I can’t be bothered instructing you in kinetic or static friction. Is this seriously embarrassing or just your regular horse’s arse idiocy?

  57. I wonder how many are aware of the policy implications of what is happening.

    If models are not useful on a decadal timescale, such that they can predict strong warming for a period of minimal or even no warming, then what use is there for models? What government (apart from North Korea…) would make it difficult for people to heat their homes in the next decade with the explanation that it is going to be warm in 2070 anyway?

    People do not average out their lives across decades or centuries: each and every one of us has to go through each and every day first.

    If I freeze to death today at -10C, I will not enjoy the warmth of July at +30C even if the average is +10C, perfectly compatible with human life. The same can be said of plants and animals. If I plant an olive tree in my London garden, it will die of cold in February even if the yearly average is in theory just enough to make olive trees survive in the open. If a nasty mosquito species migrates from warmer places during an August heatwave, still if that species cannot survive the following winter it will not be around until next migration opportunity during a future heatwave.

    A purely statistical, multi-year approach to modelling the climate is in theory useless for policymaking (similar considerations could be made for non-regional projections, but that is too long a story here – read “How Space-Time Digested AGW” if interested). And if we end up with 15 years of incorrect projections without even a volcano for an excuse, then whatever physical explanation there is, policymakers would be much wiser in keeping climate scientists at arm’s length.

    • Interesting thoughts.

      I largely agree, but would add that there are plenty of other reasons for policymakers to keep climate scientists at arm’s length..

  58. Chris Ho-Stuart | February 8, 2012 at 3:10 am |
    Steven mosher, I don’t see the problem with the graph you mention. It includes a shading envelope that indicates a range of possibles. It has a horizontal scale on which ten years doesn’t even show (the tick marks are 50 years apart).

    To paraphrase E.S…. As you can’t read a simple chart, maybe you should take my Graph lab/class.

    • Urk. You are right… the tick marks are 20 years apart. Mea Culpa.

      I do still think that the tick marks, along with everything else about the graph, show that it is not attempting to predict 10-year windows of warming.

      But I salute a valid coup!

  59. Let us grant that someone has proposed 3 theories, as in the head post. The standard way to proceed would be to see if any of them could be disproved. IPCC has made no effort to disprove II and III, which have sufficient explication (in my opinion) to be called at least plausible. In fact, when it is pointed out that the GCMs are running hot, it is claimed that “oh no, we said II is possible with decadal fluctuations”–but if that is allowed, how does IPCC know that the warming from 1980 was not itself just (or partially) a natural fluctuation?

    • Fair point.

      But haven’t the IPCC allowed for a large portion of natural variation from 1980 by saying that (only) more than half can be attributed to AGW?

      I only say that because the famed attribution statement strikes me as the one genuinely conservative thing in the whole of AR4.

    • Dikran Marsupial

      This raises an interesting question, which is how the IPCC (or anybody else for that matter) should falsify hypotheses II and III, which, although at least plausible, make no testable predictions, unlike hypothesis I. Has anybody made projections for future climate with an unambiguous statement of uncertainty that would allow the projections to be falsified by the observations?

      Hypothesis I also has the advantage that it can be used to make a model based on physics, rather than statistics, that can at least explain past climate. I am not aware of any physical model based on hypotheses II and III that can explain 20th Century climate without the enhanced greenhouse effect.

      Another reason that hypotheses II and III are not as plausible in my view is that for them to be correct, our understanding of radiative physics etc., which has been tested experimentally and by observations (e.g. spectra of outbound IR radiation), must be fundamentally wrong. This is possible, but unlikely – scientific revolutions do occur, but not as frequently as individual scientists just getting things wrong.

  60. To all who still argue the climate models predicted the current global mean temperature pause, here are the predictions from IPCC’s own website:

    Graph => http://bit.ly/zA0a2j

    Text => http://bit.ly/caEC9b

    Comparison of Prediction and observation => http://bit.ly/wVWllY

    • Girma; the point we are making is that climate models DON’T predict the timing of short term (decadal) pauses or accelerations. What model runs show is that the rate of change is not steady on that scale, but that the underlying main trend shows up over a scale long enough to smooth out the unpredictable short term variations up and down.

      Having a slowdown over 15 years is in line with what we should expect according to conventional climate science. What conventional science can’t do is predict when a slowdown or speed-up occurs. The graph you show is explicitly for “means”. Reading the report shows that this is the output of an ensemble, not a single model. It does not include and does not try to include short term variation around the trend (which is not predictable); it shows the mean.

      Your “comparison” is apples and oranges. You are comparing a single data line (the observed line) with a mean line. What you SHOULD compare with is a smooth of the data; a moving average, perhaps. The scale of the window should be something from 15 to 20 years. Recent work (discussed here at Climate Etc in Judith’s commentary Santer on timescales of temperature trends) has been more precise, identifying 17 years as the window over which the trend shows through the short term variation.

      So to compare like with like, take a 17-year moving-window average of the data. You’ll find it’s within the right ballpark for the IPCC projections from models.
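
      A minimal Python sketch of that like-with-like comparison, on synthetic monthly anomalies (illustrative numbers only, not real observations):

```python
import numpy as np

# Synthetic monthly anomalies: a steady 0.2 C/decade trend plus
# short-term noise standing in for ENSO etc. (illustrative only).
rng = np.random.default_rng(0)
months = np.arange(12 * 40)                  # 40 years of monthly data
trend = 0.02 / 12 * months                   # 0.2 C/decade, in C per month
series = trend + rng.normal(0.0, 0.1, months.size)

# 17-year (204-month) centered moving average, per the Santer criterion.
window = 17 * 12
smoothed = np.convolve(series, np.ones(window) / window, mode="valid")
```

      On this synthetic series the fitted slope of `smoothed` comes back at about 0.2 C/decade, which is the point: the moving average, not the raw endpoints, is what should be laid against the model-mean line.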

      • Chris

        I disagree with “the models are okay” claim.

        Here is IPCC’s verifiable statement.

        Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections

        In the above statement, the IPCC used a 15-year period from 1990 to 2005 for trend calculation.

        Here is the trend for this period from 1990 to 2005 => http://bit.ly/wULkoQ
        This trend is a warming of 0.24 deg C per decade, as stated above.

        Now, let us check the above statement against the trend for the 15-year period from 1995 to 2010.

        Here is the trend for this period from 1995 to 2010 => http://bit.ly/wFhfXH
        This trend is a warming of 0.11 deg C per decade.

        This is outside the range between about 0.15 deg C and 0.3 deg C per decade.

        Chris, does this result weaken or strengthen confidence in the near-term projections?

      • Chris

        Another point is the observations are below the model mean.

        As a result, real climate sensitivity should also be lower than IPCC’s mean value of 3 deg C.

      • The four years leading up to 1995 included a solar max, but the four years leading up to 2010 were in a deep minimum. Do you think that makes a difference, or should we just ignore solar cycles in all this?

      • Girma, the equilibrium climate sensitivity (estimated at about 3C per CO2 doubling; or about 0.8 C per W/m^2) is not related to the rate of increase, but to how far the increase goes until the Earth is back in energy balance. You are again comparing apples and oranges.
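
        For what it’s worth, the two figures quoted (about 3 C per doubling, about 0.8 C per W/m^2) are consistent if one uses the commonly cited value of roughly 3.7 W/m^2 of forcing per CO2 doubling; the 5.35 ln(C/C0) expression below is the standard simplified formula, not something taken from this thread:

```python
import math

# Standard simplified forcing expression: dF = 5.35 * ln(C/C0) W/m^2,
# so a doubling gives 5.35 * ln 2, about 3.7 W/m^2.
forcing_per_doubling = 5.35 * math.log(2.0)

sensitivity_per_wm2 = 0.8                    # C per W/m^2, as quoted above
print(round(forcing_per_doubling * sensitivity_per_wm2, 1))  # -> 3.0
```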

    • Nice job! That deserves a smilie :)

  61. Dr. Loehle,
    Precisely.

    If we take the commonly accepted interpretation of the IPCC statements as projecting a 0.2 Deg C/decade temperature increase, then over the 15-year period from 1997 it would have been reasonable to expect a 0.3 Deg C increase. Instead, a 0.051 Deg C increase was measured, which could be interpreted to imply (assuming the IPCC argument is correct) a natural negative variation of 0.249 Deg C over the period, or 0.166 Deg C/decade. If it is accepted that a negative natural variation of 0.166 Deg C/decade can occur, surely it is not unreasonable to accept that a 0.166 Deg C/decade positive natural variation could occur from time to time.

    To reach the canonical value of 0.2 Deg C/decade over the canonical 30 year period starting from 1997, it’s clear that temperatures would need to increase by 0.549 Deg C or 0.366 Deg C/decade over the next 15 years. Let’s see.

    (By the way, a 0.2 Deg C/decade increase would imply a temperature increase that would fall within the stated limit of 2 Deg C temperature increase to avoid catastrophic climate change).
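
    The arithmetic above is easy to check (inputs as quoted in the comment):

```python
# All values in deg C; inputs as quoted in the comment above.
projected_rate = 0.2            # projected trend, C/decade
observed_rise = 0.051           # measured rise over the first 15 years

implied_natural = projected_rate * 1.5 - observed_rise   # 0.249 C over 15 yr
target_30yr = projected_rate * 3.0                       # 0.6 C over 30 years
required_rate = (target_30yr - observed_rise) / 1.5      # C/decade, next 15 yr

print(round(implied_natural / 1.5, 3))  # -> 0.166
print(round(required_rate, 3))          # -> 0.366
```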

    • Instead, a 0.051 Deg C increase was measured

      “Measured” in what sense? One can easily measure a substantial decline by careful choice of endpoints.

      Over the past two years various people have pointed out on the basis of how weather has varied over the past several decades that projections based on less than a decade are unreliable. In Feb. 2010 I proposed 15 years as a minimum, 6 months later Tamino proposed the same number, more recently Santer has proposed 17 years.

      If you go with 17 years you will see 0.36 °C/decade in the recently released BEST land-temperature data from Richard Muller’s group at Berkeley. Furthermore there is no sign of any flattening in that data during the past decade.

      • Vaughan

        With BEST, you are ignoring 75% of the globe.

        Please return to hadcrut and gistemp when discussing trends =>
        http://bit.ly/Aei4Nd

        Do you see any change in the global mean temperature since the record began about 160 years ago?

        Is it not a single pattern of a warming of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years? WHY NOT?

      • Chief Hydrologist

        I propose 100 years – after all we want a really long-term trend that encompasses all of the variabilities – see Girma for the details – I really can’t be bothered.

      • Is it not a single pattern of a warming of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years? WHY NOT?

        When you include those two offsets of ±2 °C I would have to agree with you.

        If you increase them to ±20 °C it looks so flat that I’m not sure I can see any warming trend at all.

        However if you decrease them to ±0.2 °C then it becomes easier to see differences that aren’t so visible with your suggested ±2 °C. In particular I would say on the basis of that last graph that you are quite right about the rise from the 1st peak to the 3rd peak, but quite wrong about the rise from the 1st peak to the 2nd, which is a lot less than your proposed 0.5 deg C every 30 years, and from the 2nd to the 3rd, which is a lot more.

        This difference is also present in your original graph, and to the same degree, it just isn’t as obvious when you zoom way out like that.

        I’d say on the basis of your original graph, when looked at more closely, that this warming trend is not following the straight line you suggest but is bending upwards. WHY NOT?

      • CH, I propose 162 years. We have the data.

  62. Yes but Vaughan, surely you are aware that choosing a 17 yr period with a start date of 1994 right at the bottom of the Pinatubo cooling effect is a nice big fat sweet juicy red cherry pick:) It is difficult to see any significant long term trend with a series containing extreme weather outliers including Pinatubo and the mother of all El Ninos 98.

    • Agreed

      Here is a reliable 30-year satellite record (it starts in 1979 because that’s when the sats started operating). Difficult to cherry-pick anything much other than ENSO in that timeslice.

      http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_January_2012.png

    • choosing a 17 yr period with a start date of 1994 right at the bottom of the Pinatubo cooling effect is a nice big fat sweet juicy red cherry pick:)

      This would be a fair objection if I had chosen this out of a range of alternatives. However I was trying to make the point that this is what you get when you try to include as much as possible of the allegedly flat 2000-2010 period while still meeting the Santer 17-year criterion. With those two constraints I had no alternative. (One can extend 2010 very slightly to 2010.2, but no further because of the insane outliers in the last two months of the BEST data!)

      But since you think I’m cherry-picking, it would only be fair to let you cherry-pick a 17-year period (or longer if you prefer) from the BEST data that makes the opposite point. Go for it!

      Note that I composed three moving-average filters of widths 12, 10, and 8 months in order to remove all traces of < 12 month high frequency noise. That combination turns out to be an extremely effective filter for that purpose, for reasons that are best explained in terms of side lobes in the frequency domain if anyone's interested. If you don't remove the high frequency noise it becomes easier to cherry pick from among 17-year windows, so to that extent I'm making it as hard for you to cherry-pick as I made it for me. Just didn't want to give either of us an unfair advantage.
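
      The filter cascade described above is easy to reproduce; a sketch (synthetic series, my own helper name) showing that the width-12 stage alone exactly nulls a 12-month cycle, while the 10- and 8-month stages take out the other sub-annual components:

```python
import numpy as np

def boxcar(x, width):
    """Centered moving average (simple boxcar) of the given width."""
    return np.convolve(x, np.ones(width) / width, mode="valid")

# Hypothetical monthly series: slow linear trend plus a 12-month cycle.
n = 600
t = np.arange(n)
series = 0.001 * t + 0.3 * np.sin(2 * np.pi * t / 12)

# Cascade of widths 12, 10, 8 months, as in the comment above.
smoothed = boxcar(boxcar(boxcar(series, 12), 10), 8)

# What survives is the linear trend, shifted to the cascade's center
# (offset (11 + 9 + 7) / 2 = 13.5 months); the annual cycle is gone.
```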

  63. I’ve taken the time to download some data and calculate the trends. I’ve used the global temperature data from GISS, NCDC and Hadcrut3. I’ve also taken the lower atmosphere data from UAH, which is closely related. I’ve calculated trends using Excel. The spreadsheet also includes BEST data for comparison, but I’ve not quoted those results here, since BEST measures land only, which gives stronger warming trends throughout.

    Here are trends for the most recent 15 years, in C/decade. That is 1997 through 2011 inclusive.
    Hadcrut3: 0.01
    UAH: 0.10
    NCDC: 0.05
    GISS: 0.10

    This is even less warming than Leake mentions. But if I take the 15 years to July 2011, I get this:
    Hadcrut3: 0.05
    UAH: 0.11
    NCDC: 0.08
    GISS: 0.12

    That’s a bit closer to numbers in Leake’s article; it’s possible he’s working from numbers he obtained some months before publication. But in general, the 15 year trend is going to vary a fair bit over different time periods. There’s still a lot of influence from short term variation on that scale. This is not some new excuse to explain any failure of predictions; it’s been a stock standard part of climate science for the last twenty years.

    The 95% confidence on these trends is roughly 0.05. That’s using the regression confidence, without any consideration of confidence on the underlying data. A crude confidence guide just to give a ballpark idea. This means that the data does show, with confidence, a short term trend which is less than the longer term trend we’ve seen over recent decades. That’s not particularly surprising, and (conventionally) most scientists expect the stronger long term trend to continue to be apparent over this century at least. If the recent lull persists another ten years, then there might be reason to look askance at conventional expectations. The current data, however, is consistent with conventional expectations.
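
    For anyone wanting to reproduce that, the regression confidence can be computed as below (a naive OLS interval that ignores autocorrelation, so, as noted, only a crude ballpark; the helper name is mine):

```python
import numpy as np

def trend_with_ci(anomalies, per_year=12):
    """OLS trend in C/decade with a naive 95% interval from residuals.

    Ignores autocorrelation in the monthly data, so the interval is a
    ballpark figure only.
    """
    t = np.arange(anomalies.size) / per_year          # time in years
    slope, intercept = np.polyfit(t, anomalies, 1)
    resid = anomalies - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return slope * 10.0, 1.96 * se * 10.0             # C/decade, half-width
```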

    It has been suggested that the recent lull over the last 15 years makes it unlikely that the next 5 years will allow for a strong warming trend over 20 years. We’ll see about that in five years’ time; in the meantime, here are trends for the 20 years just past (1992 through 2011 inclusive). (Regression confidence limits about 0.04)
    Hadcrut3: 0.16
    UAH: 0.21
    NCDC: 0.16
    GISS: 0.21

    Those numbers match the expected trend of about 0.2 C/decade.

    There is no IPCC prediction for the trend over the last 15 years. There IS, on the other hand, an expectation that the main underlying trend is about 0.2 C/decade or so; where “about” can be read as 0.15 to 0.3. It’s well understood and explicit (not explicit enough for everybody it seems, but certainly not a hidden footnote either) that this trend is not expected to show over short windows of time; but only over windows of time long enough to smooth untrended variations.

    Recent work has identified 17 years as a window over which the underlying trend starts to show. (Judith has commented on it also here at Climate Etc: see Santer on timescales of temperature trends)

    There is still a fair bit of up and down seen in the 20 year trend when I look at the data. But any 20 year window from about 1980 to now, in any of those datasets, shows a trend somewhere from 0.15 to 0.25 C/decade. I expect that to continue; just as I expect to continue to see substantially greater and smaller trends over a 15 year window or less.

    I have adapted this comment to be a post at the experimental SkyDragon forum. In that post I also include links for all the data files, for the spreadsheet, and images of plots of how the trend varies with time. Basically I plot the trend value for a window centered at a given time, for every possible window of that length. Hence you can see how the 15 year or 20 year trend (or others) varies over time. You can find it at Recent trends in global temperature. If you would like to discuss further, I’ll continue to read both here and in the forum.
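
    The window-by-window trend calculation described above boils down to something like this (my own helper; demo on a synthetic, perfectly steady trend):

```python
import numpy as np

def window_trends(anomalies, years, per_year=12):
    """Linear trend (C/decade) for every window of the given length."""
    w = years * per_year
    t = np.arange(w) / per_year                          # years within a window
    return np.array([
        np.polyfit(t, anomalies[s:s + w], 1)[0] * 10.0   # C/yr -> C/decade
        for s in range(anomalies.size - w + 1)
    ])

# Demo: a perfectly steady 0.2 C/decade series gives 0.2 in every window.
demo = 0.02 / 12 * np.arange(12 * 40)
print(round(float(window_trends(demo, 15).mean()), 3))  # -> 0.2
```

    On real data, plotting this array against window-center time is exactly the “how the trend varies” picture: the spread shrinks as the window length grows.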

    Girma, this also relates to your question. You don’t expect 15 years to show up the long term trend. You do expect to see the long term trend start to dominate over short term untrended variation over longer windows, like 20 years. Those numbers are not an excuse for failed predictions. They are a feature seen in the data. It’s ALWAYS been understood that untrended variations make short windows not much good for looking at a global warming signal; that’s the nature of climate.

    I know you don’t agree with the models. That’s your prerogative. But it is just wrong to say that the recent 15 year lull conflicts with IPCC expectations.

    Cheers — Chris

    • Chris

      This means that the data does show, with confidence, a short term trend which is less than the longer term trend we’ve seen over recent decades.

      Thank you for that.


      That’s not particularly surprising

      I disagree.

      Because of the following:


      Yeah, it wasn’t so much 1998 and all that that I was concerned about, used to dealing with that, but the possibility that we might be going through a longer – 10 year – period of relatively stable temperatures beyond what you might expect from La Nina etc.
      Speculation, but if I see this as a possibility then others might also.

      http://foia2011.org/index.php?id=6226

      • Girma, you show an email in which a conventional climate scientist considers the possibility of a ten year lull. Apparently you take this to mean that lulls are NOT considered possible in the conventional picture. Boggle.

      • Chris

        but the possibility that we might be going through a longer – 10 year – period of relatively stable temperatures beyond what you might expect from La Nina etc.

      • Yes… so? La Nina etc (the major factor in “1998 and all that”) is not the only short term influence, and scientists know this. Such lulls have occurred before. They aren’t a surprise and we are not good at predicting them, and you’re STILL citing a climate scientist who is considering the possibility of a 10 year lull. That’s SUPPORTING my position.

      • Chris, the basic problem is that sensitivity is an abstract concept that has little to do with reality,

        David, let me suggest a different possibility. Like everyone else in the world you have no clue what climate sensitivity is, and therefore have no clue whether it is abstract, concrete, or even meaningful.

        So equally you have no clue as to whether it has anything to do with reality. In your complete ignorance of the concept, for all you and anyone else in the world might know, it might have everything to do with reality.

      • If I may continue the piling on Wojick. He doesn’t seem to realize that abstractions help us understand reality. Just because a concept is abstract does not mean it doesn’t reflect reality.

        “By analogy, abstractly one can throw a potato chip as far as a baseball, but don’t bet on it. In short, the focus on radiative physics is misguided.”

        Again poor Wojick does not understand what the word abstract means. In this case, he probably meant to say hypothetically. Radiative physics is not abstract, as it is clearly the mechanism by which the planet maintains its average temperature. So Wojick is saying that modulation of the radiative transfer leading to temperature change is only a hypothetical premise.

      • WebHu

        uh, you need to look up abstract in the dictionary. (hint: it means not practical, applied or real).

      • (hint: it means not practical, applied or real).

        I’m afraid I have to agree with you here.

        On the one hand that’s merely definition 3 here out of 5 definitions of this adjective.

        On the other, in my reply to David I was assuming it was the one he intended, since I’m not aware of his ever having used that adjective with any of the other four meanings.

        To back up WHT’s point, temperature is a great example of a quality that is concrete to the Man In The Street but abstract to a physicist. So abstract in fact that a molecule can’t run a temperature. A gas cannot have a temperature unless it consists of at least two molecules.

        The MITS reasons that one molecule moving at ten times the average speed of air molecules at sea level must be much hotter than average, but this only shows a lack of appreciation for how something like temperature becomes meaningless without an abstraction on which to base it.

    • The 30-year linear trend for 1990 – 2020 will be 0 (+0.1/-0.1).

      • Wow. Forget cycles; you’re proposing the 30 year warming trend will fall off a cliff in the next few years! To get that would be spectacular.

        Last time there was a 30 year trend as low as 0.1 was around 1960-1990.

        I think about 0.15 to 0.2 should be in the ballpark.

      • Chris, shouldn’t it be 0.6 °C (3 x 0.2 per decade), according to IPCC?

      • Your post was talking about the “linear trend”. So was I. The IPCC suggested “about 0.2” for the linear trend. I’m going slightly on the low end of that, for a linear trend of 0.15 to 0.2 C/decade.

        You know that trend is not the same thing as “difference between two end points”? It’s not even the difference between end points divided by time difference. That varies a lot more than the trend.
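
        To see the difference concretely, a sketch on synthetic noisy data (fixed seed; the numbers are illustrative only):

```python
import numpy as np

# A 0.2 C/decade trend plus noise: compare the OLS trend with the
# naive (last - first) / elapsed-time rate the comment warns about.
rng = np.random.default_rng(2)
t = np.arange(360) / 12.0                      # 30 years, monthly, in years
y = 0.02 * t + rng.normal(0.0, 0.15, t.size)

ols_trend = np.polyfit(t, y, 1)[0] * 10.0      # C/decade
endpoint_rate = (y[-1] - y[0]) / (t[-1] - t[0]) * 10.0

# The OLS trend sits close to 0.2; the endpoint rate swings with
# whatever noise happens to land on the two end months.
```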

      • at .15C you become a lukewarmer.

      • Edim, the 50-year trends in Hadcrut3 are positive for every 50-year span you pick last century. Although it does get very close to zero in the middle.

        1900-1949 inclusive has a trend of +0.100

        It goes up to a local high of +0.102 over 1903-1952; this is actually the culmination of a steady increase going into the previous century.

        It then declines fairly steadily to a minimum of +0.003 over 1930-1979

        Amazing. What a coincidence. That’s the 50 years you picked out!

        It then increases again, to a maximum of +0.141 over the most recent 50 years.

        This is consistent with the physical factors driving climate that we know of. The major factors in the early part of the century appear to be a mix, coming out of the little ice age. The middle decline is mostly (we think) from anthropogenic aerosol pollution. Enhanced greenhouse has been a factor all along, but that really took off from around 1970 or so, and is now easily the dominant forcing. The 50 year trend will, I expect, continue to increase, as it sheds the tail of that mid century cooling window.

        The physical expectation is that the trend will continue upwards. Observations so far have confirmed the predictions of warming being made some 30 years ago. Recently we’ve been getting more specific predictions into the future, and as you keep an eye on trends you can (potentially) falsify the expectations from present physical understanding.

        Present understanding is that there is to be a persistent warming trend throughout this century; though (as has been noted consistently) this is not expected to be a steady trend. The expectation is that the main on-going trend should be apparent over 20 year windows or more; with 15 years or less showing a combination of this underlying trend with shorter term untrended variation.

        Cheers — Chris

      • at .15C you become a lukewarmer.

        Steven, what do you become at 0.13 °C/decade? That’s what I figure for the impact of rising CO2 in 1990. By 2000 AGW was up to 0.16 °C/decade. Currently I figure AGW is running at 0.2 °C/decade.

        For those who’ve just tuned in, some people define AGW to be the impact of rising CO2 brought on by our land use changes, conversion of fossil fuels to energy, and flaring of same. I’m one of those people, why shouldn’t I be? Why shouldn’t anyone be? If you have a different definition of AGW, by all means put it on the table and let’s discuss it.

        Now the Keeling curve shows atmospheric CO2 to have been increasing extremely steadily. Hence with that definition of AGW it’s impossible for the impact of rising CO2 to be all over the place. That impact has to be rising very steadily.

        Which is the case for my AGW numbers above.

        As several million people have pointed out, CO2 isn’t the only thing driving global temperature or it would be rising as smoothly as CO2.

        Well, duh.

        Anyone conflating natural fluctuations in global temperature with those attributable to rising atmospheric CO2 is living in a state of sin.

        You must separate the natural and anthropogenic contributions to global temperature. Ideally you would do so to a precision of well under a millikelvin. If you can’t come even close to that level of precision then you can’t claim to understand climate change, because climate change works with very small variations in temperature, in case you hadn’t noticed.

    • Chris

      Why should we panic now with the warming of 0.15 deg C per decade, when we had a similar warming rate from 1910 to 1940, when there was little human emission of CO2?

      http://bit.ly/9kJczm

      • We should not panic.

        We should, however, anticipate that the warming we see at present will continue, and that we still have a say in how strong that warming will be; and plan accordingly without panic.

        The reason we should have high confidence in this conclusion is not extrapolation of trends or curve fitting, but the physics of what drives temperatures and climate. This is a matter which tends to get lost in discussions such as those of Leake. Of the various “hypotheses”, only number 1 has a solid testable basis. The others invoke all kinds of unexplained cycles and shifts and so on without any suggestion of what could be driving them; and a major problem is the lack of indication of those cycles going back beyond the instrument record. That, combined with the idea that climate is insensitive and doesn’t change much in response to forcings, becomes merely oxymoronic.

      • Chris

        Breaking the record into the apparent 30-year cycles (as Girma has done) there is the 30-year warming cycle from ~1911 to ~1940 (which was “statistically indistinguishable” from the latest 30-year warming cycle from ~1971 to ~2000, according to Phil Jones), followed by the 30-year cycle of slight cooling from ~1941 to ~1970 (which occurred despite the fact that CO2 emissions were beginning to accelerate as a result of the post-WWII boom years).
        http://www.woodfortrees.org/plot/hadcrut3vgl/from:1941/to:1970/plot/hadcrut3vgl/from:1941/to:1970/trend/plot/hadcrut3vgl/from:1911/to:1940/plot/hadcrut3vgl/from:1911/to:1940/trend/plot/hadcrut3vgl/from:1971/to:2000/plot/hadcrut3vgl/from:1971/to:2000/trend/plot/hadcrut3vgl/from:2001/to:2011/plot/hadcrut3vgl/from:2001/to:2011/trend

        The warming/cooling cycles are underlaid by a slight warming trend of around 0.5 to 0.6°C per century, like a sine curve on a tilted axis.
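
        That description amounts to a simple two-term model; sketched below with a hypothetical phase (the 1880 zero-crossing is my choice for illustration, not something from the comment):

```python
import numpy as np

# "Sine curve on a tilted axis": ~0.6 C/century tilt plus a 60-year
# oscillation (30 years up, 30 down) of 0.5 C peak to trough.
years = np.arange(1850, 2012)
tilt = 0.006 * (years - 1850)                        # 0.6 C per century
cycle = 0.25 * np.sin(2 * np.pi * (years - 1880) / 60.0)
model = tilt + cycle
```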

        It is quite apparent that these cycles, as well as the latest 11 years starting in 2001 (which again show slight cooling), bear little correlation with the steadily increasing CO2 levels.

        This is Girma’s point, and it is pretty hard to invalidate based on empirical evidence, as you will have to admit.

        One can deny that the cycles exist, but that would simply be sticking one’s head in the sand in denial.

        The unanswered question remains: will the current trend of slight cooling continue for another 19 years or so to become another 30-year cooling cycle, or will it reverse to a new warming cycle?

        No one knows the answer to that question.

        Certainly not the scientists and climate models cited in IPCC AR4, who had projected that it would warm by 0.2°C per decade instead of cooling slightly, as it actually did.

        The second question (also unanswered) is: what has caused these observed ~30-year warming and cooling cycles?

        And finally, the third question could be: what has caused the underlying long-term warming trend of around 0.6°C per century?

        Max

      • Manacker, my reply was basically that we are not limited to observations of temperature. It’s physics which leads us to think that this isn’t just cycles.

        Looking at nothing at all but the temperature trends, without any consideration of what is actually causing them, would remove entirely my whole basis for expecting the 30 year trend to remain comparatively strong warming.

        I know Girma’s point was that the temperature data doesn’t (by itself) give strong evidence for a persistent ongoing trend. I agree that we need more. I think we HAVE more, and said so. I know we have more, because it’s really the physics which is more my particular interest than statistics on trends.

      • Latimer Alder

        @chris ho-stuart

        The problem with your reliance on ‘the physics’, is that the pesky climate doesn’t seem to be behaving the way your theory tells us it should. And I’m disappointed to see that your response isn’t to go back to the physics and see what you’ve missed, but merely to reiterate that it’ll all work out fine in the end.

        H’mm

        Scientific history is littered with examples of ‘comprehensive theories’ that were 90%+ correct, but with just a few little problems in some dark corners. And it is the investigation of those troublesome phenomena that can lead to interesting and new insights. Einstein’s work on the photo-electric effect and discovery of the quantised nature of radiation is a classic example.

        So colour me unconvinced that you really understand this climate system. And colour me even more unconvinced by the argument that though it is impossible to forecast the climate 5 years away, you are perfectly capable of doing so 50 or 100 years out.

        Unless and until you can give a better understanding of the recent plateau in temperatures it seems to me that you have a lot more work to do. And that ‘It’s the carbon dioxide, stupid!’, may prove to be a far too simplistic theory.

      • Chris

        Thanks for your response.

        I think you have pinpointed where our basic disagreement lies.

        You cite “physics” (rather than “physical observations”) as the basis for the postulations leading to the CAGW premise.

        By this, I suppose you are referring to greenhouse theory or climate sensitivity hypotheses used in the climate model simulations.

        These are great, but as a rational skeptic, I would like to see empirical data to validate the hypotheses (which you call “physics”).

        If the “physics” tell me one thing, but the “physical observations” are showing me something else, I’ll go with the physical observations, especially in a science that is still in its infancy, such as climate science.

        Until these hypotheses (your “physics”) are validated by empirical evidence, based on real-time physical observations or reproducible experimentation, they remain uncorroborated hypotheses.

        This is how “physics” (and all other sciences) work, Chris. Hypotheses and theories are tested against empirical evidence. If the empirical evidence shows that they can pass repeated falsification attempts, they can become “corroborated hypotheses” and eventually “reliable scientific data”. The CAGW hypothesis has not passed this test.

        Show me (and Girma) the empirical evidence to support the CAGW hypothesis, i.e. that human GHG emissions have been the primary cause for past global warming and that this represents a serious potential threat to humanity and our environment unless these emissions are curtailed dramatically.

        But don’t just tell me it’s based on the “physics”.

        Max

      • Latimer Alder

        @manacker

        Well blow me down! You’re suggesting that Chris H-S goes out and does some experimenty thingies!

        Surely you know by now that such things are strictly discouraged in climatology. Chris would soon find his career at an end.

        The Theory is Correct. The Observations are Wrong. Long live The Theory. A bas les experimentalists! The Glorious and Infallible IPCC has Written the One True Truth. Speak with One Voice And Praise The Theory. Shun and Dismiss the Well-Funded Big Oil Running Dog Denier Scum……….

      • Latimer says:

        The problem with your reliance on ‘the physics’, is that the pesky climate doesn’t seem to be behaving the way your theory tells us it should. And I’m disappointed to see that your response isn’t to go back to the physics and see what you’ve missed, but merely to reiterate that it’ll all work out fine in the end.

        Latimer, you’re just wrong there. The climate IS behaving consistently with what theory tells us. The people who say it isn’t are consistently distorting what expectations have actually been given.

        There are some glitches between theory and data here and there, sometimes resolved by better theory, sometimes by correcting data problems. That’s normal in all kinds of science, by the way. But we haven’t been looking at those problems here. The “problem” being raised HERE — a 15 year lull — simply IS NOT a conflict with theory.

        The point is — which I HAVE emphasized many times now — that the physical theories are incomplete. It’s not a solved or finished problem. They don’t, for example, adequately deal with short term variation. The short term variation is quite CONSISTENT with theory to date, but the theory is not at present able to predict it. (It may never do so beyond a narrow window, much as weather prediction is strongly limited in scope; this is where chaos shows up.) Physical modeling of many factors like cloud, or the carbon cycle, or solar cycles, continues to be unsolved, and limits the skill of predictions.

        There are a whole HEAP of open questions in climate science, and we (or rather working scientists) are going back to the physics again and again and again. What on earth makes you think they are not??

        Manacker says:

        You cite “physics” (rather than “physical observations”) as the basis for the postulations leading to the CAGW premise.

        By this, I suppose you are referring to greenhouse theory or climate sensitivity hypotheses used in the climate model simulations.

        These are great, but as a rational skeptic, I would like to see empirical data to validate the hypotheses (which you call “physics”).

        Greenhouse theory is “theory” in the same sense as relativity is “theory”. It’s really very well understood physics with ample direct empirical confirmation, and actually one of the simplest and best solved problems in climate.

        Sensitivity isn’t a “hypothesis”. The hypotheses concern values for sensitivity. The most important constraints here are actually observational. Physical theory is not able to model the Earth well enough to give a definite prediction for sensitivity. That’s why it’s invariably given as a range of possible values.

        What the physical theory gives you quite definitely is that there IS a strong persistent contribution to heating up the planet. What it doesn’t give you particularly well is how much response there will be (sensitivity) and also what other local changes around the globe occur and how short term chaotic changes take place as the whole system settles to the changing conditions.

        There are, however, strong empirical reasons for lower bounds on sensitivity, which is where you get some fun debates from those who propose — in spite of all evidence to the contrary — a low value. Major positive feedbacks contributing increased sensitivity do have a well understood physical basis, but it is not complete. There’s a whole heck of a lot more going on and so theory does not tell you a particular value.

        In discussions here, some people don’t seem to understand how sensitivity is defined; they mix up equilibrium sensitivity with transient response sensitivity, or try to read equilibrium sensitivity from a temperature trend. That’s not falsifying theory. That’s just needing to learn a lot more before you can even follow discussions. The problem is I’m engaging here with people who operate at many levels.

        I don’t expect to convince you to change your mind all at once; nor can I possibly go into a full account of all the data and theory that is being applied — even if I knew it all, which I don’t.

        What I do know a fair bit about is the radiative physics in the atmosphere; at the level of a keen student, not a researcher.
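        The distinction Chris draws here (a well-constrained forcing, an uncertain sensitivity) can be made concrete with a toy calculation. The logarithmic forcing expression F = 5.35 ln(C/C0) W/m² is the standard simplified fit; the 3 °C-per-doubling sensitivity below is an assumed illustrative value from the middle of the published range, not a measurement:

```python
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) from a CO2 change, using the
    standard simplified expression F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c / c0)

def equilibrium_warming(c, sensitivity=3.0, c0=280.0):
    """Equilibrium warming (deg C) for CO2 level c (ppm), given an
    ASSUMED equilibrium sensitivity per doubling of CO2.
    Forcing per doubling is 5.35 * ln(2), about 3.7 W/m^2."""
    return sensitivity * co2_forcing(c, c0) / (5.35 * math.log(2.0))

# A doubling (280 -> 560 ppm) with assumed sensitivity 3.0:
print(round(equilibrium_warming(560.0), 2))  # -> 3.0
```

        Changing the assumed sensitivity rescales the answer linearly, which is why the forcing is described as well understood while the warming projection is quoted as a range.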

      • Latimer Alder

        @chris ho-stuart

        ‘The people who say it isn’t are consistently distorting what expectations have actually been given.’

        Nope. 100% the wrong way about. Arse about face.

        If we have got the wrong end of the stick (and I by no means accept that we have), then the fault lies with you, not with us. If you have ‘given us the expectations’ and we haven’t received them correctly, then it is your responsibility to make sure that we have them correctly, not ours to decode your mumbo jumbo into things that we interpret to your satisfaction.

        That is the basis of sound communication…YOU need to check that the message as sent has been received correctly…

        As to ‘distorting expectations’, please show a consistent and frequent library of publications/presentations/public speeches designed for the general public where you and your colleagues have placed as much emphasis on the variability of recent temperatures as you have on the ‘inevitable’ long-term warming.

        Show me the portfolio of letters to the press where you have pointed out that stories discussing dangerous global warming should be moderated with the caveat that it may not actually warm at all for up to a generation, but that we should still be very concerned.

        Show me the transcripts of the TV appearances where you and colleagues have appeared to reassure the public that the strange weather event du jour isn’t directly caused by global warming, but is just part of natural variation.

        I’ll take a wee wager with you that you can’t do any of those things. And the reason is that they never happened. Until very recently – prompted by the article in the Sunday Times and other similar ones – the idea that ‘climate scientists have always known there will be a slowdown or stopping of temperature rise, but it’s all taken care of within our theory, so we still need to do something now’ has been a dirty little secret hidden in a dark place and rarely if ever let out for public examination. One can speculate why – not wishing to give the funding institutions an excuse to withdraw their largesse must be a strong hypothesis. And not wishing to halt an otherwise desired political ‘green’ process is another.

        But whatever the reason, you guys have NOT made it clear that these effects are likely to occur. And accusing us of deliberate misinterpretation is no way to escape your responsibility to have done so.

        It is no wonder that the Man from the Met Office writes that he believes there is a lot of work to do to regain trust in climate science. With daft arguments like this, the amount of work needed becomes ever greater.

      • Chris, the basic problem is that sensitivity is an abstract concept that has little to do with reality, and the policy issue is about reality. To take an extreme example, suppose CO2 levels double but we fall into an ice age and the temp goes down 10 degrees. Is the sensitivity minus 10? No, of course not, because other factors are at play. The point is that radiative physics are only one aspect of climate dynamics, and the other aspects may matter more as far as reality is concerned. By analogy, abstractly one can throw a potato chip as far as a baseball, but don’t bet on it. In short, the focus on radiative physics is misguided.

      • Why should we panic now with the warming of 0.15 deg C per decade, when we had similar warming rate from 1910 to 1940 when there was little human emission of CO2?

        Depends on whether you’re planning for next week, next century, or next millennium.

        Girma, when would you say was the last time Earth experienced a rise of 15 °C in a mere millennium? What proportion of the planet’s species might fail to adapt to such a sudden change?

        Historically rates of change like that have led to mass extinctions.

        Anyway we’re currently hitting closer to 20 °C per millennium, and will be at double that rate by 2060 assuming business as usual.

        You are advocating business as usual, right, Girma? I’d hate to misrepresent you on that little detail.

      • Shorter Latimer: I bear no responsibility for any errors in my opinions – it’s all your fault.

      • CHS: In discussions here, some people don’t seem to understand how sensitivity is defined; they mix up equilibrium sensitivity with transient response sensitivity, or try to read equilibrium sensitivity from a temperature trend. That’s not falsifying theory. That’s just needing to learn a lot more before you can even follow discussions. The problem is I’m engaging here with people who operate at many levels.

        How many levels are you operating at? If one, how do you define climate sensitivity? If multiple levels, how are you different from anyone else here?

        Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend” as you put it. Since I apparently need to learn a lot more before I can even follow discussions, if you could convince me you had a more reliable approach you would have my full attention!

      • Markus Fitzhenry

        “Vaughan Pratt | February 10, 2012 at 2:01 am |
        Girma, when would you say was the last time Earth experienced a rise of 15 °C in a mere millennium? What proportion of the planet’s species might fail to adapt to such a sudden change?
        You are advocating business as usual, right, Girma? I’d hate to misrepresent you on that little detail.”

        Vaughan,

        Do you think the following link to a pie chart graphic is still the business as usual, in the wider climate scientific community?

        http://eposters.agu.org/files/2011/12/Pratt_V_AGU_Poster_GC43B-092110.pdf

        It is all those bad humans’ need for fossil fuel energy that is going to exterminate them. Their bad behavior is going to make temperatures rise by 20 deg, coupled with natural spatio-temporal chaotic systems:

        1. Irradiation and radiation of surface and atmosphere.
        2. Dynamic heat distributions of oceans.
        3. A multiple pole thermodynamic atmosphere.
        4. A Gravitational velocity spinning on an uneven axis.
        5. A Sun with fluctuations of solar insolation.
        6. Planetary harmonics.
        7. Atmospheric pressure and composition.

        Yep, definitely 74.8% anthropogenic and only 25.2% natural.
        I’d hate to misrepresent you on that little detail.

      • @michael

        ‘Shorter Latimer: I bear no responsibility for any errors in my opinions – it’s all your fault’

        But we weren’t discussing my opinions – for which I, of course, bear full responsibility and am happy to do so.

        We were discussing instead this remarkable phrase from Chris Ho-Stuart.

        ‘The people who say it isn’t are consistently distorting what expectations have actually been given’

        Ever played the game ‘Chinese Whispers’? In which one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.

        Apart from being tremendous fun at children’s parties, it tells us something interesting and useful about communication in general: if you really want a message to be correctly received, you need to build in a way of checking that it has been, otherwise it will accumulate errors. They may be caused by noise in transmission, by genuine misunderstanding, by inattention, by copying errors, by misprints, by language difficulties, by cosmic rays. But whatever their cause, they are inevitable.

        So designers of communication systems pay special attention to ways of checking that the message has got through correctly. To use a relatively simple example: watch any episode of Air Crash Investigation and you will see that the pilot always reads back his ATC instructions, so that both parties know they have been received as intended. This strategy of send/receive/check is universal in good comms systems design.

        Note especially the requirement that communication must be a two-way process. It is not

        ‘I send, you receive’, but

        ‘I send, you receive, you send, I receive, we both check’

        So for CH-S to blithely state that we haven’t correctly received (and indeed have wilfully distorted) the message we have been given really highlights a number of points about climatology communication.

        Many climatologists have not grasped the two-way nature of communication at all. They are still stuck with the ‘I send, you receive’ model.

        We can see this, for example, in the website ‘Real Climate’, which is entirely dedicated to just such a one-way proposition. Its strap line ‘Climate science from climate scientists’ does not augur well for it being a two-way street. And its comments policy only reinforces that idea.

        With no corresponding check of perception, they have no way of knowing what people’s expectations will be at the end of the game. And if those are hugely inconsistent with what they thought they had given, it really is of little value to blame the recipient.

        For it is they themselves who have chosen to restrict themselves to the one-way communication model. They who wilfully and deliberately choose to ignore contrary views – dismissing any objections with derision. They who choose the megaphone, not the telephone, as their way to disseminate their ‘message’. And, as in this case, when that mechanism fails and causes them acute public discomfort, it really ill-becomes them to blame the receiver for the failure.

        To reiterate: For my opinions I am happy to take full responsibility.

        But for the fact that their message has not been received correctly by the masses, only the climatologists themselves bear the blame.

      • @Markus Yep, definitely 74.8% anthropogenic and only 25.2% natural.
        I’d hate to misrepresent you on that little detail.

        If you found an error I’d greatly appreciate your drawing it to my attention. If not then I don’t understand the point of your remark, unless you’re finding those numbers a bitter pill. If you’re interpreting the chart as standard deviation instead of variance then I would certainly agree with you that under that interpretation the chart would be wildly inaccurate.

        Your recent reasoning in a three-component analytic model of long-term climate change, to me was illogical in several respects,

        Do you mean it was wrong or that you didn’t follow it? Happy to fix either one.

        but the biggest bias was your misanthropism.

        I’ll have to pass on that one. Understanding how “but the biggest” can serve as a connective between topics whose relationship has not been made clear is above my pay grade.

        your ad-hominem attack was no more than a condition of your preconceived ideals.

        Again I’m not following. An ad hominem argument is an appeal to a negative trait that is unrelated to the argument. How is someone’s inability to accept that their reasoning is circular irrelevant when that’s what I’m complaining about?

        Being a bank robber is a negative trait, but calling someone who’s robbing a bank a bank robber is not an ad hominem attack because it’s a relevant trait.

        In my 58 years hanging out with Aristotle, I’ve never seen a better protagonist with the fallacy than you.

        I take it you’re not a fan of constructive logic.

  64. Just think how much more we will all know in 10 years…and then again in 20 about the relative strength of solar and anthropogenic forcing. What better time than a Maunder minimum #2 to see first hand if attribution studies have even come close to identifying the underlying forcing from the increase in greenhouse gases.

    The last decade has been an interesting one. True, no continued increase in temps, though some pretty warm years. Will a new instrument-record warm year or two occur in Solar Cycle 24, perhaps near the max, especially if even a moderate El Niño sets up? How will complete skeptics of any AGW spin such an occurrence? And what of the long-term, multidecadal decline in Arctic sea ice? How might a sleepy sun for several decades impact that? Stay tuned…for the most exciting period in the history of climate study is about to begin!

    • It seems that some of us, at least, will have learned something about climate and climate change. I hope that having a little knowledge of these issues will make us more aware of the potential impacts our actions, and those of other external influences, could have on climate in particular and on our environment generally.

    • R. Gates

      Wisely stated: “Just think how much more we will all know in 10 years…and then again in 20 about the relative strength of solar and anthropogenic forcing.”

      This is precisely the point I was trying to make to Chris.

      We simply DO NOT KNOW whether it will continue to cool slightly as it has over the past 11 years, or start warming again.

      We don’t even know WHY it has warmed and cooled slightly in roughly 30-year half-cycles over the long-term record.

      We think we know (but aren’t sure) why there was an underlying warming trend of around 0.6 degC per century – but IF it continues to cool slightly over the next several years despite ever-increasing CO2 levels, we will have to revise our theory on that, as well.

      Yes. It is an exciting time for climate science – actually much more so than the much-ballyhooed IPCC late 20th century “poster period”, because we will get a real-life test of the CAGW hypothesis and assumptions, which are principally based upon that period.

      Max

    • R gates

      Your visits here are all too infrequent, guess you’re still trying to convert them all over at wuwt :)

      You say no continued increase in temperatures over the last decade. This is a refreshing admission from someone primarily on the agw side.

      We have had people coming up with all sorts of graphs to prove temperature is increasing, decreasing, or static.

        No snark here, but which graph would you cite that has a degree of authority, i.e. one not manufactured on Wood for Trees using dubious end points and data.

      Tonyb

  65. Just a remark to the article by Jonathan Leake.

    If one picks 1998 as the “start date” for the current cycle of “lack of warming”, one arrives at an essentially flat trend (Met Office tells us: “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period” ).

    If one takes only the past 10 years (2002-2011) one arrives at a more significant cooling rate of 0.1C per decade.

    There are good arguments for NOT starting in 1998 (a record high, strong El Nino year).

    Starting in 2002 (or 2001) gives a more pronounced trend, but only over a shorter time period.

    Cherry-picking?

    Max

    PS The fact of the matter is it is cooling slightly, as it did from around 1941 to around 1970 (not warming imperceptibly, as Met Office would have us believe).
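    The sensitivity of short trends to the start year is easy to demonstrate. The annual anomalies below are illustrative stand-ins, not actual dataset values, shaped like the period in question: a 1998 El Niño spike followed by a plateau. An ordinary least-squares fit to the same series changes sign depending on where you start:

```python
import numpy as np

# Hypothetical annual anomalies (deg C) for 1997-2011, loosely shaped
# like the period under discussion: a 1998 spike, then a plateau.
years = np.arange(1997, 2012)
anoms = np.array([0.36, 0.52, 0.30, 0.28, 0.40, 0.46, 0.46, 0.43,
                  0.48, 0.42, 0.40, 0.31, 0.44, 0.47, 0.34])

def ols_trend(start):
    """OLS trend (deg C per decade) from `start` through 2011."""
    mask = years >= start
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]
    return 10.0 * slope

# Same series, three start years, three different-looking "trends".
for start in (1997, 1998, 2002):
    print(start, round(ols_trend(start), 3))
```

    On this made-up series the 1997 start gives a small positive trend, the 2002 start a negative one, which is the whole cherry-picking argument in miniature.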

    • You are ignoring the fact that short term trends change quite rapidly.

      Using HadCrut3, for example, the 10 year trends have the following sequence.
      0.29
      0.25
      0.28
      0.35
      0.33
      0.29
      0.24
      0.24
      0.09
      0.04
      0.07
      0.03
      -0.03
      -0.10

      That’s going from 1989-1999 inclusive at the top, to 2004-2011 inclusive, at the bottom. You might like to play with the spreadsheet I’ve made available at SkyDragon to do these calculations quickly.

      Swings around a bit, doesn’t it? That’s why we DON’T use the 10 year trend as you have done. That’s why it’s just silly to say Met Office is trying to make us believe what isn’t “actually” the case. The Met Office, after all, specified the time span. What they actually said (via Leake’s article) is this:

      “Our records for the past 15 years suggest the world has warmed by about 0.051C over that period,”

      Why would you pick a short term, which is even LESS reliable as an indicator of what’s coming, as what is “actually” happening now? I call BS on that. (BS being “Bad Science”, of course. :-)

      What’s “actually” happening isn’t a trend over any window. Next year might be warmer or cooler; the changes “now” aren’t given by ANY trend. The trend over a window is a diagnostic, used for testing hypotheses. What is ACTUALLY happening now is that the atmospheric greenhouse effect is getting stronger; and at the same time the circulations of water and air and heat and cloud and so on around the globe are going on their merry chaotic way, meaning that we are going to have unpredictable short term variations while there is a continual flow of heat into the ocean from the energy imbalance between what is being emitted and what is being absorbed.

      It’s the physics that matters. The behaviour of temperature, along with a lot of other observations, is all backing up the general picture of the planet shifting climate to get into balance with the new atmospheric composition. It’s not just extrapolating trends. It’s classic conventional science, digging into material causes, and forming and testing hypotheses.
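      A sequence of moving 10-year trends like the one above takes only a few lines to compute. The anomaly series here is a made-up illustration rather than HadCrut3 itself; the point is just how much a short-window trend swings even when the underlying series is fixed:

```python
import numpy as np

# Hypothetical annual global anomalies (deg C), 1989-2011, standing in
# for a HadCrut3-like series; real values would come from the dataset.
years = np.arange(1989, 2012)
anoms = np.array([0.10, 0.25, 0.20, 0.06, 0.11, 0.17, 0.28, 0.14, 0.36,
                  0.52, 0.30, 0.28, 0.40, 0.46, 0.46, 0.43, 0.48, 0.42,
                  0.40, 0.31, 0.44, 0.47, 0.34])

def rolling_trends(window=10):
    """OLS trend (deg C per decade) over every `window`-year span,
    labelled by the span's start year."""
    out = []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window], anoms[i:i + window], 1)[0]
        out.append((int(years[i]), round(10.0 * slope, 2)))
    return out

for start, trend in rolling_trends():
    print(f"{start}-{start + 9}: {trend:+.2f}")
```

      Lengthening the window damps the swings, which is the argument for preferring 15- or 20-year trends over 10-year ones.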

      • Chris

        Again, you cite the “physics”: It’s the physics that matters (rather than the “physical observations”).

        See my earlier post for the reasons why its the “physical observations” that matter, because they provide the empirical evidence for the “physics”.

        Max

      • Chris

        Your 10-year trend analysis confirms one thing.

        There has been a shift in the decadal rates of temperature change since the decade starting in 1989, from one of strong warming to one of slight warming to one of slight cooling.

        Will this trend continue?

        Who knows?

        At any rate, IPCC missed it in their projection of continued strong decadal warming.

        Max

      • Well, you excuse the models from explaining short-term variance and trends. That leaves long-term. Which, of course, can’t be validated till a long term has passed. So we’ll just have to trust the models are right!

        Nice work if you can get it.

        But I dooone thang sew. Trust in the models seems to be a VERY expensive wager.

  66. Chris

    You cannot justify cherry-picking 1998 as the “start date of a 15-year trend showing imperceptible warming” (as Met Office did) versus cherry-picking the most recent 10-year period starting in 2002, which shows a statistically significant (if shorter) cooling trend.

    Let me give you my opinion as to WHY Met Office cherry-picked 1998 (instead of 2002): so they could still claim a warming trend, even a statistically insignificant one, rather than (oh horrors!) a cooling trend, and then (wink-wink) remind us that the trend was started in a record high El Niño year.

    The IPCC projection of +0.2C per decade warming looks much worse when compared to actual -0.1C per decade cooling than it does when compared to +0.03C per decade warming – right? [Error is only 0.17C rather than 0.3C per decade.]

    And even that +0.03C per decade warming sounds a bit better when expressed as +0.05C warming over a 15-year period – right? [Casual observers might conclude that the IPCC error was only 0.15C per decade.]

    Chris, that is what cherry-picking and word-smithing is all about, and Met Office as well as IPCC are experts at both.

    Max

    • You cannot justify cherry-picking 1998 as the “start date of a 15-year trend showing imperceptible warming” (as Met Office did) versus cherry-picking the most recent 10-year period starting in 2002, which shows a statistically significant (if shorter) cooling trend.

      They DIDN’T cherry pick anything. They were asked about the recent trend over 15 years, and gave the answer.

      Good heavens man! That’s about as unfair an accusation as you could possibly give! They didn’t pick the window length at all.

      The Met office DOES NOT pick any particular window as being the “actual trend”.

      Careful studies of data, such as that by Santer cited previously, attempt to give a scale to the window length where you start to see the underlying trend. That’s not cherry picking either; that’s making a testable inference based on looking at all the data; one that will be confirmed or falsified as time goes by.

  67. Max and Chris

    At 3.42 above I suggested that R Gates post an authoritative graph as it can’t be cooling AND warming. It is doing one or the other or is static.

    It can be expressed over a 15 or 10 year period, so would either of you two (or anyone else) care to oblige?
    tonyb

    • Climatereason, the spreadsheet I have provided at the thread in “SkyDragon” (Recent trends in global temperature) allows you fairly quickly to produce plots which give trends for four well known global datasets (GISS, NCDC, HadCrut3, UAH) plus one land only dataset (BEST). It allows you to enter a window length (in months) and the plot will then show the trend over every possible window of that length for each of the datasets. You can see how the trends increase and decrease with time.

      Is this “authoritative”? Science isn’t actually about “authoritative” answers. It’s when you get a consilience of many lines of evidence that you begin to think science has a handle on something.

      Anyhow, those four datasets are all pretty close. There are differences and I know a bit about WHY there are differences, but that is not “authority”. That’s part of the messy business of science and replication and testing and falsification and so on.

      But in general I think we can have confidence that the four together give a fair picture of how temperatures are changing over time. (It’s worth understanding what is actually being measured here — know how an “anomaly” is defined and calculated. Another topic.)

      I’ve given plots over there which show trends from moving windows of 15 and 20 years. You can easily generate others with the spreadsheet. The sources of data are given with links. Here’s a direct link, for example, to the image of a plot for trends with a 20 year window. linky. The vertical scale is the trend value, the horizontal scale is time, and each plot point represents the trend for the 20 year window centered at the given time.

      For example… the purple line is for HadCrut3. It shows a local high of 0.237 C/decade at the time 1994.333. This means that the 20 year window from April 1984 through to March 2004 inclusive shows a trend of +0.237

      Since then, the windows have declined. The most recent 20 year window is Jan 1992 through Dec 2011, and that has a trend of +0.155

      The plot tracks how the trend over the window changes and the window moves over the last century to now.

      I don’t give it as “authority”. If there are bugs, I want to know. If anyone wants to repeat the calculation, I give links to the data I used. But I do commend this as my best effort, which I think is correct, and which people can look at or use.

      The SkyDragon thread has more comment, links and the spreadsheet itself.

      • The four datasets are not pretty close, nor are they of equal credibility. I like UAH because it is a measuring instrument, not a complex statistical model. It shows no warming from its beginning (1978) until 1997. It shows no warming from 2001 until now. But the second flat line is higher than the first. The step up occurred during the big ENSO.

        There is no trend here, just a step function. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets disagree, of course. The point is that there is no simple evidence such as you claim. I personally think the UAH data is sufficient to falsify AGW. But in no case is it simple.
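        The “step function versus trend” question can be framed as model comparison: fit both and compare residuals. The sketch below uses an idealized, noise-free step (hypothetical numbers, not UAH data), where the step model wins outright; on real, noisy data the two fits are often nearly indistinguishable, which is why this argument persists:

```python
import numpy as np

# Synthetic series: two flat segments, standing in for the
# "two flat lines with a step" reading of the satellite record.
years = np.arange(1979, 2012)
temps = np.where(years < 1998, 0.0, 0.25)  # hypothetical 0.25 C step at 1998

# Model 1: a single linear trend, fitted by OLS.
slope, intercept = np.polyfit(years, temps, 1)
linear_sse = np.sum((slope * years + intercept - temps) ** 2)

# Model 2: a step at a chosen break year (a mean on each side).
def step_sse(break_year):
    before, after = temps[years < break_year], temps[years >= break_year]
    return np.sum((before - before.mean()) ** 2) + np.sum((after - after.mean()) ** 2)

print(linear_sse, step_sse(1998))
```

        On this idealized series the step fit has zero residual while the line does not; with realistic noise the residual gap shrinks, so the data alone rarely settle the question.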

      • There is no trend here, just a step function. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets disagree, of course.

        If it will set your mind at ease, David, the other datasets actually agree. For example HADCRUT3 is just a step function whose steps average 27 years in length. Those steps are even longer on average than the ones in the UAH dataset. No physical trend whatsoever. No evidence of GHG warming whatsoever. The other datasets don’t disagree, they agree. You’re absolutely right about climate being flat.

        Incidentally the six HADCRUT steps show a net decline of 0.036+0+0.012+0.01+0.03+0.067 = 0.155 °C when you add them up. How does that compare to the net decline of the UAH steps?

        I personally think the UAH data is sufficient to falsify AGW.

        As David has made clear on this and many occasions, his standards as to what it will take to convince him that AGW is false are very high. I personally think that for most skeptics a cold day in July is sufficient to falsify AGW. Imagine David’s uncertainty before the UAH data became available.

  68. Chris Ho-Stuart

    I know Girma’s point was that the temperature data doesn’t (by itself) give strong evidence for a persistent ongoing trend. I agree that we need more.

    Thank you.

    In my climate debate against the AGW side, you are the first one to admit the above. Hat tip to you Chris.

    Chris, if “we need more” before dotting our i’s and crossing our t’s, why do we, the educated class, scare billions of the world’s uninformed?

    Chris, have you looked at a paper ( http://bit.ly/nfQr92 ) co-authored by Mann that describes the oscillation in the global mean temperature to be due to Thermohaline circulation?

    • No, I had not seen that paper. It looks interesting, and the general notion makes good sense. That’s not a judgement on the paper or the work, which I haven’t read. I’ve seen similar kinds of work on shorter period changes associated with the Pacific Southern Oscillation.

    • PS. Don’t forget the next sentence after “I agree that we need more.” in the paragraph you partially quote.

      The next sentence is “I think we HAVE more, and said so.”

      It is the combination of all lines of evidence and tested theory which is the basis for the educated folks (that is, folks educated in climate science in particular) giving useful information on that technical subject, concerning what we do actually know about climate and what it’s doing.

      Whether it is “scary” or not is beside the point. The aim is to give what information we have. Which does indeed confirm, about as well as science confirms anything, that the planet is heating up primarily because of human-caused changes to the atmosphere. That’s not just a guess. It’s the overwhelming conclusion supported using all available evidence and indicated by best available physical theory.

      How much it will heat up and what attendant patterns of change will be seen around the globe is not so definite. That’s important information needed to plan for the future, and that’s being worked on. Conclusions are being drawn, though generally they are more tentative, as befits the more limited level of scientific support for such conclusions.

      • Chris

        According to Mann himself (http://bit.ly/nfQr92 , Figure 4) , expect slight cooling until 2030.

        IPCC’s 0.2 deg C per decade warming is being falsified.

      • Girma says:

        According to Mann himself (http://bit.ly/nfQr92 , Figure 4) , expect slight cooling until 2030.

        IPCC’s 0.2 deg C per decade warming is being falsified.

        That’s not correct. You’ve read the paper wrong.

        Look at the figure again. It is NOT showing temperature, but the “THC anomaly”. That is, according to Mann et al, the THC will be contributing a small cooling effect. Actually, it should be “Knight et al”; Knight is the first author.

        This is a proposal for part of the contribution to natural variability above and below the main trend… but an unusually long period contribution, which is indeed most interesting.

        What the paper ACTUALLY says of the overall temperature trend is seen in the conclusion. Here are the last two sentences of the paper:

        This natural reduction would accelerate anticipated anthropogenic THC weakening, and the associated AMO change would partially offset expected Northern Hemisphere warming. This effect needs to be taken into account in producing more realistic predictions of future climate change.

        That “partially offset” means that Knight et al are proposing that the main warming trend is greater than the quasi-periodic THC/AMO contribution to global temperature. Hence the paper is not predicting cooling, but that this effect will mask some of the warming, leading to a reduction in the warming trend over that scale.

        This fits pretty well with what I had suggested earlier, where I anticipated an upcoming 20 year trend of 0.15 to 0.20 C/decade… the low end of IPCC projections. But my guess was simply based on extrapolation, not on a particular physical theory. I don’t use that to prejudge Knight et al’s idea, even though it fits pretty well with my existing perspective. The paper will need to stand or fall on its own merits.

        Knight et al are not proposing an alternative to the main drivers of global temperature. They are proposing a specific factor contributing to secondary and untrended variation, which (if correct) might allow for better mid-term forecasts on the scale of several decades.

        It doesn’t falsify the IPCC projections, for two reasons.
        (1) First, it’s not well confirmed. It’s a proposal, not a refutation of all alternatives.
        (2) Second, it’s not really an alternative anyway. It would, if it holds up, give a constraint on the mid-term 20-year projection, to let it be nailed down a bit more tightly, towards the lower end of expectations. Instead of “about” 0.2, it would be “a bit under” 0.2.

        When I did the same thing above, I got called a “lukewarmer”, which gave me a chuckle. :-)

  69. Chris, question to you!

    Here is the global mean temperature data => http://bit.ly/Aei4Nd

    Why do most of the global mean temperature peaks lie on a straight line?

    Why do most of the global mean temperature valleys lie on a straight line?

    Why are these two global mean temperature boundary lines parallel?

    Why is the slope of these boundary lines equal to the trend for the whole data from 1880 to 2010?

    • Why do most of the global mean temperature peaks lie on a straight line?

      Why do most of the global mean temperature valleys lie on a straight line?

      They don’t, as far as I can see.

      If you think you can see some kind of pattern by eyeballing a graph, the proper thing is to get some kind of sensible significance test. That’s not easy; you should really check with a professional statistician. I’m not one of those. Sorry.

      Why are these two global mean temperature boundary lines parallel?

      Because that’s how you defined them; they are the same line with different offsets.

      If you want to make a non-trivial comparison, you need to actually calculate two lines independently, and then see if they are parallel. But what definition would you use?

      As it stands, note that a couple of outlier peaks sit well above your upper line, while the lower one has nothing comparable below it.

      Why is the slope of these boundary lines equal to the trend for the whole data from 1880 to 2010?

      As before… they are the same line, just with different offsets. It’s because that’s what you’ve chosen to plot.

      This is not a good place for these kinds of questions; this is really more about learning a bit of simple statistics.

      • I have to say, I agree with Chris here.

      • Chris

        The global mean temperature (GMT) data shows a single pattern of a warming trend of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years => http://bit.ly/Aei4Nd

        This means that during the cooling cycle, the magnitude of the cooling every 30 years is

        –0.5 + 0.06*3 = -0.32 deg C

        During the warming cycle, the magnitude of the warming every 30 years is

        +0.5 + 0.06*3 = +0.68 deg C

        Starting from a GMT value of –0.27 for 1880, we can calculate approximate values for the other peaks and valleys as follows:

        GMT for 1910s = -0.27 – 0.32 = -0.59 deg C

        GMT for 1940s = -0.59 + 0.68 = 0.09 deg C

        GMT for 1970s = 0.09 – 0.32 = -0.23 deg C

        GMT for 2000s = -0.23 + 0.68 = 0.45 deg C

        These values are very close to the observed values. Is this just coincidence?

        Let us make predictions:

        GMT for 2030s = 0.45 – 0.32 = 0.13 deg C

        If the observed temperature approximately matches this value, this will demonstrate the variation in climate is natural.
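        The arithmetic in the comment above is easy to check mechanically. A short Python sketch follows; note that the 0.06 deg C/decade trend and the 0.5 deg C half-cycle swing are the commenter’s assumed inputs, not established values:

        ```python
        # Check the arithmetic of the trend-plus-oscillation model proposed above.
        # Assumptions (the commenter's, not established values): a linear trend of
        # 0.06 deg C/decade plus an oscillation swinging 0.5 deg C every 30 years.
        TREND = 0.06                     # deg C per decade
        SWING = 0.5                      # deg C per 30-year half-cycle
        STEP_COOL = -SWING + TREND * 3   # net change over a cooling half-cycle
        STEP_WARM = +SWING + TREND * 3   # net change over a warming half-cycle

        gmt = -0.27  # starting anomaly for 1880 (deg C)
        values = {}
        for decade, step in [("1910s", STEP_COOL), ("1940s", STEP_WARM),
                             ("1970s", STEP_COOL), ("2000s", STEP_WARM),
                             ("2030s", STEP_COOL)]:
            gmt = round(gmt + step, 2)
            values[decade] = gmt

        print(values)
        # {'1910s': -0.59, '1940s': 0.09, '1970s': -0.23, '2000s': 0.45, '2030s': 0.13}
        ```

        Reproducing the quoted numbers only confirms the addition, of course, not the model behind it.
        
        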

      • Chris Ho-Stuart: This is not a good place for these kinds of questions; this is really more about learning a bit of simple statistics.

        This is not a case for simple statistics. Descriptively, Girma is correct, but needs to use “quantile regression”.
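        Quantile regression fits the upper and lower “boundary” lines independently, each by minimizing the pinball (check) loss, so the lines can actually be tested for parallelism instead of being parallel by construction. A stdlib-only sketch on synthetic data follows; the series, slope grid, and quantiles are all illustrative, and in practice one would use a dedicated tool such as statsmodels’ QuantReg or R’s quantreg:

        ```python
        import random

        def pinball(resid, q):
            """Pinball (check) loss for quantile q."""
            return sum(q * r if r >= 0 else (q - 1) * r for r in resid)

        def quantile_line(xs, ys, q, slopes):
            """Fit y = a + b*x at quantile q. For a fixed slope b the optimal
            intercept is the q-th quantile of (y - b*x); grid-search the slope."""
            best = None
            for b in slopes:
                shifted = sorted(y - b * x for x, y in zip(xs, ys))
                a = shifted[min(int(q * len(shifted)), len(shifted) - 1)]
                loss = pinball([y - b * x - a for x, y in zip(xs, ys)], q)
                if best is None or loss < best[0]:
                    best = (loss, b, a)
            return best[1], best[2]

        # Synthetic anomaly series: known trend of 0.006 deg C/yr plus noise.
        random.seed(0)
        xs = list(range(131))  # years since 1880
        ys = [0.006 * x - 0.3 + random.gauss(0, 0.1) for x in xs]

        grid = [i / 10000 for i in range(0, 121)]  # candidate slopes 0 .. 0.012
        b_hi, a_hi = quantile_line(xs, ys, 0.90, grid)
        b_lo, a_lo = quantile_line(xs, ys, 0.10, grid)
        print(f"upper slope {b_hi:.4f}, lower slope {b_lo:.4f} deg C/yr")
        ```

        Only if the two independently fitted slopes come out close does “the boundaries are parallel” carry any information beyond the single OLS fit Chris describes.
        
        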

  70. RealClimate has an article today updating model-data comparisons,

    http://www.realclimate.org/index.php/archives/2012/02/2011-updates-to-model-data-comparisons/

    The first figure shows that the observations, including the last 15 years fall well within the uncertainty of the IPCC projections for the “business as usual” scenario A1B.

    • Any agreement between observations and IPCC projections (GHG modelled) is pure coincidence. This will be demonstrated in the next decade(s).

      • Prof. Curry stated “Yes, but the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4”. The RealClimate analysis demonstrates unequivocally that this statement is factually incorrect. This is because Prof. Curry failed to consider the uncertainty around the expected trend. This may not have been stated explicitly in the summary for policymakers, but perhaps that is because it was a summary for policy makers.

        Blustering about what the observations may or may not show in the next decades does not change the fact one iota that Prof. Curry was simply wrong about this.

      • Policymakers are allowed to look @ thermometers, to settle their uncertainty about who was wrong.
        ===============

      • Dikran –

        Prof. Curry stated “Yes, but the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4″. The RealClimate analysis demonstrates unequivocally that this statement is factually incorrect

        I tried asking Judith a related question.

        http://judithcurry.com/2012/02/07/trends-change-points-hypotheses/#comment-166946

        For some reason she chose not to respond. I assumed it was because I wasn’t sophisticated or intelligent enough to understand that the answer to my question was obvious.

        Any chance that you might speak to the question of whether 15 years, or even 30 years of “pauses” would be “expected,” albeit rarely?

      • Pick a number, any number will do. You pays your money and you takes your chances.
        =============

      • Joshua: I assumed it was because I wasn’t sophisticated nor intelligent enough to understand that the answer to my question was obvious.

        Not quite. It’s because, whatever your actual sophistication and intelligence, you write stupidly on purpose. As you wrote in another post, you try to prove people wrong using the technique of “Socratic Dialogues”, which in this venue are pointless.

    • Overall, given the latest set of data points, we can conclude (once again) that global warming continues.

      Did not the IPCC predict that temperatures would be above those for the case in which emissions were held at the 2000 level?

      http://bit.ly/wVWllY

      • Girma, you’ve linked to this picture a number of times now.

        It is a modification of the ipcc figure, the original of which is seen here:
        Figure TS.26, AR4.

        The original figure is a comparison of predictions with data.

        Your figure is a modification. You’ve somehow REMOVED the data from the original figure, and added in your own version of the data.

        That’s not honest.

        It’s also technically incompetent on two levels.

        First, the projections are of central tendency, which means that they should be compared with smoothed data, filtering out short term variations. That is indeed what the original figure uses. You’ve removed that, and replaced it with unsmoothed data. Note that the original still shows the unsmoothed data points with black dots, and the smooth with a black line. That’s the major problem.

        Second, the modified data is not quite aligned right. It seems to be shifted down a little.

        The correct and more honest thing to do would have been the following.

        KEEP ALL the original figure, including the observational data already supplied. Editing the image to remove parts of it, without even saying that’s what you’ve done, is reprehensible.

        Extend with additional data available since publication, making sure it is correctly aligned with the existing plot, and identified clearly as added to the original. Purple is fine.

        Add in a new longer smooth data line, using “decadal averages” as done in the original figure. That’s what you should compare, not raw data.

        Do all this, and you’ll find (to my own total lack of surprise) that there’s no falsification at all.

      • Chris

        KEEP ALL the original figure, including the observational data already supplied. Editing the image to remove parts of it, without even saying that’s what you’ve done, is reprehensible.

        Before lashing out, please know that I have not drawn that graph.

        What matters is what happened after publication.

        The main thing is that the IPCC projected a warming of 0.1 deg C per decade if CO2 emissions were held constant at the 2000 level.

        However, the globe is on a cooling trend right now => http://bit.ly/nz6PFx, and there is no restriction on CO2 emissions.

        This clearly falsifies IPCC projections.

      • Since the IPCC has not ever predicted 10 year trends, which is how you get “cooling”, you are, again, flatly wrong. The IPCC expectations are for a warming trend that shows up over longer time scales. On short scales, trends are expected to be substantially above and below the persistent long term trend. The IPCC expectation is for a trend of about 0.2 C/decade. That trend should be sought over windows of 20 years or more. The “about” in this case means something from 0.15 to 0.3

        I personally would bet on the trend for 2000-2020 to be a bit under 0.2, which is still in line with the given uncertainty levels.

        You are responsible for choosing to cite that image, no matter what clown produced it. The person who produced it doesn’t actually know what is being predicted, or more likely (given that the original smooth of observations was deliberately removed) has deliberately obscured the matter. If you didn’t produce the image, then it would seem you’ve been sucked in badly. It most definitely does NOT show any falsification.

        To the extent you think it does, you are merely refusing to use what is actually explicit in IPCC expectations. It is most certainly not, and never has been, trends over 10 years!
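        The claim that 10-year trends scatter widely around a persistent 0.2 C/decade trend, while 20-year windows pin it down, can be illustrated with a toy simulation. White noise stands in for real interannual variability here, which understates the true spread, and all parameter values are illustrative:

        ```python
        import random

        random.seed(42)

        def ols_slope(xs, ys):
            """Closed-form ordinary least-squares slope."""
            n = len(xs)
            mx = sum(xs) / n
            my = sum(ys) / n
            return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

        TREND = 0.02   # deg C per year, i.e. 0.2 deg C per decade
        SIGMA = 0.1    # interannual noise in deg C (illustrative; ignores autocorrelation)

        def window_trends(window, n_sims=2000):
            """Fitted decadal trends from many simulated windows of a given length."""
            out = []
            for _ in range(n_sims):
                ys = [TREND * t + random.gauss(0, SIGMA) for t in range(window)]
                out.append(ols_slope(range(window), ys) * 10)  # convert to per decade
            return sorted(out)

        results = {w: window_trends(w) for w in (10, 20)}
        for w, trends in results.items():
            lo, hi = trends[100], trends[-100]  # ~5th and ~95th percentiles of 2000 sims
            print(f"{w}-yr windows: 90% of fitted trends fall in [{lo:.2f}, {hi:.2f}] deg C/decade")
        ```

        Even in this optimistic white-noise world, the 10-year windows occasionally show flat or negative trends despite the fixed underlying 0.2 C/decade rise, while the 20-year windows cluster much more tightly around it.
        
        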

    • Yep, still doing a fine job tracking scenario C, which should put 2100 temperatures under 2C.
      Adjusting Arrhenius’ latitudinal predictions seems to show a non-linear sensitivity.
      http://i122.photobucket.com/albums/o252/captdallas2/sensitivitybylatitudeestimate.png

      Of course, that is just a rough estimate, but interesting.

      • Markus Fitzhenry

        Vaughan, The gentleman asked, he did not state;

        1. Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?

        Have you got some sort of circular reasoning problem, when your ad hominem attack was no more than a condition of your preconceived ideals?

        “On both sides of the climate debate, the test that people seem to be applying as to whether their reasoning is logical is whether it leads to conclusions they already held.”

        Your recent reasoning in a three-component analytic model of long-term climate change, to me, was illogical in several respects, but the biggest bias was your misanthropism.

        “In my line of work, which for the last 35 years has been logic, this is known as circular reasoning.”

        “My experience during that period in persuading people that they are using circular reasoning is that it is utterly impossible to do so. People have beliefs, and they simply refuse to attempt to imagine the opposite. Which is what you have to do in order to debug your reasoning.”

        Argumentum ad populum has been your argument here. In my 58 years hanging out with Aristotle, I’ve never seen a better protagonist with the fallacy than you.

      • @markus fitzhenry

        Here are the exact words used in the IPCC report

        ‘For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected’ (AR4 WG1 SPM p12)

        There is no discussion of uncertainties. No ‘if’s or ‘but’s. The whole para is highlighted to stand out from the rest of the text. And it appears right slap bang under the heading ‘Projections of Future Changes in Climate’.

        This was their take home message to Bush and Blair and Merkel and Putin and other world leaders. To the press and the rest of the media. And to the interested general public. This was written in plain(ish) language for a lay audience to understand. There can be no ambiguity. This is the prediction.

      • Markus Fitzhenry

        Latimer,

        “”This was their take home message to Bush and Blair and Merkel and Putin and other world leaders. To the press and the rest of the media. And to the interested general public.””

        Didn’t scare me.

      • Latimer – what do you think the word ‘about’ means in the passage you have quoted?

      • Markus Fitzhenry

        Well it’s not like they said it was going to be zero. Louise?

      • @louise

        ‘Latimer – what do you think the word ‘about’ means in the passage you have quoted?’

        I think it means exactly what its common usage means.

        No use trying to wriggle. The statement is unequivocal. It was deliberately meant to be easily read and easily understood. The Summary for Policy Makers is not a legal document where every word can be parsed and analysed. It is there to influence policymakers.

        And it clearly states that they expect 0.2C warming per decade. Not 0.1C or 0.0C.

      • about – around, approximately etc etc.

        It’s a pretty weak argument to be trying to suggest that the SPM stands alone from the AR4.

      • @michael

        ‘It’s a pretty weak argument to be trying to suggest that the SPM stands alone from the AR4’

        ???

        Or is English not your first or second language and you have difficulty with the word ‘Summary’?

      • Lati,

        Brilliant – you’ve finally got it!

        Yes, it’s utterly stupid to declare what the AR4 says by only invoking the summary and ignoring the details in the technical section.

        And an English hint for you – common usage of ‘about’ is not ‘exactly’.

      • May I suggest that the source of confusion here is not over “about”, but whether a rate of “about 0.2 per decade” can be read as “about 0.2 for every decade”.

        The “0.2 per decade” is simply the magnitude of trend; the units of degrees per decade. It’s not an indication that every decade will see about that rise. The suggestion is that over TWO decades, you will see a rate of increase of about that magnitude.

        That prediction fails if the trend over the two decades is substantially different from 0.2.

        The prediction does not say you can look at the trend over one decade and see that trend. I won’t quibble over whether it could be worded more clearly; but the meaning is nevertheless the same as it has been for every IPCC report ever. It’s always been recognized that there’s lots of short term variation over a decade or so. It’s always been explicit that the rise is not expected to be steady.

      • Chris,

        Yes, that is absolutely clear…..for anyone who reads the document in good faith.

        Those who find it convenient to misunderstand will continue to do so.

      • @markus fitzhenry

        ‘Didn’t scare me’

        Nor me. Not even a dampness in the knicker area. No palpitations whatsoever. BP steady – both systolic and diastolic.

        And absolutely not even the stirring of the idea that I must unbutton my wallet, throw twenties around like confetti and run around shouting

        ‘Oh my God, we must Do Something. Here, spend lots and lots of money, however futile it is’

      • @chris

        If they wanted us to read it the way you describe, they were perfectly at liberty to have written it with your convoluted interpretation in mind.

        They could have written

        ‘we expect that the temperature will rise by about 1.0C in 50 years, although warming may be uneven’ which would have covered the same point – albeit with slightly different emphasis..

        They didn’t. They deliberately chose to use the decadal time frame, not the century or the annual or any of the other scales that they could have chosen to illustrate their point.

        So I don’t think it is any sort of a reasonable interpretation to believe that this was supposed to be taken as ‘0.2C in some decades (and probably not this one coming up now)’.

      • @chris h-s

        and partly @ michael

        ‘It’s always been recognized that there’s lots of short term variation over a decade or so. It’s always been explicit that the rise is not expected to be steady’

        Chris, this is the Summary for Policy Makers. Guess who it is aimed at? Yep, Policy Makers. The clue is in the name.

        And who might the policy makers be? Senior politicos and their staffs primarily. Why was this summary written, rather than just dump AR4 in its entirety on their desk and say ‘there you go matey..its all in there’? Because these guys are busy people, have limited time to read things, probably have a zillion other important things to worry about and may have only limited interest in the topic. So, like an ad on the telly, it has to be short and sweet and cover the main points of a topic that the recipient may have only limited (or no) background knowledge about.

        It may well be that in geeksville, arizona ‘it has always been recognised’ that there will be short term variation. For those whose careers are funded just to obsess about the last jot and tittle of every word in the report this may be common currency (though historical records of this being so seem to be hard to come by).

        But this is not the case for the occupant of the White House, or the Elysee Palace or 10 Downing Street or wherever it may be. The ones at whom the SPM is aimed. They likely know little and care less about the historical conventions of the IPCC and its implicit caveats. They just have a document on their desk called ‘Summary for Policy Makers’, and quite reasonably expect that it should give them a quick read and a decent understanding of the key points of the topic in hand. Probably enough for them to incorporate something about it in a speech and answer 1st level questions in Parliament or a press conference without making complete arses of themselves.

        And this document is made available to the general public as well (a good thing to do). So the interested layman might take exactly the same approach. That if he reads this he can hold his own with the regulars of the Dog and Duck or over coffee at work on the topic.

        I’ve written before about how misguided it is for you to blame the recipient for getting your message ‘wrong’. Here, it seems you compound the mistake bigtime.

        You do your cause no good just by leaping up and down and shouting ‘you’re all too stupid to understand what we tell you, scum’. Especially when you don’t seem to have taken the slightest trouble to move beyond megaphone communication.

        PS – I’m sure you know that implicit assumptions aren’t worth the paper they’re not written on.

      • Latimer,

        Your brand of disingenuousness is what gives ‘skeptics’ a bad name.

        Despite your wish that policy makers are as stupid as you pretend to be, here is what the SPM says to start,

        “The basis for substantive paragraphs in this Summary for Policymakers can be found in the chapter sections specified in curly brackets.”

        And are there “curly brackets” after the paragraph you keep partially quoting????

        Yes;
        “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7} ”

        Which point the policy makers to “the basis” of that summary.

        Do you really want to keep playing stupid?

      • I am so used to the unit “degrees per decade” that it would never even occur to me to take that as meaning “degrees for every decade”, or an indication of a time frame. Especially when the time frame was explicitly given as two decades in the same sentence.

        The trend over 10 years varies enormously. It’s been negative a number of times in recent decades; and also very high; over 0.4 at times. It’s not something you can reasonably predict.

        There are two wordings of the prediction you are speaking about. In the technical summary, they say:

        Committed climate change (see Box TS.9) due to atmospheric composition in the year 2000 corresponds to a warming trend of about 0.1°C per decade over the next two decades, in the absence of large changes in volcanic or solar forcing. About twice as much warming (0.2°C per decade) would be expected if emissions were to fall within the range of the SRES marker scenarios.

        That phrasing seems okay: “degrees per decade over the next two decades”. The “per decade” is the unit; the “over the next two decades” is the window.

        In the summary for policy makers, it’s slightly different. There they say:

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

        Using “over the next two decades” might have been better than “for the next two decades”, but at this point I think the more serious problem is a willful determination to take the wrong reading.

      • @michael

        I don’t doubt there are curly brackets all over the place. Just like there are references in academic papers and in some popular history books. They are there to guide the very interested reader to further material. Big deal.

        A Summary should be what it claims to be. If you really need to read all the other stuff because you are otherwise missing really important stuff, then it ain’t a summary, it’s a ramble. This is clearly entitled ‘Summary’, and does not say ‘BTW you guys need to read all the rest as well because I just can’t be arsed to write a proper summary’

        You and Chris H-S keep falling into the trap of trying to say that what was written and published for all the world – and all the world leaders – to see doesn’t mean what it actually means. That there is some sort of code known to the cognoscenti of the IPCC that makes the language mean something different from its common meaning. That we should all have known that the author had his fingers crossed when he wrote it. Or that it was the Third Wednesday after the Feast of Walpurgis and Beltane, so that words take on a different meaning.

        And then, to compound the felony, you wander about claiming that it’s all our fault. Words almost fail me, so I will use an abbreviation.

        BS!

      • @chris h-s

        ‘I am so used to the unit “degrees per decade” that it would never even occur to me to take that as meaning “degrees for every decade”, or an indication of a time frame’

        You’re doing a great job of emphasising my oft-repeated point about communication. You – as somebody involved in this professionally – may have some specific meaning that you are so used to that you never even notice it. And maybe that is fine for discussion with your immediate colleagues. Every field has its jargon, so this is no surprise.

        But when you are writing for a wider sphere, you must be just as careful with your language and phrasing as I hope you are with the numbers.

        In this case they were writing for non specialists in an attempt to make some sense of all the technical stuff in the rest of the report. It matters not a jot what you take it to mean. What matters is what the intended audience takes it to mean. And it is quite reasonable for them to expect that the language used will be in common usage and the meanings will be the common meaning.

        And if a phrase or sentence or paragraph is possibly ambiguous, it is the author’s responsibility to make sure that such ambiguity is eliminated. It should not be the reader’s task to try to guess which meaning was intended.

        The common meaning of ‘about 0.2C per decade’ is perfectly clear. If you choose to interpret it differently that is your choice. But you cannot accuse those who see it differently as wilfully distorting it.

      • Michael

        Yr “Yes, it’s utterly stupid to declare what the AR4 says by only invoking the summary and ignoring the details in the technical section.”

        O. K. Michael, I got it. When dealing with you and your pals–the “team”–we must always read the fine print. You’ve made your point.

        And we also now know, thanks to you, Michael, that any decision-maker that relies on the weasel-worded summary statements appearing in your profession’s most prestigious guide for policy-makers is “stupid”, unless that policy-maker has also read and digested every fine-print detail of the whole report.

        And finally, we know, Michael, those cleverly-worded, summary statements in the AR4 report are just a sound-bite friendly, agit-prop resource. You know, the sort of “good stuff” the team’s Big-Green trough-masters can draw on to whip-up decorative, “scientific” justifications for their CAGW scams, as needed. Like I said, I got it.

      • Lati,

        That was just pathetic.

        I love the smell of desperation in the morning.

    • Dikran,
      No, what RC has done is toss up a lot of bull dust, and you believers, having an appetite for bull dust, think it is wonderful.
      RC is a pure propaganda site, and your quoting them is no better than some low level apparatchik of the USSR quoting Pravda.
      I look forward to hearing how RC explains away the fact that the IPCC was completely wrong about Himalayan Glaciers:
      http://www.guardian.co.uk/environment/2012/feb/08/glaciers-mountains

      • It is deeply sad that whenever in this debate an assertion is shown to be unequivocally incorrect, rather than accept that it is incorrect, the response is an attack on the source or an attempt to move the discussion onto another topic (Himalayan glaciers).

        RC would be stupid to present a false analysis. The AR4 model runs are all archived and publicly available, so if they misrepresented them it would be straightforward to expose the falsehood. The ball is in your court, present an analysis that proves RC wrong; I am a rational man, I am swayed by logical reasoning, but I am not swayed by ad-hominems or rhetoric or bluster.

      • It is deeply sad that you confuse what takes place at RC as conclusive of anything other than Schmidt’s bloviation.
        RC has been, by your definition, stupid for quite some time.
        Your defense tactic is ridiculous.

      • as I said, “I am swayed by logical reasoning, but I am not swayed by ad-hominems or rhetoric or bluster”.

      • You should understand, d. mars. that Skep. Sci. is a deeply ironic blog title, given your evident example of deep irony.
        =====================

      • yet another ad-hominem (attacking the source of an argument in place of an attack on the content), is deeply unconvincing.

      • That crew, dm, is deeply deceitful. Sadder than deceiving you, they’ve deceived themselves.
        ==============

      • Dikran,
        I followed your link from the RC page. I acknowledge that 2011 fell within the uncertainty range of the model estimates, although the range was 0.8C wide. It is too soon to tell the accuracy of the models in the AR4 report. More interestingly, temperatures have been below Hansen’s Scenario C since 2003. This is the scenario where draconian cuts would keep atmospheric CO2 concentrations constant at year 2000 levels (the best case scenario, which he felt was highly unlikely). Hansen has stated that the cause for this is the large increase in sulfate aerosols resulting from Chinese coal burning. RC glosses over these disparities, and instead focuses on trends longer than 15 years. A few posters asked about what would happen if the current 15-year trend (-0.009C/decade, CRU3) continued, but Gavin seemed to deflect these.
        My question to you is how long would such a trend need to last before we begin to rethink these models? Gavin seems to think that not much will change this year.

      • DM,
        You are clearly swayed by argument and bluster dressed up all sciencey.

      • DanH, the models need rethinking continuously; that is the way science works, and indeed it is what the climate modellers do. They are always looking to include more physics so that the models become a better representation of reality. The models tell us the likely consequences of our actions based on our best understanding of the physics.

        The key point here is that the observations are consistent with model projections. I have no objection to people criticising the models as long as the objection has a basis in fact. This one doesn't, and promulgating it merely reduces the signal-to-noise ratio of the discussion, so it is in the best interest of both camps to drop it.

      • I’m not sure you’re getting the point of the uncertainties: it’s not unlikely for a climate system with a sensitivity of 2-4.5ºC to show relatively little warming across a given short interval (e.g. 11 years) even when the average rate of warming, across a longer period encompassing that interval, is about 0.2ºC/decade.

        Therefore, you can’t make any significant statements about sensitivity from this data.

        You also might be interested to know that the IPCC range of uncertainty is not stratified by sensitivity across this short period. Some of the lowest trends come from higher sensitivity models, some of the highest trends come from low sensitivity models.

        I should note that I do think 0.2ºC/Decade is probably a small overestimate though this is more likely due to the overly large forcings in most of the models rather than having any clear implications for sensitivity. I think the current warming rate is closer to ~0.15ºC/Decade. One thing that hasn’t been mentioned earlier in the thread is the word ‘about’, which has historically been used by the IPCC in a way that should really be written as ‘ABOUT’ in big letters.

      • PaulS said, “I should note that I do think 0.2ºC/Decade is probably a small overestimate though this is more likely due to the overly large forcings in most of the models rather than having any clear implications for sensitivity. I think the current warming rate is closer to ~0.15ºC/Decade. One thing that hasn’t been mentioned earlier in the thread is the word ‘about’, which has historically been used by the IPCC in a way that should really be written as ‘ABOUT’ in big letters.”

        Agreed, ~0.15 with approximately +/- 0.15 natural variability. That is a huge difference in initial estimates and would make a huge difference in planning for and the cost of preparing for the future. That’s the point.

        http://i122.photobucket.com/albums/o252/captdallas2/ar4smilie.png

        This plot was from the IPCC AR4, modified by Girma, with an uncertainty range added by myself, the smilie wearing shades :)

        Definitely not long enough for a confident trend, but it looks like we are ABOUT to establish one with major policy implications.

        Of course, how creatively AR5 handles the Antarctic, mid-tropo, lower strat and tropics would also have major policy implications.

      • Impressive work finding that post.

        Regarding what you were saying about ‘falsification’ of scenarios one interesting thing to look at is what scenario we followed over the past 11 years. In terms of emissions the A1B / A2 pathways are pretty close, but if you look at estimates of total radiative forcing change (e.g. from GISS) there is zero increase since 2000. This means that the scenario we have effectively followed is the ‘Year 2000 Constant Concentrations’ one, projecting about 0.1ºC/Decade.

      • Dikran,
        If the models are constantly being reworked to include the latest research, then why are so many people still using the AR4 model scenarios, which are about six years old? As I mentioned earlier, the observations are only consistent with the models because the models have such a large uncertainty associated with them. One would think that they would rework the models before the observations fall outside the 95% confidence level.
        Similarly to Paul,
        It appears that even 0.15C/decade is too high. While a 30-year trend line will yield a similar value (0.16), shorter or longer time frames do not. (The 15-year trend is essentially 0, the 60-year trend is 0.11, and the 90- and 120-year trends are 0.07C/decade.) Some posters are critical of the short time interval, and rightly so, but why neglect the long-term trend in favor of an intermediate timeframe? A long-term warming of ~0.7C/century would experience short-term rates of both 0.15 and 0C/decade.
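
The window-length dependence described in this comment is easy to reproduce. Below is a minimal sketch in plain Python (using a synthetic series with an assumed 0.07C/decade underlying trend plus a 60-year oscillation, not a real temperature record) showing how ordinary least-squares trends vary with the trailing window length:

```python
import math

def trend_per_decade(anomalies):
    """Ordinary least-squares slope of an annual series (degC), in degC/decade."""
    n = len(anomalies)
    mean_x = (n - 1) / 2.0
    mean_y = sum(anomalies) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(anomalies))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return 10.0 * num / den  # slope per year -> per decade

# Synthetic 120-year record: 0.007 degC/yr underlying warming plus a
# 0.1 degC-amplitude, 60-year oscillation.  Illustrative numbers only.
series = [0.007 * t + 0.1 * math.sin(2 * math.pi * t / 60.0)
          for t in range(120)]

for window in (15, 30, 60, 120):
    print(window, "yr trailing trend:",
          round(trend_per_decade(series[-window:]), 3), "degC/decade")
```

Depending on where a short window falls relative to the oscillation, its trend can sit well above or below the underlying 0.07C/decade, which is the point at issue: window length alone changes the answer.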

      • While a 30-year trend line will yield a similar value (0.16), shorter or longer time frames do not. (The 15-year trend is essentially 0, the 60-year trend is 0.11, and the 90- and 120-year trends are 0.07C/decade.) Some posters are critical of the short time interval, and rightly so, but why neglect the long-term trend in favor of an intermediate timeframe?

        Because the theory isn’t that the climate should be warming at a certain rate by decree over any chosen timeframe. The theory is that climate will warm in proportion to changes in radiative forcing over time (+ equilibrium ‘pipeline’ warming). I’ll post the GISS forcing diagram again. Note that the forcing increase since 1950 is about 3 times that from 1880 to 1950, hence the theory would expect a greater rate of warming over the past 60 years compared to the past 130 years. There is also a less clear acceleration at around 1970, so again we would expect the past 40 years to have a greater trend than the past 60.

        Likewise, see my previous post about RF change since 2000. The GISS estimate suggests there has been no net RF change over this period.

      • DanH, the reason that we are still using the AR4 models is that organising a consistent set of scenarios and getting a large number of modelling groups to coordinate to produce the multi-model ensemble is a large effort, which detracts from the time required for research. There will be a new set of model runs for the next IPCC report and I understand that work on this is already underway.

      • Dikran, If you take the IPCC model range, you include models that predict 1.5K increase for the 21st century. So to say that the observations are “within” the model uncertainty is in my book a virtually meaningless statement. I think Hansen’s 1988 scenarios were clearly wrong on 2 fronts. He overestimated the percentage of emissions that would remain in the atmosphere, by a very large margin actually. And his model had a high sensitivity. In my book, that is definitive evidence that Hansen was unduly alarmist. In fact, the error bars on the models are probably greater than 100% of the values for such things as temp anomaly. And that’s the problem we should all focus on, not endless debates about how we can “adjust” the data so it’s consistent with our theory and models.

    • Mr. Marsupial, perhaps you can help me out with this sensitivity thing. Seems the Antarctic is not warming because CO2 needs water vapor to work. The tropics are not warming as much because they have too much water vapor for CO2 to work properly. The mid-latitudes are not warming as expected because ? At least the northern high latitudes are working. That kinda contradicts what Arrhenius predicted.

      Since “we” know more than Arrhenius, is there a modeled output by latitude that matches what is going on?

      http://redneckphysics.blogspot.com/2012/02/arrhenius-is-still-dead-but-his-mistake.html

      • Again this is an attempt to change the topic of the discussion rather than admit that Prof. Curry’s assertion is factually incorrect.

        SkepticalScience is a good place to ask questions like that, I would recommend you try there (just pick a relevant article)

      • DM,
        No, your only tactic is to ignore what you do not like.

      • Mr. Marsupial,

        I’m sorry, silly me. I thought that with Hypothesis II the latitudinal sensitivity of the past 120 years might indicate natural variation’s impact on the climate sensitivity to CO2 increase. Then an average sensitivity based on the latitudinal trends, at 1.48C per doubling, might be some indication of the future response to CO2, which appears to be somewhat less than 0.2C per decade, though still within the confidence interval of the model predictions, just closer to scenario C.

      • Which is irrelevant to the discussion of whether Prof. Curry’s assertion about the observations being inconsistent with IPCC projections (and hence falsifying hypothesis I) is correct.

        Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?

      • Please don’t make me read @ RealClimate. Pilgrims get squished under the wheels of the machine over there.
        ======================

      • you don’t have to read RC, just go to the IPCC model archive and plot the 95% credible interval for the A1B projection for yourself, and plot the observations on the same axes. You will find that the observations are consistent with the models and hence Prof. Curry’s assertion is incorrect.

      • Mr. Marsupial,

        One of the main issues with Dr. Curry’s statement is the instrumental record and which scenario is used to determine if HI is or can be falsified. Lucia has been on this subject for over a year now and comparing the various data sets to projections, in her opinion, the data is verging on falsification of H I.

        As you know, falsifying H I with the large uncertainty is not an easy thing to do. Some say it is impossible to falsify. However, since the 0.2C per decade appears to relate to the BAU scenario, there is a possibility that that scenario is falsified.

        So do you consider it valid to falsify “a” scenario, or just the particular scenario you happen to select?

      • Policymakers don’t need weathermen to tell which way the chill blows.
        ==============================

      • It isn’t difficult at all. If the observations (within their stated uncertainty) lay outside the stated uncertainty of the projections, then that would falsify the models. Yes the credible interval is broad, but that is because there are large uncertainties involved.

        However, that is beside the point. Prof. Curry has claimed that the observations are not consistent with the IPCC projections. If you plot the projections and their uncertainty, then it is clearly not the case. I really don’t understand why some can’t accept this.

      • Short answer to the question is “Yes, there is indeed modeled output by latitude that matches what is going on.”

        At least, in the general terms you are using.

        Model data is available as gridded data, which allows you to get the numbers for latitudes if you want. The group at NASA also give you all kinds of plots from model runs using “ModelE”, including plots of all kinds of data by latitude; temperature included.

        The page to start at is ModelE Climate Simulations. From there, you can get to (for example) Zonal Means vs. Time, which is what I think you want. It’s the temperature response by latitudes.

        If you want to find out more about the forcing used or other information, spend a bit of time reading the pages and references and so on at that page.

        If you hunt around the web you can find other groups with more freely available models and model information. But the NASA ModelE data is pretty comprehensive and matches closely what you ask for.

      • dm @ 9:28, it is difficult to accept because I’ve stared @ the blackboard ’til I’m blue in the face, and my fingers thrill and tremble. With facade and digits such, I wanna feel the heat. It’s sadly lacking, er, at least, I’m missin’ it now.
        ===================

      • Chris Ho,

        If I am reading the model plot correctly, 1 degree of warming has not happened in the Antarctic, and the tropics appear to be projected higher than observation. The sensitivity by latitude that I calculated differs a bit from that model.

        Now is that because that model does not consider natural variation, or perhaps the radiant physics is off a touch?

      • Mr. Marsupial said, “It isn’t difficult at all. If the observations (within their stated uncertainty) lay outside the stated uncertainty of the projections, then that would falsify the models.”

        Business as usual estimates both emissions and response. That means there are two layers of uncertainty. If you consider only the response, I would say it is falsified, barely, but falsified. Perhaps a more specific post is required because the observations agree with H II more than H I.

        By separating the scenarios and the models it would be easier to determine what is falsified. I am confident some models should be either falsified or corrected.

      • As you know, falsifying H I with the large uncertainty is not an easy thing to do. Some say it is impossible to falsify.

        I’m assuming that by ‘falsifying HI’ you mean ‘falsifying the range in the IPCC’s scenario-based projections’. It really isn’t that difficult. All that needed to happen was a ~0.2ºC drop in global temperatures since 2000 (or a ~0.8ºC increase). That hasn’t happened, ergo the IPCC range is not ‘falsified’. It’s like saying it’s impossible to falsify the theory of gravity because all this stuff keeps falling down.

        However, since the 0.2C per decade appears to relate to the BAU scenario, there is a possibility that that scenario is falsified.

        The uncertainty that Dikran is talking about is for a BAU scenario (A1B). Observations are currently within the IPCC range, ergo the range isn’t falsified.

      • Paul S said, “I’m assuming that by ‘falsifying HI’ you mean ‘falsifying the range in the IPCC’s scenario-based projections’. It really isn’t that difficult. All that needed to happen was a ~0.2ºC drop in global temperatures since 2000 (or a ~0.8ºC increase). That hasn’t happened, ergo the IPCC range is not ‘falsified’. It’s like saying it’s impossible to falsify the theory of gravity because all this stuff keeps falling down.”

        Or not rise with the projection. With current sensitivity estimates of 2C or less for a doubling, that is “likely”. So it is more like falsifying the gravitational constant because things don’t fall as fast.

        Which, if I were in the climate modeling business, I would be considering, instead of splitting hairs.

      • Capt Dallas, the model is not a perfect match. It does show some of the variation by latitude you expect, but the magnitudes may differ somewhat. There is discussion of these limits, on how well the model matches distributions of change over the globe, in the associated journal article, prominently linked in the pages I gave previously. Section 2 lists and discusses the known deficiencies. Read it for yourself, please.

      • You promised heat, and you persist in your promise. Now, I’ve sacrificed mightily for my belief in your promises, oh, mightily. Nobody knows the trouble I seen, trouble I seen, trouble I seen.
        ====================

      • Chris Ho,

        I thought the discussion was on whether H I will be falsified and if H II and H III might be worth consideration.

        Since the projections are based on model simulations that indicate approximately 0.2C per decade, the error in the models in the Antarctic and tropics appears to be higher than observation, and the trend in the tropics since 1994 is only 0.04C per decade, it appears likely that H I will be falsified. Perhaps a better look at the observations will help:

        http://redneckphysics.blogspot.com/2012/02/models-versus-observations.html

        Of course, more high northern latitude stations could be added to the surface temperature records to postpone falsification, but without adjustment I doubt falsification will be avoided.

      • @Dikran marsupial Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?

        Judging by the following quote, apparently there is such a person:

        @Dikran marsupial This is because Prof. Curry failed to consider the uncertainty around the expected trend.

        But why should such a person be believed when the rule of inference they appear to be applying would seem to be that if RealClimate doesn’t know something, then JC doesn’t know it either?

        Now admittedly that’s how RC prefers to reason about these things, quite understandably of course, that’s how we all reason. But where in the spectrum from logical to arrogant would you say that line of reasoning lies?

        On both sides of the climate debate, the test that people seem to be applying as to whether their reasoning is logical is whether it leads to conclusions they already held.

        In my line of work, which for the last 35 years has been logic, this is known as circular reasoning.

        My experience during that period in persuading people that they are using circular reasoning is that it is utterly impossible to do so. People have beliefs, and they simply refuse to attempt to imagine the opposite. Which is what you have to do in order to debug your reasoning.

    • I have a question.

      If the range of the error bar is 0.8 and the prediction is for 0.2, what good is the predictive quality of the model?

      • timg56, you are mixing up different things. The 0.8 error bar is not an error bar for the trend which is being predicted. Trends over two decades have errors bars much less than 0.8.

      • Chris,

        It is easy to get mixed up following the discussion. Just the discussion alone between you and Girma – it appears both of you are correct, which makes me think either you both are talking past each other about different things or the nature of the data is that it can be manipulated almost any way you want.

        I guess in the end it doesn’t really matter, as I believe we are getting warmer and I’m willing to accept that anthropogenic causes can be a significant factor. I just haven’t seen anything that qualifies as good science showing it is something to be concerned about. Arguing about starting points for statistical analysis of the temperature data is nothing more than an academic exercise. It is interesting and may further our understanding, but it certainly is not a clarion call to action on the part of governments.

    • Chief Hydrologist

      so the models are right but so uncertain as to be totally useless.

      • Which prompts one to wonder exactly why they are so uncertain.

        Is it just inherent in some large-scale version of Heisenberg’s Uncertainty Principle? Or because we don’t know how to write the code? Or some other reason?

        Because it seems to me that before we spend another cent on this endeavour we need to have a very good idea of exactly what we can achieve. And that if the answer is that the models will never be good enough to have a decent idea about future temperatures, we should defund the lot immediately and spend the money on something useful instead.

        There is absolutely no point in throwing good money after bad.

      • Chief Hydrologist

        The models are irreducibly imprecise because of sensitive dependence and structural instability arising from the intrinsic nature of the multiphasic Navier-Stokes partial differential equations – they are crazy little, type 3, deterministically chaotic bastards. And people say they can’t understand me. What the hell is the freakin’ problem with them?

  71. Just to add regarding hypothesis 1 “Challenges: convincing explanations of the warming 1910-1940, explaining the flat trend between mid 1940′s and mid 1970′s, explaining the flat trend for the past 15 years.”

    what is wrong with the standard explanations, which IIRC are “solar forcing”, “sulphate aerosols” and “ENSO”?

    • The same thing which is wrong with epicycles in the geocentric model.

    • These explanations are all very dubious. I thought solar forcing was negligible (at least that is the dogma of all the IPCC reports). Sulphate aerosols influence is essentially unknown even according to the IPCC. ENSO I thought couldn’t result in a net multi-decadal trend because of conservation of energy. We have had multiple posts from Fred on this subject. Of course, when an explanation is required, one resorts to things that are essentially unknown. This is just astrology and not science. And by the way, what caused the little ice age and the Medieval climate optimum? If the models are as good as you say, then they should be able to tell us. It seems pretty clear to me that the bulk of the evidence shows that the models are overpredicting warming, especially if you use the lower troposphere satellite data.

    • Sorry, should be “solar forcing variations are negligible compared to other forcings.”

  72. Tomas Milanovic

    Judith

    I much liked your attempt at differentiating the hypotheses used to establish a theory of climate dynamics.
    I would like to attempt to use this differentiation to identify the physical background more accurately.

    You wrote:
    I. IPCC AGW hypothesis: 20th century climate variability/change is explained by external forcing, with natural internal variability providing high frequency ‘noise’. and Hypothesis I derives from the 1D energy balance, thermodynamic view of the climate system

    The operative word is here 1D.
    To stay rigorous, Hypothesis 1 is not really that the variability = GHG signal + noise.
    The real Hypothesis 1 is that the climate system can be robustly and deterministically predicted by a 1D model.
    This Hypothesis has necessary consequences.
    – only energy balance matters (here comes the school of people saying that the system is trivially simple because it exchanges energy only by radiation)
    – only “equilibrium” matters (here comes the school of people who compare the system to a small ball slightly moved away from its equilibrium position inside a spherical bowl)
    – space doesn’t matter (this is a tautology because if a 3D system can be reduced to 1D and still predicted, then the “neglected” 2D obviously didn’t matter)
    – from the above follows also necessarily that everything that happens in the real 3D world can only be noise (here comes the school of people who say that everything averages out)

    Interestingly you will have noticed that 99% of the comments here are resolutely 1D and many are even totally unable to understand the difference between a 3D world and a 1D model. This gives us some of the funniest comments, which boil down to saying things like “What can possibly be complex about multiphasic Navier-Stokes? How can that be relevant to anything?”

    The analogy to this school of thinking comes immediately to mind, and I am sure that you will understand what I mean because you come from fluid dynamics, even if, unfortunately, it will be wasted on most commenters, amateurs of the 1D hypothesis.
    2D and 3D Navier Stokes.
    Why is 2D N-S easy?
    Because the vorticity is conserved in the inviscid limit.
    Of course it is not conserved for 3D N-S and we live in a 3D world.
    So people who would only learn 2D N-S would never understand why N-S is really hard and why we can’t correctly explain things that are easy to explain in a 2D world.

    To sum up, the 1D climate hypothesis is by definition unable to explain anything that happens in the neglected 2 dimensions and must rely on the axiom that all these 3D phenomena do not matter. This cannot be proven in the frame of this theory and must be postulated.

    I did not really understand Hypothesis 2. It seems to me like a curve-fitting exercise where I add periodic signals superposed on a linear signal. The whole exercise still happens in 1D, though, and that’s why it would be just an avatar of Hypothesis 1.

    Hypothesis 3.

    Climate shifts hypothesis: 20th century climate variability/change is explained by synchronized chaos arising from nonlinear oscillations of the coupled ocean/atmosphere system plus external forcing

    Here it is not really a hypothesis but a physical reality.
    I hope nobody would defend the idea that the system doesn’t obey the dynamical equations transcribing energy, momentum and mass conservation (e.g. Navier-Stokes & Co).
    And obviously it happens in a 3D world.
    As obviously the fields interact and are coupled in a non-linear way.
    The Lyapunov coefficients are clearly >0.
    So this is the only way to take the system seriously.
    Sure, it is much harder to solve than unrealistic 1D linear models, but since when did Nature care about what was hard and what was easy to solve?
    For me this is the only serious paradigm with some physics inside, and it is a disgrace that there are still people who don’t understand it.

  73. It seems (il semble) that, compared to cloud-related climate shifts, l’ombre de l’homme n’importe (the shadow of man matters not):

    ‘And here face down beneath the sun
    And here upon earth’s noonward height
    To feel the always coming on
    The always rising of the night.

    To feel creep up the curving east
    The earthly chill of dusk and slow
    Upon those underlands the vast
    And ever climbing shadows grow

    And deepen on Palmyra’s street
    The wheel rut in the ruined stone
    And Lebanon fade out and Crete
    High through the clouds and overblown….’

    With “You, Andrew Marvell,” I rest my case.

  74. A question for “skeptics.”

    Seems to me that the point at which you should crow about predictions of future temperatures being wrong is the point at which any known magnitude of year-to-year fluctuation (anomalies?) would still leave the overall trend outside the projected range. (I believe that mosher has said some things along similar lines).

    In other words, if 2012 were as much warmer than 2011 as 1997 was compared to 1996, and still the overall trend would not fall into the predicted range, then there are some serious problems with the predictions. That logic could be extended to two year differences, or ten year differences, etc.

    In other words, if the previously observed degree of variability, if repeated, would not substantiate the predictions, then it would seem to me to be reasonable to assert that the predictions were in error.

    Of course, you’d also have to account for any established trend of increased year-to-year, or two-year to two-year, or decade-to-decade variability.

    Can some “skeptic” take pity on me, read what I just wrote, and clear up my silly attempt at understanding how to evaluate the validity of the IPCC’s “predictions.”
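
One way to make the proposed test concrete: take the largest year-to-year jump seen in the record, append it as a best-case next year, and ask whether even that would lift the trend into the projected range. The sketch below uses made-up anomaly numbers and a hypothetical projected lower bound, purely to show the logic:

```python
def largest_jump(series):
    """Largest single-step increase in the record."""
    return max(b - a for a, b in zip(series, series[1:]))

def trend_per_decade(series):
    """Ordinary least-squares slope of an annual series, in degC/decade."""
    n = len(series)
    mean_x = (n - 1) / 2.0
    mean_y = sum(series) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(series))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return 10.0 * num / den

# Made-up annual anomalies (degC) -- not a real dataset.
obs = [0.30, 0.45, 0.33, 0.35, 0.40, 0.36, 0.43, 0.41, 0.39, 0.42,
       0.44, 0.40, 0.43, 0.41, 0.42]

projected_low = 0.1  # hypothetical lower bound of the projected decadal trend

best_case = obs + [obs[-1] + largest_jump(obs)]  # best plausible next year
print("current trend:  ", round(trend_per_decade(obs), 3))
print("best-case trend:", round(trend_per_decade(best_case), 3))
print("still below projected range:",
      trend_per_decade(best_case) < projected_low)
```

If even the best-case trend stays below the projected range, that is the situation in which, by this argument, one could reasonably say the projections have failed; otherwise the record remains consistent with them.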

    • Of course, the 1997-1998 comparison was just an example. The point would be to use the largest magnitude of variability that has been observed for any given period of time.

    • Joshua, you are absolutely correct. Girma’s graph, the URL of which I have lost, makes the issue crystal clear. Temperatures have been rising since 1850 or so at a rate of 0.06 C per decade, with nearly all temperatures lying within +/- 0.25 C of the mean value. The recent pause in temperature rise still lies within this +/- 0.25 limit. When the actual observed temperature falls outside these limits, as predicted by the proponents of CAGW, then, and only then, will I start to worry. If Girma sees this, I am sure he still has the URL.

      Needless to say, what I have just written has been repeated ad nauseam for years. It is just that people like yourself seem to ignore it.

    • Joshua,
      You use too many scare quotes to come across as someone asking sincere questions.
      I think it is better to just let you twist and troll.

    • Joshua: A question for “skeptics.”

      I didn’t see a question. You seem to want skeptics to prove that something is correct, whereas we mostly point out limitations in the science: showing that the IPCC/CAGW view does not rest on a solid base. In this instance, the IPCC projections are shown to be unsubstantiated by subsequent events, so there is no good reason to believe the forecast for later decades.

      Ignorance is the hardest state to recognize and admit to. We document the ignorance (i.e. limits of the knowledge) and you want us to turn it into an alternate certainty. The only proper way to do that will be to continue the research until there is more evidence of all kinds. I give you “Raymond T. Pierrehumbert’s book ‘Principles of Planetary Climate’ is mostly correct, but inaccurate in detail, and the details require more study”. You want proof that some other presentation of details is correct, but every presentation is inaccurate in details. This seems to cause you anguish. I attribute to you a belief like “The consensus has to be correct because there is no strongly supported alternative”, but I give you “The consensus is full of cavities, is not strongly supported, and there is no strongly supported alternative.”

      • MattStat,
        Joshua is working far too hard to avoid such an obvious point.
        His is a tar-baby defense: to ask insincere questions, or to make non-questions, and to bog down everyone involved.
        Just let him twist and troll.
        AGW apologists have to resort to whack-a-mole, tar-baby and shouting defenses, but their favorite is to simply ignore what they do not like, fail to respond, and then to claim only their views are valid.

      • Matt –

        I didn’t see a question. You seem to want skeptics to prove that something is correct, …

        Good point. The question was whether or not my construction (to the extent that it was even understandable) was correct. The “question,” such as it was, was to ask whether or not there is a flaw in my thinking. Not to prove that something is correct, but to show me how it is incorrect.

        This seems to cause you anguish. I attribute to you a belief like “The consensus has to be correct because there is no strongly supported alternative”,

        That isn’t how I see it. It is interesting that no matter how many times I have to correct people in that regard, they still see my opinion to be one as you describe above.

        My “belief” (more like general sense of how it works rather than a belief – I think that “belief” is too strong a word) is that absent hard proof otherwise, it isn’t irrelevant that a “consensus” of expert opinion says that a certain interpretation is probably correct. I don’t think a “consensus” is dispositive in any way – in the sense that you attribute such a belief to me. Because I can’t evaluate the science for myself, I have to look at it as playing the probabilities. It doesn’t help the “skeptical” cause when smart people such as yourself, who understand the science much better than I, make simple mistakes of attribution w/r/t my beliefs.

        Here’s the thing. I read threads like the “Sky Dragon” thread, or posts up at WUWT where very smart and knowledgeable “skeptics” say that AGW is impossible, and then I read other “skeptics” say that I should disregard such arguments because they are outliers, and then the same “skeptics” turn around and say that I shouldn’t disregard their opinions simply because they are outliers. Do you see the problem?

        In fact, I don’t disregard any opinion because it is an outlier. I look at the information available and make a best guess.

        That all said – if you can figure out what I was attempting to say in the post (I’m not sure it makes any sense), I would appreciate it if you could either confirm the logic or explain where it goes wrong.

      • Joshua: Do you see the problem?

        It is not a problem, hence “the problem” describes something that is not there. Skeptics do not agree about very much except a few generalities: the absorption/emission spectra of gases measured in laboratories; the laws of thermodynamics, etc. They dispute each other’s reasons for being skeptics.

        My “belief” (more like general sense of how it works rather than a belief – I think that “belief” is too strong a word) is that absent hard proof otherwise, it isn’t irrelevant that a “consensus” of expert opinion says that a certain interpretation is probably correct. I don’t think a “consensus” is dispositive in any way – in the sense that you attribute such a belief to me.

        You have put the burden of proof on the wrong side. In science, the burden of proof lies with those promoting complex theories. That’s because most complex theories have been false as proffered, or else have required decades of work to clear up the details. Thus, a “consensus” of the experts is irrelevant. Until the cavities have been filled, there is no solid foundation for any policy, and odds are that the consensus is wrong.

      • Matt –

        You have put the burden of proof on the wrong side.

        I find “burden of proof” arguments about as useless as arguments about what constitutes an ad hom. Both sides always think that the other side has the burden of proof.

        It is not a problem, hence “the problem” describes something that is not there.

        So you say. However, I have read many an argument at WUWT between smart and knowledgeable people, arguing with absolute certainty, that their opposition is arguing in favor of a perspective that fundamentally defies the laws of nature. Now I can’t evaluate the science of their arguments – all I can do is: (1) evaluate it when they seem to make illogical assertions in a non-scientific domain and use that as information about their proclivity to allow biases to influence their thinking and, (2) evaluate, based purely on probabilities, whether they seem to be an outlier among other smart and knowledgeable people, realizing, of course, that being an outlier is not dispositive to anything.

        Thus, a “consensus” of the experts is irrelevant.

        So you say – but I doubt that you truly live your life in accordance with that statement. My guess is that if you had an illness that you didn’t know anything about and had no personal experience with evaluating, you would do the absolute best you could to understand and consider the opinions of experts, but at some point you would consider the preponderance of expert opinion in making a treatment decision. Of course, the preponderance of expert opinion wouldn’t necessarily be dispositive- but you would consider it to be a factor in your decision making. I find it hard to believe that you would say it is “irrelevant.”

        Anyway, we’re not likely to make further progress on this. You made an incorrect assumption about my belief. That is what it is. I’d still appreciate it if you’d give me feedback on my post that started this mini-thread.

      • Joshua: I find “burden of proof” arguments about as useless as arguments about what constitutes an ad hom. Both sides always think that the other side has the burden of proof.

        Yet you clearly stated where you believe the burden of proof lies. Have you forgotten the order of these comments?

        However, I have read many an argument at WUWT between smart and knowledgeable people, arguing with absolute certainty, that their opposition is arguing in favor of a perspective that fundamentally defies the laws of nature.

        Debate or discuss with them, quoting them exactly.

        I’d still appreciate it if you’d give me feedback on my post that started this mini-thread.

        That was poorly expressed. Write what you believe to be a logical development whose conclusion you believe to be well-supported.

    • Joshua
      Your point about temperature is not unreasonable.

      I suggest that the key issue is whether or not we have confidence in the models that forecast dire conditions 100+ years into the future. Temperature as a function of CO2 levels is only one characteristic predicted by the models, and the “actuals” are coming in lower than the models predicted.

      The issue then goes to the other characteristics that the models predicted, all of them, and let’s compare observations to forecasted results. It appears that by any reasonable measure the models are doing very poorly. Given that the models are doing poorly to date, what is the basis for the fears that those same models predict for 100 years into the future?

      Maybe conditions will change, but there is no reason now to think that is inevitable or even probable. How does a reasonable person accept the predicted outcomes based on what we know today?

      • Rob –

        Your point about temperature is not unreasonable.

        Kind of like when you couldn’t say that you disagreed with me?:)

        Actually, though, I think that here your response of not disagreeing makes perfect sense.

        Sure, as we stretch out the time horizon, it is entirely possible that the implications of any short-term inaccuracies, across different projections of impacts, would grow exponentially. That is a legitimate concern, IMO, but it doesn’t justify exploiting short term divergence from projected long-term trends when caveats about short-term trends were provided. (whether those caveats were sufficiently stressed or not is another matter).

        The issue then goes to the other characteristics that the models predicted- all of them and let’s compare observations to forecasted results. It appears that by any reasonable measure the models are doing very poorly.

        Could you be more specific? Are you referring to sea level change? Glacier melting? Frequency of extreme weather events? All of the above? Are you referring to scientifically qualified predictions based on modeling where observations fall outside of error ranges? As far as I can tell, in each of those areas the debate takes basically the same shape as with temperatures: “skeptics” claiming that short-term observations disprove long-term predictions and “realists” saying that the short-term variability is within long-term error ranges. Are you referring, for example, to the statements about glaciers melting in the Himalayas?

  75. I don’t think this is really OT. There are three assaults on the position of the Royal Society and APS in today’s blogosphere, namely:

    http://www.thegwpf.org/images/stories/gwpf-reports/montford-royal_society.pdf

    http://wattsupwiththat.com/2012/02/08/ipcc-reviewer-resigns-from-agu-saying-i-will-not-renew-my-agu-membership/

    http://wattsupwiththat.com/2012/02/08/comedy-gold-aps-president-attacks-inaccurate-characterization-of-aps-position-doesnt-realize-hes-attacking-an-aps-quote/

    Whether these represent the end of the non-scientific approval of CAGW by the Royal Society and American Physical Society, I don’t know. But I sure as hell hope so.

    Maybe Judith might like to start a thread on this issue, which seems to me to be very pertinent.

    • Jim,

      Plenty of members of both organizations had many chances to stand up and protect dissent. Many did not even if they realized the wrong they were protecting. This part of the conversation isn’t even worth discussion, it’s a social fact of record.

      The real issue and taboo is this: what culture do all of these groups have in common that creates the kind of willful blindness that existed in the AGW movement and the reaction to it?

      I believe you referenced this Climategate email link, if indirectly, just the other day:

      http://di2.nu/foia/foia2011/mail/0020.txt

      Consider the date of the email, the parties and how arrogant, tired and boring the debate was even then.

      Debate over the topics of temperature sets and political consensus as serious science arguments ended for me long ago. While I have my own nuances on the topics, Dr. Lindzen best represents my views on why these topics are a waste of time (while I still respect the exercise of M&M and others concerning the abuse of these records and data):

      http://www.youtube.com/watch?v=OS-cLp1PEGQ

      At some point it’s just silly to debate flawed (worthless might be a better word) temperature records that, even if assumed accurate, tell us very little about “climate” in the complex meaning of the term. The debate itself reflects the childlike, simple-minded nature of the science of AGW and its advocates. While I’m sure that many skeptics are well intended in this engagement, there is clearly a validation of the AGW minions who are well represented on this thread. Better to spend time over the silly insider views of “consensus” and nearly worthless land data records than face the real issue, which is their political culture and the motivations that are the actual drivers of the movement.

      Lindzen was 25 years ahead that long ago; he’s 25 years ahead today. The science is limited in reality, but the politics is and was very, very real. Do you seriously believe either the leadership of the APS or the RS is any more qualified or informed on the issue at hand? Of course you don’t; no sane person could. There is only one way to explain their posture, and that is the rotten political culture that they in fact share with most of the vocal “consensus” and the larger climate science community.

      Boycotts and resignations are helpful; addressing the political culture specifically is more helpful. This site is largely a backwater for the standards of political obfuscation regarding these topics, standards that are maintained by the moderator and largely supported by skeptics. Useful idiots they often are.

      Do you think any of those links are going to change APS or RS leadership positions?

      I have a bridge to sell you.

      • cwon14 you write “Do you think any of those links are going to change APS or RS leadership positions?

        I have a bridge to sell you.”

        I respectfully disagree. The positions of the Royal Society and American Physical Society, together with dozens of other learned societies are so obviously anti-science that sooner or later, they are going to have to change their positions. So, no, I cannot be certain that those particular links are going to change things. But things ARE going to change; sometime. And maybe a discussion of the subject might speed up the process.

        One of these days something will happen and the RS and APS will have egg all over their faces. However, I cannot predict what will actually cause this to happen. It happened before with the RS. There was a disagreement when lightning conductors were first invented. The Americans and British had different designs; one was a spike and the other a ball on top of the building being protected. Which was which I have forgotten. We now know that it does not make a blind bit of difference, as long as you have a large conductor on top of the building. The RS, of course, supported British industry. Need I say more.

        It’s good to be optimistic, Jim, some of the time. The toady green infiltration into physical science, its dependency on debt finance and the usual trappings of government excess are becoming far more clear to more people. That’s part of the reform movement. That the AGW left tends to be one-note, boring and totalitarian in nature is also becoming more widely understood. Ultimately it’s this issue that is the driver of real reform at trade and professional associations.

        A more honest and direct discussion of political specifics in these matters will help advance reform. On the third link from WUWT, which is a scream by the way, we get to visit FULL STUPID on display from the APS. The actual political motives of the party in question are not referenced directly, but everyone (around our circles) knows. Only some idiotic decorum or convention prevents the direct discussion of the APS AGW leader and the uber left-wing agenda he is shilling for and is part of. It’s this sort of make-believe that drags this topic on forever, and huge wastes of time discussing, of all things, meaningless temp data crowd this forum. The same time is being wasted at the APS, all by letting one side control a technical narrative that is nonsense while obfuscating a political narrative that is most relevant to the story. Even Dr. Lindzen is far too gracious in the RS reform article.

        We see this pattern here on these boards all the time. We accept silly rhetorical standards and the pace of change remains dismal. Byers has nothing to worry about at the APS while the wimp factor of skeptics remains this high for example. How obvious are their political motives? How obvious is it they are not discussed directly and specifically as they should?

        Your links illustrated it perfectly. Expect nothing if this is a general demeanor and rules of discussion going forward.

      • Jim Cripwell and cwon14

        I doubt that the APS, NAS, RS or any other venerable society will itself have “egg on its face” as the scientific support for CAGW unravels further.

        It will simply be a matter of changing out the political leadership (early resignation) with someone who sees the ongoing climate debate more as a true scientist and less as a politician.

        [It is actually tragic to see a renowned Nobel Prize winning scientist blow his reputation by making silly political proclamations about a science of which he is totally ignorant, as cell biologist Sir Paul Nurse of the RS did.]

        The societies can then quietly issue new statements, not based on defending the IPCC “consensus” position, but based on the status of the “science” at that time.

        This will happen, I predict, within the next 5 years.

        Max


    • I apologize in advance regarding the temperature set reference; I realize you aren’t on this thread whining about it as so many are. I linked the two topics to your post in error. I should have just focused on the questions your links raise.

      They are good links worth reading, and I appreciate the sentiments, but I stand by my conclusion. Nebulous, shilly-shallying conversations and points aren’t going to change them.

  76. Chris

    Yes, IPCC projections have been falsified => http://bit.ly/xQJNcF

    • Girma, no they haven’t. You don’t appear to understand how to even test IPCC projections properly, let alone describe them accurately or directly.

      If you try to spell out a projection, and the nature of a refutation, then it will be easier to explain where you are going wrong. That’s been done many times. Is that why you aren’t now even TRYING to state what projection you think is falsified or describe what observation falsifies it?

    • Girma,

      I understand your point, I’ve followed the thread. On the other hand would you consider the concept of “being dragged down to their level”?

      Arguing the temperature sets is rolling in the mud with swine on the other side. The whole notion of the “climate” being reduced to “warmer” or “colder” was in itself an idiot’s delight from the very beginning. It’s one of the main reasons I place climate science near the bottom of the universe of science fields. I have more respect for “experts” in vitamin hyperbole.

      I respect McIntyre and the shooting down of the bogus hockey stick. I see the magic bullet of exposing fraud in the temp data, but on the other hand skeptics are also validating a red herring: that short-term temp records really matter, when in fact they don’t. Nothing about this topic explains cause, for example. That it’s flat or down near peak CO2 production might seem like fun to expose, but you are also perpetuating a core warmer myth: that temperature is really important in driving climate, rather than the other way around, which could well be the case. Same for CO2 levels, the other bag of bogus shells.

      You know when you are having extended conversations with Joshua or Martha shows up it’s worth a review.

      • Agree with all the above, Sol, but I’d just comment that RSS and UAH are not measuring the same thing as the others. They are providing an estimate of temperature along a big slice of the lower atmosphere, where the others are measuring air temperature about 2m above the surface.

        The lower troposphere temperature shows much larger and more rapid swings, which means that you generally need a longer window to reveal the longer term trend. They are also shorter records overall, so it’s not going to be easy to calculate appropriate window sizes as has been done for the surface record. There are also major sources of error and uncertainty with piecing together satellite measures, which is why RSS and UAH are quite different from each other, despite being based on exactly the same underlying raw microwave data. People seem to think satellite data is somehow “more reliable”. For temperature of the lower atmosphere, it’s actually substantially less reliable; for a number of reasons. The extraction of a temperature value is certainly one heck of a lot more mathematically complex.

        Thanks for the exchange!

      • Chris Ho-Stuart –

        Ditto, and appreciated.

    • Girma and Chris

      IPCC projection:
      (TAR) warming of 0.15 to 0.3C per decade
      (AR4) warming of 0.2C per decade

      Actual (HadCRUT3)
      Cooling of around 0.1C per decade

      [CO2 levels increased as projected].

      Ergo, IPCC projections (at least for the short term) have been falsified by the actual observation. [All the mealy-mouthing and weaseling of the world will not change this indisputable fact.]

      Will this continue?

      Who knows?

      Max

      • Ergo, IPCC projections (at least for the short term)

        That is an oxymoron. The IPCC projections are NOT for 10 year windows. They are for the central tendency of rise, over a longer period. 20 years is enough to get the central tendency of trend showing up reasonably well, though of course there’s still a bit of up and down due to the unpredicted short term variations apparent in shorter windows.

        Be that as it may:

        The most recent 20 year windows I gave previously. HadCRUT3 (which is known to be biased a little low) gives 0.155 as the most recent trend.

        GISS gives 0.206

        Lots of people seem to like the UAH data; that gives 0.206 also

        I like to keep an eye in the NCDC construction. That gives 0.164

        Since you like HadCrut3, the last time you could get a 20 year window showing a trend of less than 0.1 was 1977-1997.

        From 1979-1999 onwards, the HadCRUT 20 year window has never gone below 0.15.

        Will it do so in coming years? Possibly… 0.15 is after all at the low bound of expectations. But note that HadCRUT4 is coming out soon, which is likely to give better global coverage, of the Arctic in particular, and that is likely to give results aligning more closely with other datasets that already take the full globe into account.

        Using 15 year windows is just wrong. That will show up too much of the unpredicted short term variation. You get 15 year windows with a trend below 0.15 from time to time in all the data sets, and this is not in conflict with expectations.
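
        For readers who want to experiment with window lengths themselves, here is a minimal sketch of the calculation being discussed: an ordinary least-squares slope over a window of annual anomalies, scaled to degrees per decade. The data below are synthetic (an assumed 0.17C/decade trend plus made-up noise), not actual HadCRUT, GISS or UAH values.

```python
import random

def decadal_trend(years, temps):
    """Ordinary least-squares slope of temps against years, in C per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    sxx = sum((x - mx) ** 2 for x in years)
    return 10.0 * sxy / sxx  # per-year slope scaled to per-decade

# Synthetic annual anomalies: an assumed 0.17 C/decade underlying trend
# plus year-to-year noise (illustrative values, not a real dataset).
random.seed(0)
years = list(range(1979, 2012))
temps = [0.017 * (y - 1979) + random.gauss(0.0, 0.1) for y in years]

# Trends over the most recent 20-year and 15-year windows: the shorter
# window wanders further from the underlying 0.17 figure.
print(round(decadal_trend(years[-20:], temps[-20:]), 3))
print(round(decadal_trend(years[-15:], temps[-15:]), 3))
```

        Re-running with different noise seeds shows the point at issue: 15-year windows scatter much more widely around the underlying trend than 20-year windows do.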

      • Chris Ho-Stuart –

        I agree with your point about length of trends, and generally would go even further.

        You mention that –

        From 1979-1999 onwards, the HadCRUT 20 year window has never gone below 0.15.

        I keep an eye on the trends since 1990 for the sole reason that this was the year of the IPCC FAR. I’d therefore make the observation that since then, HadCRUT3 has a trend of 0.14 [as does RSS].

        There are of course caveats aplenty to be had. Those with a ‘cooling’ agenda might point out that had Pinatubo erupted in 2009 rather than 1991, the trends would have been less than 0.1.

        I’m not making great claims for these things, but saying that even a 20yr moving window has its limitations, as I’m sure you know. One obvious example of this is the likely trend of 1997-2006. Barring very dramatic changes, it will have a very low positive trend. I don’t think this is significant, but making a point of the 20 year trends of the past opens you up a little to the ‘what about 1997-2006!!’ if it is indeed very low [or even negative].

      • Sol, thanks. Nicely and fairly stated.

        Yes, the 22 year trend, from 1990-2011 inclusive, is
        HadCrut3: 0.141 (+/- 0.025 for 95% conf on a regression)

        NCDC seems to get numbers close to HadCRUT3; it gives 0.149

        GISS and UAH tend to give stronger trends. 0.174 and 0.172 respectively.

        I’ve just found and added RSS to my spreadsheet; I get 0.142, which matches your calculation.

        Overall, it’s right on the low end of expectations. NCDC or GISS will be, I think, the proper comparison; though it doesn’t make a lot of difference if you prefer one of the others.

        I’m not making great claims for these things, but saying that even a 20yr moving window has its limitations, as I’m sure you know. One obvious example of this is the likely trend of 1997-2006. Barring very dramatic changes, it will have a very low positive trend. I don’t think this is significant, but making a point of the 20 year trends of the past opens you up a little to the ‘what about 1997-2006!!’ if it is indeed very low [or even negative].

        Agree on the limits of a 20 year trend; and I’ve said so in my comments a number of times. That window still incorporates a substantial contribution from untrended short term variation; though it is long enough for the main trend to dominate over the main short term variances.

        However, 1997-2006 is a 10 year window, not a 20 year window; and for what it is worth, it is a window with strong positive trends in all datasets. Was this a typo? Can you confirm what you meant there?

        As I said previously; the HadCRUT3 20 year window hasn’t been below 0.1 since 1977-1996 (0.099); and all 20 year windows including any part of this century are above 0.15… so far. It could dip below 0.15 in coming years, quite possibly. We’ll see; but for GISS or NCDC; that is less likely.
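
        For anyone wondering where a figure like “0.141 (+/- 0.025 for 95% conf on a regression)” comes from, here is a rough sketch of the standard calculation: the OLS slope plus a confidence half-width built from the slope’s standard error. The 1.96 critical value is the large-sample normal approximation (my simplification), and note that autocorrelated annual residuals make the honest interval wider than this naive one.

```python
import math

def trend_with_ci(years, temps, t_crit=1.96):
    """OLS slope in C/decade plus an approximate 95% confidence half-width.

    t_crit=1.96 is the large-sample normal value; for ~20 annual points the
    exact Student-t value (~2.1) is slightly larger, and autocorrelation in
    the residuals widens the honest interval further still.
    """
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    slope = sxy / sxx
    resid = [y - (my + slope * (x - mx)) for x, y in zip(years, temps)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return 10.0 * slope, 10.0 * t_crit * se

# Illustrative check: a noiseless 0.15 C/decade series over 1990-2011
# recovers that slope with an essentially zero-width interval.
slope, half_width = trend_with_ci(list(range(1990, 2012)),
                                  [0.015 * i for i in range(22)])
print(slope, half_width)
```

        Feeding in an actual dataset’s annual anomalies in place of the noiseless series gives numbers of the kind quoted above, though the caveat about serial correlation applies.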

      • Chris –

        Apologies for the typo [and well spotted that it might be] – I meant 1997-2016.

        That window still incorporates a substantial contribution from untrended short term variation; though it is long enough for the main trend to dominate over the main short term variances.

        My point was in a way a slight disagreement with your statement above. I would agree with it but would add “usually”, and I was thinking about the 20 year trends starting in 1997, 1998, and perhaps 1996.

        If the RSS data can show a negative trend over 15 years (which they do now, just) then an unexceptional next 5 years will very likely lead to a low positive trend for a few 20 year periods starting in 4 or 5 years.

        The important point for me is just that trying to extract meaning – particularly statistically convincing meaning – from data such as we have is always fraught with danger. My conclusion in a way – which is more philosophical than scientific – is that hard and fast rules about the lengths of trends needed for relevance are themselves not very supportable. Of course we’re desperate to pin down ‘how many years are needed’ etc. I think the nature of the beast is such that the harder and faster the rules we apply, the less likely we are to glean the most useful information from the data. And maybe there is a limit to how useful very complicated statistical analysis can be. We should allow ourselves to shift our perspectives to and fro depending on the circumstance – which would be OK if we were all profoundly objective…

        I revert back to my original comment – that mostly I would err on the side of using the longest period available and settle for saying ‘we just don’t know’ more often than not.

        But that doesn’t detract from your point that 20 year windows are considerably more useful than decades – which are barely worth studying.

      • The length of trend you analyse should really depend on what question you’re trying to answer. If people were really serious about disproving the existence of a continuing underlying positive trend, rather than politicking on a blog forum, they should really be calculating what would actually be necessary.

        One way to calculate this theoretically would be to produce a time series containing a linear 0.2ºC trend + noise with similar properties to the global surface temperature series, then find out what length of ‘time’ would be needed before you can definitively say that every trend drawn is pretty close to the actual 0.2ºC trend.

        I did this a while ago. As I recall I calculated that about 25 years of annual data is needed to get within +/-0.05 of the actual trend 95% of the time, though I think I included a low-amplitude sine wave, representing the 11-year solar cycle, in that particular experiment + random ‘noise’.

        We can then state a null hypothesis that there is an underlying trend of 0.2ºC/Decade, and this null hypothesis can be falsified with 95% confidence if any 25-year trend across the period is more than 0.05 away from 0.2ºC/Decade. Of course this wouldn’t necessarily be particularly meaningful since the 0.2 figure is clearly not meant to represent an exact estimate and I haven’t accounted for the possible effects of volcanic eruptions or significant changes in solar activity.
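
        The experiment described above can be sketched along the following lines. The noise level and solar-cycle amplitude here are my own illustrative guesses, not the commenter’s actual parameters, so the exact window length at which 95% of fitted trends land within +/-0.05 of the true value will differ from the quoted ~25 years depending on those assumptions.

```python
import math
import random

def decadal_slope(xs, ys):
    """OLS slope of ys against xs, scaled to degrees per decade."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return 10.0 * sxy / sxx

def fraction_recovered(window, trials=2000, trend=0.02,
                       noise_sd=0.1, solar_amp=0.05):
    """Fraction of simulated series of the given length whose fitted trend
    lands within +/-0.05 C/decade of the true 0.2 C/decade trend.
    Each series = linear trend + 11-year sine (random phase) + white noise."""
    rng = random.Random(42)
    hits = 0
    xs = list(range(window))
    for _ in range(trials):
        phase = rng.uniform(0.0, 2.0 * math.pi)
        ys = [trend * t
              + solar_amp * math.sin(2.0 * math.pi * t / 11.0 + phase)
              + rng.gauss(0.0, noise_sd)
              for t in xs]
        if abs(decadal_slope(xs, ys) - 0.2) <= 0.05:
            hits += 1
    return hits / trials

for w in (10, 15, 20, 25, 30):
    print(w, fraction_recovered(w))
```

        The recovered fraction climbs toward 1 as the window lengthens, which is the qualitative point: short windows routinely return trends well away from the underlying 0.2C/decade even when that trend is present by construction.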

  77. “The IPCC and its proponents are emphatic that the flat, cool trend from mid 1940′s to mid 1950′s is not natural variability, but anthropogenic aerosol forcing.”

    In fact, the IPCC statements about this include estimates of both the natural (the AMO, MOC) and human-caused (aerosols, land use) factors that are expected to have led to this cooling, along with estimates of the continued warming trend and anticipation of ongoing research that will continue to add to our understanding.

    The IPCC statements do not involve the sort of declarations that ClimateEtc so often asserts and then shadowboxes against. Quite the opposite. It’s interesting that ClimateEtc perceives itself as a home for radical readers and defenders of objectivity but instead engages in some of the most repetitive rhetoric, and believes that bogus arguments about claims that aren’t made are evidence of independent thought.

    • It’s all rather ironic given a recent post that stated;

      “JC note to the IPCC: rely less on expert judgment and appeal to authority, and more on carefully crafted and documented arguments.”

      No signs of ‘carefully crafted…..arguments’ here.

    • Smoke filled rooms squared rings,
      Shadowboxing aerosols.
      CO2tenduh.
      ==========

    • randomengineer

      You are a nit picker. What you do is micro-examine various IPCC statements looking for exact matching wording, and finding no match to a general impression, declare Dr Curry to be a liar.

      Clearly the impression one gets from poring through the various documents is that the aggregate conclusion is that it’s aerosols, as per what Dr Curry says. Sure, there are weasel-worded “mights” and “coulds” pertaining to natural causes interspersed with the actual conclusions they want you to reach, where any *normal* reader would rightfully conclude that the inclusion of the weasel words is specifically intended to form an inclusive contrast (why yes, we looked at natural stuff, and sure, there’s a slight possibility this could be natural or even the result of evil dolphins, but really? No.)

      The upshot is that they think it’s “X” and there’s a slight possibility they could be wrong. Martha looks solely at the admission of a vanishingly small possibility that they could be wrong and accuses Dr Curry of reaching an invalid conclusion. In truth the conclusion Dr Curry has reached is the correct one, and Martha is an apologist for the IPCC, relying solely on the inclusion of weasel phrasings as the crux of her argument.

      Is there a common Latin phrase, like *ad hominem*, referring to “argument from weasel words”? If not, there ought to be.

      • How about ‘ad stoat’?
        ==============

      • ‘per stoat’, purr feck.
        =======

      • Steve Milesworthy

        Part of caption of Fig 1 FAQ 3.1 AR4 WG1:

        “Results from climate models driven by estimated radiative forcings for the 20th century (Chapter 9) suggest that there was little change prior to about 1915, and that a substantial fraction of the early 20th-century change was contributed by naturally occurring influences including solar radiation changes, volcanism and natural variability. From about 1940 to 1970 the increasing industrialisation following World War II increased pollution in the Northern Hemisphere, contributing to cooling, and increases in carbon dioxide and other greenhouse gases dominate the observed warming after the mid-1970s.”

        Other diagrams show greenhouse forcing only really taking off in the late 1950s (as compared with net forcings from other influences) after which Agung puts a spanner in the works by erupting twice in the early 1960s.

        So “emphatic” seems to be a strong word for the mid 1940s to mid-1950s. In fact I personally can’t recall anyone ever focussing on this particular period (not that that means much). Usually the focus is normally on the 1940 peak or the mid 1940s all the way up to the 1970s.

        Does this adequately represent the IPCC view for example:

        http://scienceblogs.com/illconsidered/2006/03/what-about-mid-century-cooling.php

      • kim

        I think it’s “argumentum ad verbis mustelidae” but, to be sure, we should maybe ask Fred.

        Max

      • re,

        Judith isn’t a liar; she’s just got it wrong on this. I don’t know how, but it’s obvious, and given another recent post on ‘error cascades’ I would have hoped that Judith would have nipped the Leake error in the bud rather than propagate it even further.

    • Martha

      Before you start hyperventilating on this, I suggest you read AR4 WG1 FAQ 3.1, Figure 1, where IPCC states:

      From about 1940 to 1970 the increasing industrialisation following World War II increased pollution in the Northern Hemisphere, contributing to cooling…

      Fact: there are no empirical data to support this suggestion.

      [Just as there are no empirical data to support the suggestion that current cooling is being caused by aerosol pollution from China.]

      But it makes a good story.

      As they say in Italy: se non è vero, è ben trovato (“if it is not true, it is well invented”)

      Max

      • Martha

        Check IPCC AR4 WG1 FAQ 9.2 (p.120):

        During the 1950s and 1960s, average global temperatures levelled off, as increases in aerosols from fossil fuels and other sources cooled the planet.

        Sounds like a pretty definite claim to me (even if it is unfounded).

        Max

      • Steve Milesworthy

        manacker,

        Your first quote says “contributed” to cooling. The context of the quote is that natural variability also contributed (my post above includes more of the context from the quote).

        Your second quote refers to “other sources” of aerosols, but it also refers to the 1950s and 1960s, whereas Judith was referring to the mid-1940s to mid-1950s. Now I’m not being picky by pointing this out, because once you get into the 1960s you can add in the effect of the eruption of Agung – something that is made clear in the second FAQ you quote from.

        “…aerosols from fossil fuels and other sources cooled the planet. The eruption of Mt. Agung in 1963 also put large quantities of reflective dust into the upper atmosphere.”

        So both quotes emphasised the impacts of natural variability as much as the impacts of pollution. “Emphatic” is indefensible. Sorry.

  78. theendisnighnot

    yeah but, yeah but, yeah but…… as far as I’m aware the people at Climate Etc don’t have much/any influence on government policies to “de-carbonise” our economies, whereas the IPCC…………. surely to any dispassionate observer the science should have been settled, or at least not turned into a “cause”, before all these measures which might/might not have an effect were implemented? Cart before horse, anyone?

  79. Here you go Michael…how is this for now?

    http://www.spiegel.de/international/world/0,1518,813814,00.html

    As Eli likes to say: “you sure can cut a mean ‘Betrug’” (‘Betrug’: German for fraud)
    YMMV

    • Die Ratten verlassen das sinkende Schiff. (The rats are leaving the sinking ship.)

      Some already feel duped and many more will feel so. It was almost a perfect crime.

  80. It looks like God loves to share his ironic sense of humor with the whole world at this point in time. Enough joy to go around too. It’s about time.

  81. theendisnighnot

    We know that smoking kills; we’re not so sure about secondary smoke. We know that humans have an influence on the climate, but if “we’re” honest nobody knows by how much or what it could mean, so surely the best thing to do is “lay down our swords” (both sides) and use the best scientific brains available to establish by how much, without agendas (either side), leave the politics at the door and get back to real science. All this wondering about people’s motivations (marxist, socialist, right wing, creationist, fossil fuel funded (I wish!), republican, democrat, labour, conservative) means sweet f.a. in the end. Professor Curry, it seems to me you have some serious people on this blog who, if they worked together, might be able to figure some of this out. Alas, I’m not amongst them! Don’t worry, be happy.

    • Unfortunately, global warming became politicized before it was generally accepted in the scientific community. This tended to polarize the debate before enough evidence could be obtained to either prove or disprove the theory. Now, too many people have staked their careers and/or reputations on either side to be able to backpedal without tripping over their own feet. It may take an entire new set of scientists, who have no ties to any of the organizations you list, to remedy this situation.

      • Dan H.

        It may take an entire new set of scientists, who have no ties to any of the organizations you list, to remedy this situation.

        Is our host one of this “entire new set of scientists”?

        I hope and think so.

        She is taking a beating from the “mainstream consensus” crowd as a “heretic” (according to Scientific American), but I doubt that this is worrying her very much.

        Max

  82. Trends since 1982 (30 years):
    – CRU data : +0.16°C/decade
    – GISS data : +0.17°C/decade
    – RSS data : +0.16°C/decade
    – UAH data: +0.16°C/decade

    Trends since 1997 (15 years):
    – CRU data : +0.01°C/decade
    – GISS data : +0.1°C/decade
    – RSS data : 0
    – UAH data: +0.09°C/decade

    Trends since 2002 (10 years):
    – CRU data : -0.09°C/decade
    – GISS data : -0.001°C/decade
    – RSS data : -0.09°C/decade
    – UAH data: +0.02°C/decade

    Trends since 2007 (5 years):
    – CRU data : -0.02°C/decade
    – GISS data : +0.02°C/decade
    – RSS data : +0.07°C/decade
    – UAH data: +0.02°C/decade

    Conclusion: trends are obviously decreasing far below the IPCC forecast, and we are heading toward a plateau or even a slight cooling, which may last until 2030 according to hypothesis II (change driven by natural variability, especially the PDO, AMO, NAO… cycles).
    There is no cherry picking in this !

    another way to view it
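    For anyone who wants to check such a table mechanically, here is a minimal sketch of the calculation (pure Python, with synthetic illustrative data — not any of the actual CRU/GISS/RSS/UAH records):

```python
# Hypothetical illustration: ordinary least-squares slope over the last
# N years of a monthly anomaly series, expressed in deg C per decade,
# as in the trend tables above. The data below are synthetic.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def trend_per_decade(years, anoms, window):
    """Trend over the final `window` years of the record, in C/decade."""
    cutoff = years[-1] - window
    pts = [(y, a) for y, a in zip(years, anoms) if y >= cutoff]
    xs, ys = zip(*pts)
    return 10.0 * ols_slope(xs, ys)

# Synthetic series: steady 0.016 C/yr rise until 2002, flat thereafter
years = [1982 + i / 12 for i in range(30 * 12)]
anoms = [0.016 * (y - 1982) if y < 2002 else 0.32 for y in years]

for w in (30, 15, 10, 5):
    print(f"{w:2d}-year trend: {trend_per_decade(years, anoms, w):+.3f} C/decade")
```

    The synthetic series illustrates the same effect as the table: the shorter the window, the more a recent lull dominates the fitted slope.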

    • Eric

      That is excellent and answers a question I asked earlier. Does everyone agree that this is a fair interpretation of the time scales mentioned?

      Tonyb

    • It’s a good snap shot of long and short term trends at one point in time.

      It doesn’t show decreasing trends with time; it shows that shorter trends are more influenced than longer ones by a recent lull.

      How is this significant? It’s a simple consequence of what trends do, and no conflict with anything anyone has expected.

      Once again, with feeling. The IPCC does not predict trends over short windows, like 10 or 15 years. Such windows are expected to show substantial slow down and speed up, above and below a longer persistent trend.

      But you aren’t even LOOKING at how trends change over time here.

      Here’s a direct link to a spreadsheet people might like to play with, which I have mentioned before. TemperatureAnomalyTrends.xls. It includes data for UAH, Hadcrut3, GISS, NCDC and BEST; but not RSS. I’ll find and add that shortly.

      It allows you to plot the change of trends over time. Look in particular at the way 15 year windows change over time. Here’s an image I supplied previously. (Note the vertical axis is trend, and the horizontal axis is the center time of the window. Hence this is NOT plotting the data itself, but the way the trends change.) Plot of 15 year trends for all 15 year windows.

      There is indeed a sharp downturn in trend recently; which corresponds to the end of the window coming past the temperature increase around 1995-1998.

      The ONLY one of the windows chosen above that is long enough to make sense in terms of the actual projections being used is the 30 year window. A 20 year window would work as well. A 15 year window is a bit too short.
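      What the linked spreadsheet plots can be sketched in a few lines: slide a fixed window along the series and regress inside each window. A minimal sketch with synthetic annual data (the real exercise would use the actual records):

```python
# Sketch: trend inside every 15-year window sliding along an annual series.
# Plotting these slopes against each window's centre year is the "how trends
# change over time" view described above. Data below are synthetic.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx

def rolling_trends(years, anoms, window=15):
    """List of (centre year, trend in C/decade) for each full window."""
    out = []
    for i in range(len(years) - window + 1):
        xs, ys = years[i:i + window], anoms[i:i + window]
        out.append((xs[window // 2], 10.0 * ols_slope(xs, ys)))
    return out

# A perfectly steady 0.02 C/yr rise: every 15-year window then shows
# the same 0.2 C/decade trend, so any real wiggle stands out against it
years = list(range(1970, 2012))
anoms = [0.02 * (y - 1970) for y in years]
for centre, tr in rolling_trends(years, anoms)[:3]:
    print(centre, round(tr, 3))
```

      With real data the rolling slopes dip and surge around the long-term value, which is exactly the point being made about short windows.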

    • It’s a good snapshot at a point in time of long and short term trends.

      It shows that the short term trend is now showing a downturn. This is not a violation of any prediction.

      It doesn’t even try to show how trends change over time.

  83. Monuments tumbling,
    Civilisations crumbling.
    Ozymandias blames it on the weather.

  84. Dikran marsupial | February 9, 2012 at 9:28 am |
    “It isn’t difficult at all. If the observations (within their stated uncertainty) lay outside the stated uncertainty of the projections, then that would falsify the models. Yes the credible interval is broad, but that is because there are large uncertainties involved. However, that is beside the point. Prof. Curry has claimed that the observations are not consistent with the IPCC projections. If you plot the projections and their uncertainty, then it is clearly not the case. I really don’t understand why some can’t accept this.”

    If the respected atmospheric scientist can bring up the topic, then methinks she should be able to respond to the holes poked in her argument… that is what a good blogger/professor would do…

    • Models failed or failing? The bar can be set as high or as low as imagination desires, and the finish line is a mirage.
      ================================

    • @Robert, Indeed, it would be interesting to hear Prof. Curry’s response. The observations are clearly consistent with the models when the stated uncertainty of the projection is considered.

      • Chief Hydrologist

        Absolutely – the models are right because they are wrong and need to be rethunk. Ummm. The models are right because they are uncertain. Ummm. The models are right…

      • The observations being consistent with the models does not mean the models are right, it just means that they haven’t been demonstrated to be fundamentally wrong.

      • Dikran –

        The observations are clearly consistent with the models when the stated uncertainty of the projection is considered.

        I’ll add – and stating otherwise would then be false.

        I suggest you don’t hold your breath waiting for Judith to address her discussion of the IPCC predictions without mentioning that they were accompanied by:

        The rise will not be steady because of other factors.

      • @Joshua Yes, given that it was in the SUMMARY for policymakers it isn’t that surprising that the detail of the credible interval was simplified in that way for policymakers rather than scientists. Had all the details been left in, it wouldn’t be a summary and it wouldn’t have been suitable for policymakers.

        O.K. so it may be possible to misconstrue the summary for policymakers, but if you actually go back and look at the science (in this case the AR4 model projections) it is completely clear that the observations are consistent with the models. Prof. Curry’s assertion is unequivocally false; it would be to her advantage to withdraw it.

      • What’s clear is that the summary for policymakers was misleading, and misled them. It should be withdrawn, but the horse is out of the barn, and four more about to run rampage.
        =================

      • Joshua –

        I think you can stretch that bit about uneven rises beyond its natural breaking point…

        It might be more reasonable to interpret it as saying year-by-year values will be quite varied, but a) we can give a decadal figure and b) we predict the rise to be 1 degree by 2025.

        From that, it doesn’t mean to say we have to wait until 2025 to give our assessments about the prediction. It might be pointless after 5 years, not very meaningful after 10, more significant at 15 and very much coming out of the wash at 20…

        This is my reason for suggesting the FAR predictions are worth examining – for the purposes of learning. They were made 22 years ago, rather than 5 for AR4. I’m not suggesting passing final judgement, using terms like falsified or anything similar. The fact is, the predictions after 22 years look very high. That’s all!

        I’ll share your observation that some/many will use that to claim proof of a hoax or some such baloney. It shouldn’t hold us back though from being fearlessly honest about what we do and don’t know, and what we do to fill in the gaps [ie use imagination/confirmation bias]

  85. Climate science w/out guilt?
    ‘Impossible,’ says Ozymandias, ‘It’s the driver!
    Hmm, could substitute attractive models for maidens though.’

  86. In all these discussions, we tend to forget that Anthony Watts has presented considerable evidence that global temperatures are:

    1) Inaccurate – many measurements just don’t have 0.1C precision.

    2) Adjusted upwards in recent years (just the opposite to be expected of any UHI corrections).

    3) Based on a variable number of actual measuring sites, with the rest filled in by software!

    I wonder what the real global temperature graph is doing.

  87. Please bookmark this page (http://bit.ly/zA0a2j) so that we can compare IPCC’s projection with observation in the coming years. This is the most easily verifiable graph IPCC ever gave regarding the performance of climate models in the near term.

    Why do RealClimate’s error shades look like a diverging tube instead of a cylindrical one?

    http://bit.ly/xj9PfS

    Is not the projection a linear warming of 0.2 deg C per decade? In that case, does not the error shade need to look like a cylindrical tube?

    Maybe they wanted to keep the recent observations inside the error shades.

    I prefer the following comparison to RC’s one => http://bit.ly/xQJNcF

  88. Realclimate:

    … the Hansen et al ‘B’ projection is running warm compared to the real world

    http://bit.ly/xj9PfS

    Thank you, RC, that is a slight admission.


    Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

    Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?

    • Girma, you write “Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?”

      Simple. If they agree you are right, then the IPCC predictions are clearly garbage; which we know anyway. By pretending that what has happened is Scenario B, then they may be able to keep their scam going for a few more months. But then again, they may not.

    • Why does Realclimate insist the business as usual case in Hansen et al model is scenario B instead of A?

      The answer to that is given in the article. Just read it.

      We noted in 2007, that Scenario B was running a little high compared with the forcings growth (by about 10%) using estimated forcings up to 2003 (Scenario A was significantly higher, and Scenario C was lower).

      Scenario B is the one that comes closest to what actually occurred, so THAT is the one to use.

      You can also get the data of atmospheric concentrations for the three scenarios here: Scenario trace gas data

      We did in fact have a downturn in emission rates, in line with scenario B. This was mainly from the global downturn, not a deliberate policy decision; but be that as it may, that’s beside the point. Of the three possible futures considered, scenario B is the one that turns out closest to actuality.

      Do you guys even bother to read what you criticize?

      • Chris

        Forget Scenario B – that’s a canard.

        Scenario C was actually closest to the actual, as far as temperature is concerned
        http://farm3.static.flickr.com/2537/5738998081_b3b3e55049_b.jpg

        Scenario A was actually closest to the actual, as far as rate of CO2 emissions is concerned (in fact, CO2 rose at a slightly faster rate than Hansen projected).

        But, as you can see, Hansen’s Scenario A projected around twice the rate of temperature increase as was actually observed.

        This was very likely because Hansen’s models used a climate sensitivity that was two times too high.

        Max

        Scenario A was actually closest to the actual, as far as rate of CO2 emissions is concerned (in fact, CO2 rose at a slightly faster rate than Hansen projected).

        Wrong. The data from the scenarios is here: H88_scenarios.dat.

        Do a regression. I dare you. Find the actual data and compare. Do it yourself. The “Linest” function in Excel is what I have been using.

        Rate of rise, 1980-2011

        Scenario A. 1.804 (+/- 0.038)
        Scenario B. 1.731 (+/- 0.022)
        Actual: 1.663 (+/- 0.049)

        Scenario C only has the rise occurring up to the year 2000, so to include C, I’ll just take that interval.

        Rate of rise; 1980-2000

        Scenario A. 1.670 (+/- 0.027)
        Scenario B. 1.652 (+/- 0.022)
        Scenario C. 1.520 (+/- 0.015)
        Actual: 1.497 (+/- 0.051)

        The actual is the lowest rate of rise, mainly because of a major DROP in emission increase rates around the early 1990s.

        The uncertainties are simply regression limits at 95% confidence.

        I don’t know how you get your information, but it’s flatly incorrect.
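        The “(+/-)” figures above come from a standard regression calculation: the slope’s standard error scaled by a critical value, much as Excel’s LINEST output is used. A hedged sketch of that calculation in pure Python (normal-approximation critical value 1.96 rather than the exact t quantile, and synthetic data rather than the actual scenario files):

```python
# Sketch: OLS slope plus an approximate 95% confidence half-width.
# Uses the normal approximation (z = 1.96) instead of the exact
# t quantile to stay standard-library-only; data are synthetic.

def slope_with_ci95(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx                       # intercept
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se = (sse / ((n - 2) * sxx)) ** 0.5   # standard error of the slope
    return b, 1.96 * se

# Synthetic concentration-like series rising ~1.7 units/yr with wiggles
xs = list(range(1980, 2012))
ys = [340 + 1.7 * (x - 1980) + (0.3 if x % 2 else -0.3) for x in xs]
b, half = slope_with_ci95(xs, ys)
print(f"rate of rise: {b:.3f} (+/- {half:.3f})")
```

        Comparing two such slopes then means checking whether their confidence intervals overlap, which is the exercise being described for the scenarios versus the actual record.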

      • Chris,

        It is difficult for me to tell whether you are simply misinformed or are purposely fabricating a story in defense of Hansen’s failed forecast.

        The Hansen 1988 forecast to which I am referring is:
        http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf

        Check Figure 3 (this is the graph, which I posted earlier – but will post again):
        http://farm3.static.flickr.com/2537/5738998081_b3b3e55049_b.jpg

        Hansen’s 1988 study stipulated:

        Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth rate averages about 1.5% of current emissions, so that the net greenhouse forcing increases exponentially.

        Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level.

        Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

        Based on CDIAC data, the actual CO2 emission growth rate increased from 1.5% in the 1970s and 1980s to 1.7% from 1988 to today, so the actual rate of increase was actually around 13% greater than that assumed by Hansen for Scenario A.
        http://cdiac.ornl.gov/ftp/ndp030/global.1751_2008.ems

        Obviously, Scenarios B and C are way off the mark.

        The problem is that Hansen’s Scenario A grossly overestimated the GH warming that would result, very likely because he used a climate sensitivity estimate that was high by a factor of 2 or more.

        Actual warming turned out to be the same as Hansen’s Scenario C, based on the complete shutdown of GHG emissions in 2000, “such that the greenhouse climate forcing ceases to increase after 2000”. But this did not happen, did it?

        You can wiggle and squirm all you want to, Chris, but all-in-all it was a forecast that turned out to be grossly exaggerated (like all of Hansen’s “predictions”).

        Max

      • Manacker says:

        It is difficult for me to tell whether you are simply misinformed or are purposely fabricating a story in defense of Hansen’s failed forecast.

        I’m taking available data, and doing my own calculations, and show the results for anyone to check or repeat. I get the same result as is used in various published work looking back at the old 1988 model.

        The actual forcings which turn out are a bit less than scenario B and a lot less than scenario A.

        Why do you think otherwise? I think it is because you are not even looking at what Hansen et al actually calculate.

        They are climate modelers. Their input to the climate model is an atmospheric composition. They don’t try to calculate from emissions; but they propose rates of increase of atmospheric composition based on models of rates of emission increase.

        I compared the model input (which is an atmospheric composition in ppm for each year) with the actual composition. That’s the correct way to check which scenario is closest to actual; reality is a bit less than scenario B and a lot less than scenario A.

        Your objection appears to be based on criticizing predictions of atmospheric composition based on emissions. That’s a level of indirection you would do well to avoid if you are wanting to check the skill of the climate modeling back then.

        But hey. Even if you do use emissions as a guide rather than the actual model input, your description of that data is still just wrong. There was a major reduction in the rate at which emissions increased around the 1990s before emissions took off again this century.

        That is also reflected in a dip in the rate of atmospheric increase in the 1990s, as I mentioned before.

        I’m not making anything up here or trying to fabricate a story. You’ve cited some emissions data, but have you actually done any calculations with it? Where are you getting your descriptions of trends? It certainly doesn’t match the actual data you are linking.

      • Chris,
        Are you suggesting Hansen has been proven right?

      • Hunter asks:

        Are you suggesting Hansen has been proven right?

        No. I haven’t even looked at that; all I have looked at or commented on is which of the scenarios he considered matches the subsequent reality most closely.

  89. For the posters here who visit gavin’s Real Climate site. I find it interesting that they would not post my comment.

    All I asked was how it made sense to average the results of multiple models when none of them had been demonstrated to accurately predict future conditions, and how it made sense to have faith in models before they are shown to match observed results.

    • Your comment is #663 at the Bore Hole. I suspect the reason it’s there is not simply because it states a skeptical viewpoint but because it’s very general and can’t be answered briefly. I think if you had made a very specific criticism regarding an element in the current thread, it would have been posted in the thread, but that’s just my guess.

      • Fred – I suppose it would take a long, nonsensical explanation of why it makes sense to average the results of models of unknown quality, or to have faith in models that have not been shown to be accurate, but appear to be inaccurate.

        Far more general questions supporting gavin’s view are posted. I challenge you to find anyone who understands modeling, and is not an AGW advocate, who supports implementing policies based on the current GCMs.

      • I suspect it is also because it is a canard. The models have shown an ability to model observed results (hindcasts), and it is unreasonable to expect models to have been demonstrated to predict future conditions, as we don’t have access to observations from the future. Try reasing the chapter from the IPCC WG1 report that deals with climate models and their evaluation, then go and ask questions about model validation at RealClimate; you are likely to get a better reception.

      • Markus Fitzhenry

        “I suspect it is also because it is a canard. Try reasing(sic) the chapter from the IPCC WG1 report that deals with climate models and their evaluation,”

        Ah come on Dikran marsupial, any opposing view is a blasphemy over on the RC blog. I don’t visit there myself out of the principle of free speech.

        I prefer to check out what’s up for myself.

        If I had ever bothered to look at one of the convoluted IPCC reports, I’d probably end up being lost amongst the trees, as some of the more faithful have become.

        You should give liberty of thought more of a go, and consider what’s up for yourself. That way, you can rest assured you’re not just joining some mass groupthink delusion.

        There is tons of material outside the IPCC. Works from noted physicists who are probably too busy with real science to be hobnailed into the whatchamajigs of the greenhouse theory. Get hold of some, it’s fascinating.

    • Rob and Fred

      I suspect (unlike Fred) that the real reason your question was censored out by Gavin is that it raised embarrassing questions, which he was unable to either answer or brush aside (my experience on this site).

      The problem is that Gavin has been playing around with models so long that he has forgotten that they are simply multi-million dollar extensions of the old slide rule and has started to actually believe them.

      The old “GIGO” rule still applies.

      Max

      • Max you write “The old “GIGO” rule still applies.”

        Sarc on. Hasn’t anyone told you that the proponents of CAGW have redefined GIGO. It now means Garbage in, Gospel out. Sarc off.

    • Steve Milesworthy

      Rob,

      Your comment was not posted because it made a boring reference to unjustifiable “faith” in “unproven” climate models. (Yawn!). I got snipped here yesterday for politely suggesting that Latimer had missed the point!

      The post had, however, already answered your question in its list of qualifications. The AR4 model archive is an “ensemble of opportunity”, i.e. the models in it are the models submitted by various institutions without formally judging each model’s qualification. I guess the comparison is purely a game to counter the accusation that has been put, that the temperatures are outside the bounds of “what the IPCC has predicted”.

      I believe more formality has gone into the choice of projections run for AR5, though I don’t know the details.

      • @steve m

        Are you sure your comment got snipped here? I’ve been posting since the beginning and I think I can count on the fingers of one hand the number of times people have been so censored (*). And JC notes that she has done so and why.

        Disagreeing with me should not be a snipping offence…an indication of some other deep seated malaise perhaps ( :-) ), but not a reason for being snipped. Please repost.

        (*) Yes – I was one of them. Probably correctly.

      • Steve Milesworthy

        Yes I got snipped along with some comments by someone whose name begins with “A” (can’t remember) who was making similar points to me, equally politely. Some of your replies to our comments got snipped. You must post so much here that you cannot remember what you posted.

        Obviously you do not realise you are one of Judith’s Chosen. Use your exalted position wisely! ;)

  90. Chris Ho-Stuart

    How come we cannot agree on a description of the existing global mean temperature record?

    Isn’t this the first step in developing a theory that explains the data?

    I claim the global mean temperature has shown a single pattern since the record began.

    This pattern is a uniform warming of about 0.06 deg C per decade with an oscillation of about 0.5 deg C every 30 years as shown in the following plot.

    http://bit.ly/Aei4Nd

    Why do you disagree with this description of the global mean temperature data?

    • Because you’ve got no statistical test for significance, and FAR too short a time to look for a long period oscillation like that — ESPECIALLY in the absence of any physical theory that would explain such a thing or give a prior expectation of periods.

      We do agree on the data itself. When I give diagnostics, like trends and so on, I use standard mathematical tools and significance tests. You eyeball patterns with no significance test and no physical basis for guiding the choice of model to test against the data.

      The idea of a very long term fixed 0.06 C/decade rise is really bizarre. It can’t be long term on the scale of millennia; that would have the Roman Empire living in the mother of all ice ages. There’s no physical factor that has been holding nice and steady like that to drive such a rise over the last 100 years.

      You’re doing the statistical equivalent of finding pictures of the Virgin Mary on your breakfast toast.

      • Chris Ho-Stuart


        FAR too short a time to look for a long period oscillation like that — ESPECIALLY in the absence of any physical theory that would explain such a thing or give a prior expectation of periods

        I thought that was explained in Knight et al (http://bit.ly/nfQr92 ):


        The AMO is a genuine quasi-periodic cycle of internal climate variability persisting for many centuries, and is related to variability in the oceanic thermohaline circulation (THC).

        Chris:

        The idea of a very long term fixed 0.06 C/decade rise is really bizarre. It can’t be long term on the scale of millennia

        I agree. It cannot be fixed. But for the period since the 1850s it is a fixed straight line, because that is what the data shows. Although the trend line for the period since the 1850s is a straight line, on a longer time scale it is part of a very long curve that contains the Little Ice Age, the Medieval Climatic Optimum, the Holocene Maximum, etc.

        Why do “climate scientists” at Realclimate choose the starting year for trend calculation from the 1970s or 1980s?

        If I start trend calculation starting from year 1910, here is how my projection would be wrong => http://bit.ly/w5P3c9

        That is what might happen to the current projection of “climate scientists” at Realclimate.

      • Girma, Knight et al do it right.

        They DO look over a much longer time; look at figure 2 of the paper. That’s what needs to be done if you are testing ideas about effects with a long period. Just eyeballing 150 years of instrument record can only give weak support to the hypothesis; the look at longer periods of time is essential.

        Note also that their model is “quasi-periodic”. It is not a simple sine wave with a definite period. Rather, it shows a characteristic time scale for changes, but shifts up and down somewhat chaotically at that scale. That’s pretty standard for these kinds of effect.

        Finally, although you’ve agreed that the long term underlying linear line is unrealistic, you use it crucially for “predicting” or “falsifying” your supposed model into the future. You need to look at tools for identifying a periodic (or quasi-periodic) signal on top of a base trend that is NOT linear, because there’s a heck of a lot more going on with climate than you can capture on such scales with one line and a sine wave. There ARE such tools, but as I’ve said, you really need a professional statistician to deal with that. It’s not trivial. I just work at the level of basic significance tests for regression lines and so on, which are okay as a ball park starting point but not really up to a proper hypothesis test.

        I doubt any professional statistician would be much interested in how you’ve made your proposal, especially as there is no physical basis whatsoever being proposed which could be the basis of a test of prediction against data.

      • Chief Hydrologist

        Nonsense – there are shifts everywhere in climate.

        Here is what you get when the ‘internal signal’ is removed from the 20th Century temperature record. – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=MONOTONIC.gif – peer reviewed you might note. Just google Anastasios Tsonis and start reading.

        Girma is quite correct that the residual of ‘recent warming’ over the ‘long term’ is much less than 0.2 degrees/decade.

      • Chief Hydrologist

        Here is another one from my climate album – I forget who it is even.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ensosubtractedfromtemperaturetrend.gif

        This looks at ENSO and downwelling short wave radiation – perfect correlation.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSODSR-1.gif

        Clouds vanish in an El Nino and form in a La Nina – energy dynamics.

        The problem is not the lack of a theory about why these things happen, but that too many people in the blogosphere are making it up as they go along. The information is there, and there needs to be an effort to understand it.

        Robert I Ellison
        Chief Hydrologist

      • Chief Hydrologist

        Oh – and need I say that 100 years of data is far too short to identify anything into the future – it is all hypothesis 3 stuff – as Tomas quite rightly says.

      • Chief Hydrologist

        Oh and we are in a cool Pacific mode – not prediction but here we are and it might last another decade or three. Google Pacific decadal variability and go from there

  91. Gee, I posted three comments to Chris and he responded to none. I give up. Let his silly argument stand.

  92. Chris

    “Statistical test for significance?”

    Huh?

    What kind of double-talk is that?

    Girma is showing you actual physical observations (warts and all) of the globally and annually average land and sea surface temperature anomaly over time.

    Is this “statistically significant”?

    You bet it is.

    And it shows multi-decadal cycles of warming and slight cooling of about 30 years each with an amplitude of around +/- 0.25C, like a sine curve on a tilted axis with an overall warming trend of around 0.6C per century.

    Max

  93. “Statistical test for significance?”

    Huh?

    What kind of double-talk is that?

    It’s standard statistical terminology.

    For example. Girma shows HadCrut3 from 1850 to 2011 inclusive.

    I calculate the trend to be 0.45 C/century. I don’t think Girma is doing any calculations at all. His 0.6 per century is way off what his graph shows.

    The 95% confidence bound on the regression is 0.002 — although that is based on a trend plus noise model. A sensible look at Girma’s “model” would need to consider significance of the oscillation period, amplitude and phase, as well as the trend. That’s four degrees of freedom.

    I’m not doing the calculations here, but straight off the bat I can tell the significance is going to be low. Also, there’s no physical model to back up this bizarre model — and excellent physical reasons to be confident that temperature is NOT increasing with a simple linear trend plus pure sine wave over that long a time.

    What you need is an actual physical model or cause, so you can check a theory against data. That’s what is done in the paper Girma cited earlier by Knight et al… and note that they use a “quasi-periodic” signal. A pure sine wave would be extremely surprising, and even if you had any reason to expect such a thing, showing one cycle only means Girma is proposing steady behaviour on much, much longer scales than anything the IPCC considers.

    The real physics of the situation does not give a single consistent linear trend. The forcings change over time. The enhanced greenhouse effect took off in a big way from mid-century — not based on temperature, but based on the physics of the known forcings at work. You test that physical theory against the data. That’s how science works.
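The two fits being argued over here can be sketched numerically. A minimal illustration in Python, using synthetic data as a stand-in for HadCRUT3; the 0.45 C/century trend, the noise level, and the 60-year cycle period are all assumed round numbers, not values from the actual record:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2012)
t = (years - years.mean()) / 100.0            # time in centuries, centered

# Synthetic anomaly series: assumed 0.45 C/century trend plus noise
anoms = 0.45 * t + rng.normal(0.0, 0.1, t.size)

# Ordinary least squares fit of a pure linear trend
A_lin = np.column_stack([np.ones_like(t), t])
coef_lin, *_ = np.linalg.lstsq(A_lin, anoms, rcond=None)

# Linear trend plus a sine wave of FIXED period (60 years assumed here).
# With the period fixed, the model stays linear in its parameters; letting
# period, amplitude and phase all float (the extra degrees of freedom
# mentioned above) would make the fit nonlinear.
period = 0.60                                  # 60 years, in centuries
A_sin = np.column_stack([A_lin,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
coef_sin, *_ = np.linalg.lstsq(A_sin, anoms, rcond=None)

print(f"linear-only trend:  {coef_lin[1]:.2f} C/century")
print(f"trend + sine trend: {coef_sin[1]:.2f} C/century")
```

Whether the sine terms are statistically significant would then be checked with something like an F-test comparing the two residual sums of squares; on a straight line plus pure noise, as here, they should not be.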

    • Chris, you write “You test that physical theory against the data. That’s how science works.”

      Yes and no. We have not got that far yet. All we have, and all Girma shows, is ALL the data plotted on one graph. If there was more data, it would be plotted. What the data shows is that temperatures have been rising linearly since the data started. On top of this linear trend is some sort of sine wave. So far as I am aware no-one has any idea why this is happening. So there is no theory to explain the data. All we know is that there are factors which affect temperature, and which have produced the observed results. What these factors are is unknown. Yes, people have ideas what they might be, but there is no coherent theory to explain the data.

      However, what the data clearly shows is that there is no CO2 signature, as hypothesised by the proponents of CAGW. If there were a CO2 signal, then by now the observed temperatures would be outside the +/- 0.25 limits, on the high side. This has not happened; this is Trenberth’s “missing heat”. This is what the CAGW hypothesis completely fails to explain.

      Of course this trend has not been going on for millennia. Nor will it last for millennia into the future. There are clearly long term factors which affect temperature, and which for the moment are not having any effect.

    • For example. Girma shows HadCrut3 from 1850 to 2011 inclusive.
      I calculate the trend to be 0.45 C/century. I don’t think Girma is doing any calculations at all. His 0.6 per century is way off what his graph shows.

      You cannot arbitrarily pick the beginning and end years in a trend calculation of data that shows oscillation. The start and end years must be at the same stage of the oscillation cycle. We know that the 1880s were global mean temperature peaks. We also know that the 2000s were global mean temperature peaks. As a result, these two periods may be used as start and end points in calculating the global warming trend:

      Graph => http://bit.ly/xxMj6K

      Data (0.06 deg C per decade warming) => http://bit.ly/wav7fq
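The endpoint rule being proposed here can be illustrated with synthetic data. A sketch in Python; the 60-year period, 0.25 C amplitude and 0.06 C/decade trend are assumed round numbers chosen to resemble the argument, not fitted values:

```python
import numpy as np

years = np.arange(1850, 2012)
t = years - years[0]
# Known linear trend (0.06 C/decade) plus a 60-year oscillation
series = 0.006 * t + 0.25 * np.sin(2 * np.pi * t / 60.0)

def endpoint_trend(y0, y1):
    """Trend from the simple endpoint difference, in C/decade."""
    i0, i1 = y0 - 1850, y1 - 1850
    return (series[i1] - series[i0]) / (i1 - i0) * 10

# Endpoints a whole number of cycles apart (same phase): the oscillation
# cancels and the true underlying trend is recovered exactly.
print(f"same phase (1885-2005):  {endpoint_trend(1885, 2005):.3f} C/decade")
# Endpoints at different phases of the cycle: the oscillation biases the
# estimate away from the true trend.
print(f"mixed phase (1915-2005): {endpoint_trend(1915, 2005):.3f} C/decade")
```

With these assumed numbers the same-phase endpoints return 0.060 C/decade exactly, while the mixed-phase endpoints return about 0.032, which is the sense in which endpoint choice matters for an oscillating series.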

      • Note that you are now picking different end points to the ones you picked previously; you’ve got a new model entirely. One that is already falsified by looking back over the data prior to 1880.

        This new and different model now has a gradient of just over 0.6; your previous graph was a model with a gradient of 0.45, as I told you before.

        How will you decide which of these two models is the better fit to data you have already?

      • Oh Chris!

        It is just a very crude and approximate description of the complex observed climate data.

        Thank you very much for the exchange.

      • I know that; I’m glad to see you know that. We seem to be on the same page again.

        I guess you know that your original mention of 0.6 C/century was with respect to the older graph in which you actually have 0.45 C/century; rather than acknowledge a mathematical slip you just put up a different model to allow you to keep the 0.6 value.

        Suggestion. If you make a mathematical mistake, it’s best to acknowledge it. I make them too; it’s no big deal and it’s healthy.

        Manacker at least has seemed to perceive a significance to your model which has not been determined and which you and I both appear to agree is not really there. Manacker, would you like to reconsider and agree that the description was “very crude and approximate”?

        That’s pretty similar to saying “not particularly significant”, IMO.

        Girma and I both agree, I think, that a steady linear trend is not going to work as an underlying basis for long term trends over the last several centuries. We both agree, I think, that there are “quasi-periodic” factors impacting climate.

        Where we probably still disagree (Girma, feel free to correct me if I misrepresent you anywhere here) is whether quasi-periodic factors could plausibly explain the warming seen in the instrument record. My position on that is … no; the only hypothesis which has any legs involves a non-periodic factor with a major warming influence in recent decades. As I said previously, there’s a lot more than just looking at temperature records to test that idea.

      • Chris

        My description is much, much better than IPCC’s description of the global mean temperature data shown in the following graph =>

        http://bit.ly/b9eKXz

        What do you say?

      • I’d say you’re comparing apples and oranges again. That IPCC graph is not an attempt to model temperatures; it is simply to show how trend lines over different periods compare.

        The proper comparison, with conventional expectations as described in the IPCC reports, is as I told you previously: the only hypothesis which has any legs involves a non-periodic factor with a major warming influence in recent decades.

        To see where the actual IPCC idea is given, try this figure:
        Figure TS-22. Also go on to TS-23 which works as well.

        The models used in climate science are based not on extrapolated linear trends, but on expected consequences of all known physical forcings — which are not periodic.

        Then the expectations can be compared with observations, as the figures show. The expectations are not perfect. But they are better than straight lines plus a sine wave.

      • Chris

        I’d say you’re comparing apples and oranges again. That IPCC graph is not an attempt to model temperatures; it is simply to show how trend lines over different periods compare.

        I strongly disagree.

        Here is IPCC’s conclusion based on that graph:

        Note that for shorter recent periods, the slope is greater, indicating accelerated warming.

        That conclusion is one of the saddest stories in the history of science.

      • Suit yourself. As far as I can see, you have yet again quoted material which confirms what I have been telling you, and taking away the complete opposite meaning.

        First, what you have quoted confirms, as I told you, that the figure is not an attempt to represent the IPCC model, but to show how trends are changing.

        Second, the inference of acceleration in warming from those trend lines is straightforward. I have no idea why you think it is “sad”. The data does show a plain acceleration on the scale of the last and shortest window, which in this case is 25 years.

        You might like to note that the precise same technique, of a series of successively shorter trend lines to the same end point, is used above by Eric Ollivet (this comment) in reverse, to show a deceleration on shorter time scales, which is the recent downturn. It’s a fair way to illustrate an acceleration or deceleration of trend. (It’s not the best technical test, but it’s fine as a useful graph to convey the sense of acceleration or deceleration in the rate of change.)

      • Chris

        On why the accelerated warming conclusion of the IPCC is one of the saddest stories in the history of science.

        Here the accelerated warming of the IPCC:
        http://bit.ly/b9eKXz

        Here is how the IPCC interprets the above data:

        1) Global warming rate for the 150 years period (RED) from 1856 to 2005 was 0.045 deg C per decade.

        2) Global warming rate for the 100 years period (PURPLE) from 1906 to 2005 was 0.074 deg C per decade.

        3) Global warming rate for the 50 years period from (ORANGE) 1956 to 2005 was 0.128 deg C per decade.

        4) Global warming rate for the 25 years period from (YELLOW) 1981 to 2005 was 0.177 deg C per decade.

        IPCC then states:
        “Note that for shorter recent periods, the slope is greater, indicating accelerated warming.”

        Okay, let us apply this “IPCC interpretation of data” procedure to compare the global warming rates in the last 25 years to that in the last 13 years going backward from 2010 as shown in the following plot.
        http://bit.ly/fMwWl1

        This result gives:
        1) Global warming rate for the 25 years period (RED) from 1986 to 2010 was 0.17 deg C per decade.

        2) Global warming rate for the 13 years period (GREEN) from 1998 to 2010 was 0.00 deg C per decade. (No warming!)

        Like the IPCC, I can then state:
        “Note that for shorter recent periods, the slope is smaller, indicating decelerated warming.”

        Is my conclusion valid?

      • Yes, Girma; it’s valid. You have given a valid basis for showing the recent short term deceleration.

        Indeed, it is this downturn in short term trends that prompts this whole discussion.

        Cheers — Chris

      • Chris –

        Can you set me straight on something that may be a terminological confusion on my part? Is there a commonly used term for what James Hansen is referring to when he says Equilibrium Sensitivity [including slow feedbacks] is 6 C for doubled CO2, compared with 3 C for ECS considering only fast feedbacks?

        I suppose I’d like to know if this is a widespread distinction or whether Hansen has his own interpretation. Perhaps I’m not sure when scientists speak of equilibrium climate sensitivity, what they usually mean, and are understood to mean. Do many people make a specific distinction between fast and slow feedbacks as well as between transient and equilibrium sensitivity?

        Hansen talks about this here – http://pubs.giss.nasa.gov/docs/2008/2008_Hansen_etal.pdf

    I just did some research on RC’s latest updates on temperature trends vs. models. The updated graphs all seem to me to show a temperature trend of roughly 0.1K or so per decade even for 1965-2010. Their Hansen graph claims 0.17K per decade for the last 30 years. But UAH lower troposphere looks like about 0.1K per decade. There was some mumbling about how lower troposphere will show wider swings than surface, which makes sense. I note that actual temperature is still below Hansen’s Scenario C. There is some discussion of how long it will take before Schmidt might admit that the model trends are too high and how he can “get out in front of it” by coming up with a pre-emptive explanation if things continue to go badly. Anyway, it looks to me that a variety of sources seem to show that multidecadal trends are quite a bit lower than the models. Maybe I missed something.

    • David,
      In 1988, Hansen predicted that CO2 emissions would increase at a rate of 1.5% per year if continued unchecked. Since the actual increase has been slightly higher than that (closer to 2%), even his scenario A could be considered conservative. The scenario A temperature increase from 1984 through 2011 amounts to ~0.33C/decade. The actual rate of increase has been 0.16C/decade (CRU). Some people are hyping Hansen’s Scenario C as being similar to the recent temperature increase, but as you mentioned, it is still high (0.21C/decade) compared to the observed changes. This was his “best case” scenario, whereby CO2 emissions ceased in 2000. We know that has not happened.
      My suspicion is that Schmidt, and many others, will never admit that their models are too high. They have invested too much in them being right. You have not missed anything.

      • Dan H: My suspicions is that Schmidt, and many others, will never admit that their models are too high.

        Schmidt: As we stated before, the Hansen et al ‘B’ projection is running warm compared to the real world

        Another failed prediction?

        Regarding Hansen’s scenario A, the main difference between its scenario and what has actually happened is not the CO2 increase (which, incredibly, is almost exactly on point) but changes in CFCs and methane. Both are way below what was assumed in scenario A.

      • Both are way below what was assumed in scenario A.

        Here are some graphs to quantify this point. It’s remarkable how well Scenario C tracks the evolution of CFC concentrations.

      • Paul S – it does no good to point this out. No matter how often it’s done, they revert to the same distortion. Some of the people who do it have called me dishonest, and I laugh my butt off.

        Really PaulS, an integral part of Hansen’s projections was the portion of emissions that remained in the atmosphere. His model obviously failed by about a factor of 2 on this critical issue. In any case, the actual data is below Scenario C, which assumed that emissions ceased in 2000. That to me says his predictions are falsified. You can try to parse the obvious to say that part of his model might have been right, but that’s the tactic of the lawyer, not the scientist.

        In any case, I note on RC, Gavin’s plot of data vs. AR4 model predictions shows that the data has a trend of roughly 0.1K/decade. This I think supports Judith’s point in this post. This trend is a 30 year trend.

      • An integral part of Hansen’s projections were the portion of emissions that remained in the atmosphere.

        This is true but it’s not the reason for lower observed atmospheric concentration of gases compared to Hansen’s ‘A’ projections.

        CFCs represent the main difference between Scenario A and reality, and they were cut by the Montreal Protocol. In terms of the CFC storyline, Scenario C is the closest match to reality. As you say, a climate projection should be judged on its ability to track the quantities of certain gases, and it seems to have done a remarkably good job in this respect.

        The tactic of a lawyer is to throw any argument at the other side that they think they can get away with, rather than checking if it genuinely fits the facts. If you really want to get to the truth of why Hansen’s scenarios A and B appear to be overpredicting temperature rise you’ll need to develop a better understanding of the factors involved. If you want to continue making simplistic “It’s wrong!” arguments to try and get a win for your ‘side’ amongst those who don’t know any better, please continue as you are.

  95. Clearly, there is no such thing as ‘climate science’. Science requires a definite, testable prediction to be attached to a hypothesis. As Chris Ho-Stuart has argued consistently on here, no such prediction has ever been made. The IPCC merely supply projections (not predictions) for a range of scenarios from an ensemble of climate model runs.

    Global warming is on the same level as a pub debate.

    • You must go to a pretty low-grade pub, mate!

      • Chief Hydrologist

        G’day Mate,

        Yep. My local is the Great Western in Rockhampton. It has an indoor bull ring and pool tables. It is one of 2 places in the world where a man can drink, spit, swear and ride bulls at the same time. The other is in Texas. I love Texas too.

        I don’t rightly think we have ever seen a pissant warmist in the Great Western. It would be like a cowboy went up to the bar and ordered a chardonnay. It’s just not right.

        I’ll just mosey along now on my pretty blue pony called Shibboleth. – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=blue_horse.jpg

        Robert I Ellison
        Chief Hydrologist

      • Markus Fitzhenry

        Oh yea. My local is the Miriam Vale Hotel in Miriam Vale. It has an indoor bull ring and pool tables. It is one of 2 places in the world where a man can drink, spit, swear and ride cows at the same time. The other is the Great Western in Rockhampton

    • jdey123, you misrepresent what I have said.

      I did not say “no prediction”. There ARE predictions aplenty, and they’ve been quoted here by many people.

      The source of confusion is not whether there are predictions or not (unless you are genuinely confused on that point!) but whether any of the predictions tell you what to expect over a short period of about a decade. The shortest prediction that has been quoted here is for two decades; and the IPCC states explicitly that this prediction (of a trend of “about” 0.2 C/decade) is largely independent of scenarios. The scenarios only really start to make a discernible difference on longer terms.

      • No, they’re projections not predictions. Read the small print. You’re good at that.

      • I was speaking of the short term prediction/projection for trends over two decades which everyone here has been talking about. It’s explicitly noted as being independent of the scenarios.

        It’s still a “projection” in the sense that it is conditional on, for example, no huge volcano showing up and cooling the planet down for a few years.

        But in practice this (in my view) does stand as a good falsifiable prediction, without strong preconditions on the scenarios.

        The distinction you appear to have made is that projections are something at the level of a pub debate. I think they are a lot more than that, and that they do allow for a subsequent test against observations. They are falsifiable. Wait until the time window is up, identify which scenario applied, and then check the corresponding projection.

        A projection is, IMO, a prediction with an associated (unpredicted) precondition (the scenario). The one we are speaking of is largely independent of scenario; hence the distinction doesn’t really matter here.

      • Chris

        The IPCC projection was stated fairly clearly, as follows (AR4 WG1 SPM, p.12):

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.

        In actual fact, there was NO warming – zero, zilch, nada – for the first decade of the 21st century (January 2001 through December 2010).

        The most recent decade (January 2002 through December 2011) has shown a net cooling of around -0.1°C instead.

        In AR4 WG1 Ch.10 (Figure 10.4 and Table 10.5), IPCC shows us how the warming forecasts for the first decades of the century tie into the longer-range forecasts up to the end of the century.

        The first part of the forecast did not occur as the models had projected, so why should we believe that the later portions will?

        Chris, it is pretty obvious that we should NOT believe that the models will do any better from here on out than they have done until now.

        That is the point of this whole discussion.

        THE MODELS HAVE DEMONSTRATED THAT THEY ARE UNABLE TO PROJECT FUTURE CLIMATE CHANGE

        Why is this, Chris?

        I would suggest that the reason is quite simply:

        THE MODELS ARE USING A VALUE FOR CLIMATE SENSITIVITY, WHICH IS EXAGGERATED (BY A FACTOR OF 2 TO 3).

        [Sorry for putting this into bold caps, but it appears it has been impossible for anyone here to get your attention with normal words.]

        Max

        Manacker, the term “degrees per decade” is a unit, not a window or timeframe.

        The time frame given there is “for the next two decades”.

        You are taking it here as settled that the prediction means that each of the next two decades will show a trend of about 0.2 C per decade. You don’t even acknowledge that this has been the point of argument over the whole discussion.

        I am saying that the correct meaning of what is written is that the observations over two decades are the window over which a trend with the given magnitude should be observed.

        I base this not on quibbling over minutiae of wording, as people keep accusing me, but on looking at the whole context of the report.

        (1) The large variations on 10 year windows is a consistent feature of the available data record as far as we can see.

        (2) The models used by the IPCC in AR4 also show the same kind of large variations over 10 year windows. (See Figure TS-23 of AR4 to see individual model runs giving this characteristic variation on that time scale.)

        (3) It’s been a consistent part of IPCC reports that these shorter scale variations exist and are an open research question. (See, for example, section 3.2.2.6 of AR4 on “Temporal Variability of Global Temperatures and Recent Warming”).

        It looks to me that it’s the people who are trying to use 10 year windows to test the predictions who are the ones focused on minutiae of wording and not looking at the whole report or the history of the science.

        Furthermore, I think that “degrees per decade” is not just narrow technical jargon. The use of units like this is well and truly established in general discourse… as Louise noted with “miles per hour”.

        I’m not just inventing some story to protect the IPCC. I’m repeating the same general point of information on the nature of climate that I’ve explained to folks for many years. There’s lots of short term variation.

        The trend over 10 year windows within the last 20 years has varied from -0.10 to +0.40 C/decade.

        (Using HadCRUT3 monthly data, and all 10 year windows within 1992 to 2011 inclusive)

        The trend over 10 year windows within 1982-2001 varied from -0.01 to 0.35 C/decade.
        The trend over 10 year windows within 1972-1991 varied from 0.00 to 0.42 C/decade.

        The trend over 20 year windows within 1972-2011 has varied from 0.10 to 0.24 C/decade. (The smallest of those trends was seen over 1977-1996; the most recent is running at 0.16.)

        Now, you can continue to look at “degrees per decade” and treat that as the window, or complain that they should have worded this sentence better to underline that it was the unit, and that the twenty mentioned in the same sentence was the time frame.

        Or perhaps you can think that the IPCC really did make a prediction that flies in the face of what data was already available to them at the time.

        But at least acknowledge that this is the point of dispute! Should you look at any 10 years within those two decades, or should you look at the whole two decades.

        You apparently think the prediction can be tested against decade within the specified 20 years. I say the test should be for the trend over the whole 20 years. Is that a fair statement of what we disagree upon?

      • PS. I most certainly agree with the statement that models are unable to predict conditions over 10 year time frames; just in case that is not obvious in the above.
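The spread of short-window trends quoted above can be reproduced qualitatively with synthetic data. A sketch in Python; the 0.17 C/decade trend and the noise level are assumed for illustration and are not derived from HadCRUT3:

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(480)                 # 40 years of monthly anomalies
t_dec = months / 120.0                  # time in decades

# Assumed 0.17 C/decade trend plus smoothed (autocorrelated) noise
noise = np.convolve(rng.normal(0.0, 0.15, months.size),
                    np.ones(12) / 12.0, mode="same")
anoms = 0.17 * t_dec + noise

def window_trends(y, t, win):
    """OLS trend (C/decade) over every contiguous window of `win` months."""
    return np.array([np.polyfit(t[i:i + win], y[i:i + win], 1)[0]
                     for i in range(len(y) - win + 1)])

tr10 = window_trends(anoms, t_dec, 120)   # all 10-year windows
tr20 = window_trends(anoms, t_dec, 240)   # all 20-year windows
print(f"10-yr trends range: {tr10.min():+.2f} to {tr10.max():+.2f} C/decade")
print(f"20-yr trends range: {tr20.min():+.2f} to {tr20.max():+.2f} C/decade")
```

The 10-year windows scatter far more widely around the underlying 0.17 C/decade than the 20-year windows do, which is the substance of the dispute over which window length the AR4 statement should be tested against.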

    If I am planning a long journey and want to estimate how long it will take me, I look up the distance and, assuming an average of 50mph (since most of the journey is motorway or similar), work out how long it will probably take me. It doesn’t mean I expect to be travelling 50 miles in each and every hour. Some of my journey I will be stationary and drinking coffee; some of my journey I will be breaking the speed limit and travelling at 80mph.

    Trying to distort common English usage in this way is not honourable.

    • @louise

      There was no reason why they couldn’t have written a different sentence had they chosen to do so.

      For example they could have said ‘ at an average of 0.2C per decade’, which would cover your point, and only adds two or three words.

      But they didn’t. The point stands.

    The problem arises when you are stationary and drinking coffee longer than you expected. After that you continue the journey, but in the opposite direction for some time! At the end you have to turn around and travel at 400 mph to achieve an average of 50 mph.

    And you’d be better at estimating your arrival time if you’re going to the next town than if you’re going half way across the country. Here’s where the climate pub debate differs. The claim is that they can’t possibly be expected to estimate the near term, but if you live long enough, they’ll be right in 2100, just you wait and see. Clearly complete nonsense. Oh, and they don’t even make a prediction for that; it’s a projection, because they can’t possibly be expected to predict man’s emissions. So climate pub debaters who like warm beer find it harder to predict man’s emissions than to form a climate hypothesis.

  97. Just as the derivative f'(x) and second derivative f''(x) can tell you what's happening with the principal function f(x), so too we can learn from plots of the gradient of the temperature and the trend of that gradient, as shown in the yellow line in the plot at the foot of my Home page at http://climate-change-theory.com

    There we see an apparent long-term trend which is decreasing slightly in its rate of increase. This fits with the concept of a cyclic pattern with a maximum in the 12th century, a minimum in the 17th century and a future maximum in the 22nd century, minimum in the 27th century, roughly anyway.

    The plot also shows a very clear shorter cyclic pattern which has obviously been the cause of all the debate. Just take a look at it all and draw your own conclusions. Remember, the plot is not a plot of temperatures but instead it is a plot of the gradients of moving 30 year trend lines.

  98. Question to me from Vaughan, asked in this comment above:

    Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend” as you put it. Since I apparently need to learn a lot more before I can even follow discussions, if you could convince me you had a more reliable approach you would have my full attention!

    There are two basic definitions of sensitivity used; “equilibrium sensitivity”, and “transient response”.

    The equilibrium sensitivity concerns how much warming will occur in response to a net forcing up until the whole system is back in balance. It will take some time for this response to occur, and the value of equilibrium sensitivity is all about how much warming you get if you wait long enough; not about how quickly the warming occurs. Hence you can’t read it from a trend.

    The transient response sensitivity (or “transient climate response”, TCR) on the other hand, concerns how quickly warming occurs in response to a continuously increasing forcing. This is about the rate of warming, and it does in principle show up from a trend.

    That’s my attempt at a simple (low level) explanation. It’s not fully correct; I’ve simplified. TCR is not quite as simple as being the trend, but it’s certainly closer to that than the ECS. You can see the two concepts explained more carefully and with nice diagrams in the third assessment report, chapter 9, section 9.2.1. I don’t think such a careful description is given in the fourth report.

    Here’s a link, and see figure 9.1 on that page for a nice graphical illustration of how they are defined. AR3, section 9.2 on “Transient Climate Response”.

    Note that the TCR is always less than the equilibrium sensitivity; it is also better constrained. (See 4AR, technical summary, section TS.6.4.2 Equilibrium and Transient Climate Sensitivity.)

    Cheers — Chris
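The ECS/TCR distinction described above can be made concrete with a zero-dimensional energy balance model. Everything here is an assumed round number for illustration (feedback parameter, heat capacity, ramp length), not a quoted result:

```python
# Zero-dimensional energy balance model:  C * dT/dt = F(t) - lam * T
lam = 1.2     # W m^-2 K^-1 feedback parameter (assumed; gives ECS = 3.7/1.2 K)
C = 30.0      # W yr m^-2 K^-1 effective heat capacity (crude ocean stand-in)
F2x = 3.7     # W m^-2 forcing from doubled CO2

dt = 0.01     # timestep in years (simple forward Euler integration)
T = 0.0
tcr = None
for step in range(int(500 / dt)):
    yr = step * dt
    # Forcing ramps linearly to F2x over 70 years (roughly 1%/yr CO2 growth
    # reaching doubling), then is held constant while T relaxes to equilibrium.
    F = F2x * min(yr / 70.0, 1.0)
    T += dt * (F - lam * T) / C
    if tcr is None and yr >= 70.0:
        tcr = T                      # transient response at time of doubling

ecs = F2x / lam                      # equilibrium response
print(f"TCR ~ {tcr:.2f} K   ECS ~ {ecs:.2f} K   ratio {tcr / ecs:.2f}")
```

With a larger effective heat capacity the gap between the two widens; with a tiny one, TCR approaches ECS. That is the sense in which TCR is about the rate of response while ECS is about the final amount.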

    • Chris

      In describing the difference between “equilibrium sensitivity” and “transient response” you get into descriptions which sound very much like religious belief or dogma.

      Show me the empirical data, based on real-time physical observations or reproducible experimentation (NOT climate model runs), which support the premise that GH warming requires decades or even centuries to reach “equilibrium”.

      Until you can do so, I will have to assume that it takes only a matter of a year or so for this to occur and that all the “hidden in the pipeline” postulations (Hansen et al. plus copies) are simply “balderdash”, “poppycock” (or B.S. = “bad science”).

      Max

        Manacker, the best empirical evidence for it taking significant time to get to equilibrium is the measurement that the Earth is well out of balance now. There’s been quite a lot of research on that; it’s basically about trying to determine the rate of heat flow into the ocean. It’s not a solved problem; the amount of imbalance is known only imprecisely; but it is indeed well supported empirically that you don’t get to equilibrium quickly.

        The physical theory bearing upon this is much MUCH more straightforward than model runs. For the planet to come to equilibrium with a temperature rise of 1 degree, you have to get ocean temperatures up by that amount. The heat capacity involved is massive. It’s pretty elementary physics that this is going to take a significant period of time before equilibrium is reached.

        Cheers — Chris
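The heat-capacity argument is easy to put rough numbers on. A back-of-envelope sketch; the layer depth, ocean fraction and imbalance are assumed round values, not measurements from the thread:

```python
rho = 1025.0     # kg m^-3, seawater density
cp = 3990.0      # J kg^-1 K^-1, seawater specific heat
depth = 700.0    # m, upper-ocean layer assumed to take up most of the heat
frac = 0.71      # fraction of Earth's surface that is ocean

# Effective heat capacity of that layer per square metre of Earth's surface
C = rho * cp * depth * frac              # roughly 2e9 J m^-2 K^-1

imbalance = 0.7                          # W m^-2, order of published estimates
seconds_per_year = 3.156e7
years_per_kelvin = C / (imbalance * seconds_per_year)
print(f"~{years_per_kelvin:.0f} years to warm that layer by 1 K")
```

This comes out on the order of many decades, which is the elementary-physics reason a year-or-so equilibration time is implausible; only the order of magnitude matters here, not the exact figure.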

      • “In describing the difference between “equilibrium sensitivity” and “transient response” you get into descriptions which sound very much like religious belief or dogma.”

        Manacker spreading FUD.

    • Thanks for that, Chris, but it leaves me none the wiser.

      For starters, when you refer to climate sensitivity yourself, do you mean the equilibrium or the transient variety?

      And in either case, do you have a one-sentence or one-paragraph definition of what you understand by “climate sensitivity?” In particular is it something that could in principle be measured in our lifetime, or are we stuck with calculating it based on our best understanding of how heat flows between the ocean and the atmosphere?

      So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

      • I thought that Hansen had this figured out back in 1985. The transient climate sensitivity is modulated by a fat-tail diffusion of heat into the oceans.
        http://theoilconundrum.blogspot.com/2012/01/thermal-diffusion-and-missing-heat.html
        Take a look at the last chart in particular.

        I think the big issue is that people too often use a knee-jerk approach to modeling thermal transients. They do stuff like force-fitting damped exponentials, whereas the more realistic approach should model a kind of diffusion.

        I am working on a much more comprehensive view of what I call dispersed diffusion by applying it to much smaller systems such as oxide growth.

      • I haven’t actually referred to it much here, I think; apart from a side comment or two on the equilibrium sensitivity when it’s been mentioned by others. The equilibrium sensitivity is the one that is used most often (by far) in blog posts like this one.

        There has been a recurring confusion popping up in this thread, which is that the changes in trend could be addressed somehow by thinking that estimates of sensitivity are too high. It doesn’t work like that, as could be seen from the definitions.

        For a one sentence definition of sensitivity, I’d propose:

        How much temperatures will rise in response to a doubling of CO2 concentration.

        This is the usual “equilibrium” definition. I use “doubling of CO2 concentration” because that is indeed the usual reference for forcing, and because it is a physically very well understood forcing. In other units, you could say “to restore an energy imbalance of 3.7 W/m^2”, which is what doubling CO2 gives you.

        Note that this is how much rise you get, not how rapidly the rise will occur.

        It’s not something to observe directly in our lifetime. It is calculated right now; but to limited accuracy. The calculation needs to give the whole planet response to the forcing; that is — a climate model. It is likely that climate models will continue to improve. I also suspect that within our lifetime (and I’m more than middle aged already) there will be enough data and theory accumulated to validate models sufficiently to have a substantially better hold on this value. At present, 2.0 to 4.5 degrees will have to do.
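        The numbers quoted here (3.7 W/m^2 per doubling, and the 2.0 to 4.5 degree range) fit together via the standard simplified forcing expression F = 5.35·ln(C/C0) of Myhre et al. (1998). A minimal sketch; the feedback parameters below are back-solved assumptions chosen only to bracket the quoted range, not values from the comment:

```python
import math

# Standard simplified CO2 forcing (Myhre et al. 1998): F = 5.35 * ln(C/C0) W/m^2
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)

f2x = co2_forcing(2.0, 1.0)        # forcing for a doubling of concentration
print(round(f2x, 2))               # ~3.71 W/m^2, the value quoted above

# Equilibrium warming = forcing / lambda, for a feedback parameter lambda
# (W/m^2 per deg C). The lambda values below are back-solved assumptions
# chosen only to bracket the 2.0-4.5 deg C range mentioned in the comment.
for lam in (1.85, 1.23, 0.82):
    print(round(f2x / lam, 1))     # about 2.0, 3.0 and 4.5 deg C
```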

        Furthermore, it’s important to constrain all the other forcings at work rather better because it isn’t just CO2 that’s driving changes.

        So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

        My perspective is that the problem is not that the definition is inadequate. The problem is that the calculations (the models) are still not good enough to be confident of a precise value. We don’t, for example, have an adequate understanding of the internal variability going on; and some of the other forcings (aerosols being a classic example) are much less well understood than the CO2 forcing.

      • Chris –

        Can you comment on James Hansen’s differentiation between climate sensitivity taking account of only fast feedbacks and that including slow feedbacks. He gives a ball-park 3 degrees for the former and six degrees for the latter. Is this widely accepted as reasonable?

      • For a one sentence definition of sensitivity, I’d propose:
        How much temperatures will rise in response to a doubling of CO2 concentration. … This is the usual “equilibrium” definition. … My perspective is that the problem is not that the definition is inadequate. The problem is that the calculations (the models) are still not good enough to be confident of a precise value.

        I submit that the definition is worse than “inadequate,” it is useless for forecasting the temperature in 2100. The definition assumes a transition between two equilibrium states. But we are nowhere near an equilibrium state now, and the odds of our being any closer to one in 2100 are zip assuming business as usual.

        Basing climate sensitivity on equilibrium states is about as useful as determining the cornering ability of a car by weighing it.

      • Anteros, I’m sorry I missed your question to me about Hansen’s thoughts on long term sensitivity.

        The distinction he draws between slow and fast feedbacks is fine and serves as a widely recognizable characterization of what he is describing.

        The value of about 3 for the standard “Charney” equilibrium sensitivity is also unexceptional; it’s just background to the discussion, referring to the current best available rough estimate.

        The longer-term sensitivity he means involves feedbacks that are slower than the time the ocean takes to come up to an approximate thermal equilibrium. They are a proposed gradual response to the new, higher temperature reached after the Earth comes into energy balance again.

        The value he proposes, of 6 degrees, on the other hand, is not generally accepted. Nor is it easily rejected or falsified, given the time scales. The testing of those ideas needs to be more indirect (which is okay in principle) but will need much better modeling. For my own part, I don’t think a single number makes a lot of sense; since the magnitude of the kinds of response being considered is unlikely to be even approximately linear, I think. But going into my reasons would be another topic; and I’m not really an expert anyway.

      • Vaughan, the equilibrium sensitivity is not used to project temperatures to 2100; for the reason you mention. We don’t expect 2100 to be in equilibrium.

        To see the IPCC projections to 2100, refer to AR4 technical summary section 5. In all cases, the projections are simply based on models.

        Some people, mostly non-experts I think who are misusing the concepts, may use sensitivity to project temperature in that way; but in the literature I think the major use of equilibrium sensitivity is as a diagnostic for comparing or characterizing models.

        For example, Hansen’s work in 1988, which has been discussed here, is universally recognized, in Hansen’s own retrospectives also, as “running hot”. The model has a sensitivity that is rather higher than the more advanced models in use today. That’s an example of a straightforward use of sensitivity as a model diagnostic.

        Cheers — Chris

      • Vaughan, the equilibrium sensitivity is not used to project temperatures to 2100; for the reason you mention. We don’t expect 2100 to be in equilibrium.

        Good, but then why did you pick equilibrium sensitivity as the kind to define in response to my request for a one-sentence definition of it? Your assessment of its utility seems to be quite consistent with my original complaint, which was

        So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

        Although you seemed at first to be taking objection to this (unless I misunderstood you), I haven’t seen anything from you or anyone else that’s inconsistent with that complaint.

        I think the major use of equilibrium sensitivity is as a diagnostic for comparing or characterizing models.

        That would seem to make equilibrium sensitivity only as useful as the models they serve.

        In terms of having anything useful to say about the likely temperature profile over the coming century, climate models seem to me to have too many weak links. As DR would put it, there are unknown knowns (we know what methane and feedbacks are from a physics standpoint but not their expected contributions to the climate), known unknowns (we understand long-term ocean oscillations quantitatively but can only speculate as to their basis in physics), and unknown unknowns (we understand neither the physics nor the expected future contributions of whatever brought about such climate-relevant phenomena as LIA, MWP, etc.).

        The known knowns (the Stefan-Boltzmann law, the HITRAN tables, Beer’s law, and lapse rate) are only giving us a quarter of the picture needed for climate models to tell us what CO2 has been doing to the temperature lately and what it’s likely to do to it in the coming decades.

        My bet is on modern climate history, which I believe offers us a much clearer crystal ball into our future than climate modeling. This is because equilibrium is a state while disequilibrium is a transition. Climate modeling focuses primarily on the state and treats the transition as secondary. This is backwards. The 20th and 21st centuries are jointly a transition between equilibrium states, which is what we should be studying if we expect to be able to say anything useful about the likely climate profile of the coming century. (I would further claim it is a smooth transition, but I have great difficulty picturing either side of the climate debate going along with that.)

        Paleoclimatology has its transitions too, but those were on such a different time scale as to make their coefficients of disequilibrium essentially irrelevant to the current transition.

      • Vaughan asks:

        Good, but then why did you pick equilibrium sensitivity as the kind to define in response to my request for a one-sentence definition of it?

        You asked which definition I used. That’s the one I use.

        This is because it’s the one being used and quoted most often in discussions like this. You and everyone else are consistently using ES numbers, and I’m giving the definition to help explain what that means, or why it SHOULDN’T be used for projections or to compare with trends.

        This isn’t to say this definition has no use at all; it is a useful definition for several reasons. But as I also said, I’ve NOT been using it in this discussion; I’ve been focused on trying to explain what is projected about trends and why. I’ve only mentioned sensitivity here a couple of times, in contexts where other people had already been taking numbers from the usual ES definition. Or in response to direct questions, like yours.

        Your assessment of its utility seems to be quite consistent with my original complaint, which was

        So far I have not read anything about climate sensitivity in the climate science literature that could be considered even close to what a physicist would call a definition adequate to resolve disagreements between reasonable physicists about what CO2 has been doing to the temperature lately.

        Although you seemed at first to be taking objection to this (unless I misunderstood you), I haven’t seen anything from you or anyone else that’s inconsistent with that complaint.

        I’m simply pointing out that you don’t read that in the literature because that isn’t what it’s used for.

        It shouldn’t be used that way here in discussion, either. I’ve jumped in here to help explain the concepts because I saw people trying to use it in ways that don’t make a lot of sense.

        The one way you will see it crop up in the literature, in this context, is as a diagnostic of models. If a model has sensitivity wrong, then it’s not likely to be reliable for showing what’s happening now, or over recent or coming decades. But that is as far as you should go; you don’t try to calculate responses on that scale from sensitivity directly; you use a model for that calculation.

        I quoted above one example of how it is used in the literature; another relevant one is the paper by Tsonis and Swanson (2009) which I quoted below. (this comment) It’s used correctly there also; not as a means to calculate multi-decadal variability, but as a diagnostic of the models that are used to calculate multi-decadal variability.

        Another appropriate way to use this sensitivity diagnostic is as a diagnostic of the Earth itself, where it stands as the measure of how much temperatures change over times where the Earth is roughly in equilibrium. Hence, for example, it’s an appropriate definition for looking at temperature changes involved in the ice age, or millennial scales within the Holocene. You need an independent estimate of the forcing and the temperature as well, of course.

        It’s an appropriate definition for looking to the changes well past 2100 as Earth settles into equilibrium again (by which I mean approximate radiative energy balance between solar input and thermal emission to space).

        Cheers — Chris

      • You and everyone else are consistently using ES numbers,

        Maybe others are but I most certainly am not. Where did I ever say I was using “climate sensitivity” to refer to the equilibrium concept? Except when used in conjunction with a GCM, ES is a meaningless concept for modern climate. To date I’ve been unable to see how either GCMs or ES help us predict the next half century of global land-sea temperature.

        I’m giving the definition to help explain what that means, or why it SHOULDN’T be used for projections or to compare with trends.

        Good. My original point was that it shouldn’t be used for those things. If I seemed to be making some other point my apology for being unclear.

        Since you only defined equilibrium sensitivity and not some version of transient climate response, does this mean the latter is less meaningful or relevant to you than the former?

        Another appropriate way to use this sensitivity diagnostic is as a diagnostic of the Earth itself, where it stands as the measure of how much temperatures change over times where the Earth is roughly in equilibrium.

        Now that I can agree with. What I don’t see is the relationship between an Earth that is roughly in equilibrium and one that is badly out of it. The former places far fewer demands on our understanding of the Earth’s energy budget than the latter. In my mind equilibrium climate sensitivity has little bearing on the influence of CO2 when Earth is as seriously out of equilibrium as at present.

        It’s an appropriate definition for looking to the changes well past 2100 as Earth settles into equilibrium again (by which I mean approximate radiative energy balance between solar input and thermal emission to space).

        Agreed. Actually my guess would be that by 2075 new technologies like inertial confinement fusion will have decreased CO2 emissions to the point where nature is drawing down more than we emit, so that CO2 will decrease for a while, starting well before 2100. The Archer-Schmidt view of CO2 hanging around for centuries seems based on a model of residence time having what I see as at least three problems: fallacious appeal to paleoclimate, irrelevance of average residence time per molecule, and neglect of disequilibrium coefficients. Neither Mark Jacobson nor I buy their pessimistic forecast, which Archer’s book The Long Thaw exaggerates to an absurd degree.

      • Maybe others are but I most certainly am not. Where did I ever say I was using “climate sensitivity” to refer to the equilibrium concept? Except when used in conjunction with a GCM, ES is a meaningless concept for modern climate. To date I’ve been unable to see how either GCMs or ES help us predict the next half century of global land-sea temperature.

        My apologies; I think I have not been approaching your question as you had intended. The remark of yours I was working from is the one I quoted above when I started out this subthread.

        Personally I operate at one level: I “try to read equilibrium sensitivity from a temperature trend” as you put it. Since I apparently need to learn a lot more before I can even follow discussions, if you could convince me you had a more reliable approach you would have my full attention!

        I took that at face value; as a question for me; and I have been trying to explain definitions of the concepts and how they are used — at a comparatively basic “level”. It wasn’t you, however, who wrote anything in this comment stream that was misusing the sensitivity in that way… so I may well have missed a bit of gentle sarcasm in your question. I’m really not trying to defend anything in particular here; so much as give the context in which any published references to sensitivity should be understood — whether one is skeptical of them or not.

        Good. My original point was that it shouldn’t be used for those things [projections or comparison with trends]. If I seemed to be making some other point my apology for being unclear.

        No problem.

        Since you only defined equilibrium sensitivity and not some version of transient climate response, does this mean the latter is less meaningful or relevant to you than the former?

        They are both equally meaningful, in the sense of both being carefully defined. Their relevance depends on the topic; and both are used appropriately in the literature where they are defined.

        The equilibrium concept has, I think, wider applicability. The transient response is probably more immediately relevant to projections over the coming century; and equilibrium sensitivity remains better as a model diagnostic (I think…). My own main interest is the basic background physics itself; but in discussions at blogs I try to engage making use of the concepts already being used in the discussion. And that is almost always “equilibrium sensitivity”. It’s a well-defined and widely used diagnostic, and when anyone in discussion simply says “sensitivity”, this is almost always what is meant.

        For what it is worth, my own view is that the models ARE useful for projections for this century, and that the wide confidence limits on those projections are a reasonable and cautious recognition that models are a long way from perfect.

        [snip some points of agreement]

        Agreed. Actually my guess would be that by 2075 new technologies like inertial confinement fusion will have decreased CO2 emissions to the point where nature is drawing down more than we emit, so that CO2 will decrease for a while, starting well before 2100. The Archer-Schmidt view of CO2 hanging around for centuries seems based on a model of residence time having what I see as at least three problems: fallacious appeal to paleoclimate, irrelevance of average residence time per molecule, and neglect of disequilibrium coefficients. Neither Mark Jacobson nor I buy their pessimistic forecast, which Archer’s book The Long Thaw exaggerates to an absurd degree.

        I haven’t read that one; but I’ll just comment briefly on the time scales for CO2. My understanding is that the conventional view (which shows up in pretty much every carbon-cycle model I’ve seen; the “Bern” model being a good representative) doesn’t use residence times per molecule. In fact, that’s the classic error by people who suggest that the conventionally understood persistence scales are too long; they look at the short residence times per molecule, which is quite a different thing when there are multi-way exchanges and multiple reservoirs.
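        Since the Bern-type pulse response comes up here, a compact version of it is a constant plus three exponentials. The coefficients below are the Bern carbon-cycle fit as quoted in IPCC AR4 (Table 2.14 footnote), taken here as an assumption for illustration; the point is the contrast between the few-year per-molecule residence time and the multi-century decay of an emitted pulse:

```python
import math

# Bern-style CO2 pulse response: fraction of an emitted pulse still airborne
# after t years. Coefficients as quoted in IPCC AR4 (Table 2.14 footnote);
# treated here as an illustrative assumption, not a derivation.
def airborne_fraction(t):
    return (0.217
            + 0.259 * math.exp(-t / 172.9)
            + 0.338 * math.exp(-t / 18.51)
            + 0.186 * math.exp(-t / 1.186))

for yr in (0, 10, 100, 1000):
    print(yr, round(airborne_fraction(yr), 2))  # roughly 1.0, 0.66, 0.36, 0.22

# Contrast with the short residence time of any individual molecule:
# exchange with ocean and biosphere swaps molecules within years, but the
# *excess* concentration decays over centuries, with ~22% effectively
# permanent in this fit.
```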

        As it turns out, Isaac Held’s blog has just put up a comment on this. It’s a very good blog for looking at the unsolved problems in climate science; though at a very high technical level; well above my level. Not for the faint of heart. Held is a quietly spoken careful scientist with a well-deserved top level reputation, who tends not to engage the usual denier/alarmist type exchanges. He looks at the problems that are driving conventional research. See: Cumulative emissions. It is dealing with longer terms which are impacted both by equilibrating of the carbon cycle and by changes in emission rates.

        Held concludes by recommending more emphasis on transient response rather than equilibrium sensitivity; the reasons are set out in the blog post, and in some of his other blogs. Well worth a read.

        Cheers — Chris

      • “As it turns out, Isaac Held’s blog has just put up a comment on this.”

        I appreciate that Held is looking at this, but he has taken a slightly naive approach to determining the response function. A diffusional response in such a situation is never a damped exponential. What you will find is that the response falls off with a square-root-of-time dependence (as 1/√t). So if a forcing function is a unit step, corresponding to a linear cumulative, the response will follow approximately a time^(1/2) profile. And if the forcing is a ramp, corresponding to a parabolic cumulative, the response will be time^(3/2).

        I am also puzzled by the fact that he doesn’t apply or explain the convolution operator, which is the recommended way to solve for the response with an arbitrary forcing function.
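        The scaling claims above can be checked numerically with exactly that convolution; this is a sketch in arbitrary units, not a calibrated model:

```python
import numpy as np

# Numerical check of the scaling claims, convolving a diffusive impulse
# response h(t) = 1/sqrt(pi*t) with a step and a ramp forcing
# (arbitrary units; an illustration, not a calibrated model).
dt = 0.01
t = np.arange(dt, 50, dt)
h = 1.0 / np.sqrt(np.pi * t)

step = np.ones_like(t)     # unit-step forcing
ramp = t.copy()            # ramp forcing

resp_step = np.convolve(h, step)[:len(t)] * dt
resp_ramp = np.convolve(h, ramp)[:len(t)] * dt

# Log-log slope over the later part of the record recovers the exponent
half = len(t) // 2
slope = lambda y: np.polyfit(np.log(t[half:]), np.log(y[half:]), 1)[0]
print(round(slope(resp_step), 2))   # ~0.5, i.e. t^(1/2) for a step
print(round(slope(resp_ramp), 2))   # ~1.5, i.e. t^(3/2) for a ramp
```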

        Coming from a semiconductor processing background, I see the entire planet’s surface as a planar diffusion problem. The only hiccup is that we have a dispersive profile with respect to varying diffusion coefficients and interface depth.

        Again, I like what Lubos said about these kinds of physics problems recently:

        “People including me (and especially the climate skeptics) often like to say how complicated the climate is, and so on. But we shouldn’t forget that in many contexts, the physical problem is rather simple, clean, and doable. I am confident that a proper physicist who studies this physical system – or another system – has to know these simplified situations very well. This is the real knowledge he or she builds upon. Starting with hopelessly complicated situations that can’t be solved simply isn’t the right scientific attitude. The right scientific attitude is to cover the “space of possible situations” by special cases which are solvable and whose physics you largely remember and by calculating the more complex intermediate problems by various detailed numerical and perturbative methods and interpolation.”

        Isaac Held has the right attitude in trying to simplify, but it still needs some fine tuning to get something that is representative of the real transient that we should observe. As Lubos said, it does not have to be a “hopelessly complicated” situation, and so if we can apply the right abstraction to the problem domain, we can make some progress.

  99. In science, when the observation does not match theory, we chuck the theory.

    Here is comparison of Anthropogenic Global Warming (AGW) theory and observation:

    IPCC AR4, 2007 => http://bit.ly/z7cOHe

    This comparison shows the observed global mean temperatures (GMT) are less than model projections if human CO2 emission were held constant at the 2000 level.

    In addition, there has not been any change in the climate as there has been only a single GMT pattern since the record began 160 years ago. This pattern can be clearly observed in the data from NASA and the University of East Anglia as shown in the following graph.

    http://bit.ly/Aei4Nd

    This pattern has a unique property of a warming trend of only 0.06 deg C per decade and an oscillation of 0.5 deg C every 30 years.

    This result shows, for 160 years, the GMT pattern (the climate) has not been affected by human CO2 emission, volcanoes and aerosols! These variables did not have an effect because the GMT patterns before and after the mid-20th century were nearly identical.
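    For concreteness, one literal reading of the claimed pattern (a 0.06 deg C/decade trend plus a 60-year oscillation swinging 0.5 deg C per 30-year half-cycle) can be written down as follows; this is purely an illustration of the claim, not an endorsement of it or a fit to any data set:

```python
import numpy as np

# A literal reading of the claimed pattern (purely an illustration of the
# claim, not a fit to data): a 0.06 C/decade linear trend plus a 60-year
# oscillation swinging 0.5 C between successive 30-year half-cycles.
years = np.arange(1880, 2011)
trend = 0.006 * (years - 1880)                            # 0.06 C per decade
cycle = 0.25 * np.sin(2 * np.pi * (years - 1880) / 60.0)  # +/- 0.25 C

pattern = trend + cycle
print(round(cycle.max() - cycle.min(), 2))  # 0.5: the stated swing
print(round(trend[-1], 2))                  # 0.78: trend rise over 130 years
```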

    As observations do not match AGW theory, chuck the AGW theory!

    • And ruin the pub debate? We’d have to go back to debating UFOs.

      • You can debate the chucking of the AGW theory in the pub. Also which theory should be chucked next.

    • Girma, that image is a lot better.

      Still some problems and I’ll show how to fix it up shortly, but mostly I want to acknowledge straight up that this is a big step in the right direction.

      Cheers — Chris

    • This result shows, for 160 years, the GMT pattern (the climate) has not been affected by human CO2 emission, volcanoes and aerosols! These variables did not have effect because the GMT pattern before and after mid-20th century were nearly identical.

      Very creative, Girma. Bravo.

      You and Arfur Bryant have been this blog’s leading resident experts on methods for hiding the curvature in HADCRUT3. The method you’ve both relied on in the past was to fit a trend line to HADCRUT3 and then point out that the trend line was straight. Very straightforward reasoning, so to speak. Max Manacker uses essentially the same method for the log of total (as opposed to anthropogenic) CO2 to prove that its CAGR is a mere 0.5%, which exploits the fact that the substantial natural component of CO2 is stationary thereby hiding the 2.2% CAGR of anthropogenic CO2, our contribution.

      In the meantime this blog seems to have grown in sophistication during its year or so of existence. Arfur’s response has been to take his argument elsewhere, presumably to less sophisticated blogs. 19th century vendors of home remedies maintained market share (not to mention their liberty) in that way, always keeping one town ahead of the law and angry citizens.

      Your response would seem to have been the more professional one of keeping pace with your audience’s sophistication. Putting science ahead of marketing is most commendable.

      As can be seen at the Wikipedia article Curvature of a graph, the signed curvature k is given by k = y″/(1 + y′²)^{3/2}, where y′ and y″ are the first and second derivatives of the graph. When the graph is compressed horizontally by a factor of √s (so the slope becomes √s·y′), this becomes k = sy″/(1 + s·y′²)^{3/2}. For very large s this simplifies to k = y″/(√s·y′³), which tends to zero as s tends to infinity, while for very small s it becomes k = sy″, which again tends to zero, but this time as s tends to zero instead of infinity.

      Hence curvature can be hidden by making s either very large or very small. The only way to see curvature in a graph is to keep s in the general neighborhood of 1/y′², as in this graph, which fits one trend line to each of the two equal halves of HADCRUT3.
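      The scale dependence described above can be checked numerically; this sketch evaluates the scaled-curvature expression k(s) = s·y″/(1 + s·y′²)^{3/2} at a single point, where the values y′ = 2 and y″ = 2 are arbitrary assumptions:

```python
# Scaled signed curvature at one point of a graph: k(s) = s*y''/(1 + s*y'^2)^1.5
# The point values y' = 2, y'' = 2 are arbitrary illustrative assumptions.
def k(s, yp=2.0, ypp=2.0):
    return s * ypp / (1 + s * yp ** 2) ** 1.5

for s in (0.001, 0.5, 1e6):
    print(s, k(s))
# Curvature vanishes for both tiny and huge s, and peaks when s is near
# 2/y'^2 (here s = 0.5), which is why axis scaling can hide or reveal
# the bend in a temperature series.
```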

      Your technique of decreasing the scale s by using offsets at ±2 reduces the curvature considerably as can be seen here. But while much less pronounced the curvature is still visible. How to fix that?

      Easy. Don’t draw anything that would give away the curvature. Instead throw in three copies of the old Girma-Bryant trend line in this way. If one is straight, three must be even straighter! Seeing is believing, after all. That’s how Chief Hydrologist argues: “open your eyes” as he puts it.

      I would give your argument a 7 on a scale of 1 to 10 for creative ways of proving climate scientists are monkeys.

      As I’ve said before, in my judgement Harry Huffman still deserves an 8 for his heavily technical way of proving that CO2 is not the reason why the surface of Venus is so hot. Proving zero curvature by appealing to the straightness of the trend line is by comparison just a cheap trick with no serious underlying science that’s not worth more than 5.

      Don Easterbrook’s creative technique of hiding the entire rise of the 20th century under the right-most vertical grid line of a graph covering hundreds of centuries is a neat bit of sleight-of-hand that seems to work on 98% of visitors to YouTube, justifying a 6.

      But because curvature k depends in a more delicate way on scale s, tending to zero for both large and small s, I would say the subtlety of that dependency fully justifies a 7.

      Well done, Girma.

      • Chief Hydrologist

        I don’t think that’s so. I argued with reference to the NAS – you argued with a link to Chairman Pratt. I suggested this was metacognitive asymmetry caused by you talking through your arse.

      • CH,

        I yield at once, with humbled AAS,
        Because with all my faults I love our NAS.

        Yet I’m a member of the common throng
        Who all agree you’ve got it wrong.

        Naive belief in what you see must stop.
        The truth has been replaced by Photoshop.

      • Chief Hydrologist

        Vaughan,

        Just to help you out – old buddy – and as physicists can’t dance for spit – I recast it in iambic pentameter.

        I yield at once, with humbled AAS
        I crave indulgence, love our NAS
        We all agree you’ve got it wrong,
        That’s not the way to wear a thong.
        Belief in what you see must stop,
        Our truth we get from photoshop.

        If you want to be a poet – you need a bottle of bourbon, a pack of cigarettes and a trailer park. You then write maudlin love sonnets about good looking but lonesome cowboys and strong women.

        Robert I Ellison
        Chief Hydrologist

      • Chief Hydrologist

        Of course I really meant iambic tetrameter – mortified

      • Great re-rendering, CH. Two very minor picky points.

        You were ok with my humdrum aabbcc rhyming scheme but for some ridiculous reason (to which however I’ve no desire to be disloyal) you felt obliged to replace my unconventional tpptpp scansion scheme with the even more humdrum tttttt scansion scheme beloved of fifth grade teachers. To your rhythmic da-da da-da da-da da-da I can only say tpptpp.

        Your replacement of “throng” by “thong” is more Jake Gyllenhaal than Johnny Depp. Google for
        “no desire to be disloyal” “common throng”

      • Forgot two things: this link (apropos of peer review), and that “ridiculous reason” is apropos of the current month. :)

      • Chief Hydrologist

        Like all these things you need to walk first before you can gambol. BTW – how’s that third law going? It sure is a doozy but I’m certain you can get the hang of it.

        I am lying back with laptop balanced and Miles Davis in the headset – and am reminded again of the endless opportunities for rhythmic variation.

        In truth however – your subject was mundane and your scansion a mess. I went back to the basics and added a little much needed frisson. Although the latter was a mistake as now I am haunted by nauseating thoughts of Pratt in a thong.

        Anytime I can help Pratt old buddy – happy to oblige.

  100. Dikran:
    “Is there anybody here who can admit that when the stated uncertainty of the projection is considered Prof. Curry’s assertion is incorrect?”

    I think you’ve got your answer. Telling, isn’t it?

    Markus Fitzhenry:
    “Ah come on Dikran marsupial, any opposing view is a blasphemy over on the RC blog. ”

    And that’s just false. On almost any thread there, you can see the RC mods swatting down the same zombie arguments you guys keep putting up over and over. Just ask Dan H. Hint: their replies are in brackets.

    • Dikran Marsupial

      Yes, it is rather telling, isn’t it? Plenty of ad-bloginems, and bluster, but no attempt to address the substance of the argument, and no admission that Prof. Curry’s assertion is factually incorrect. :-(

    • Steve,
      Funny but it is you AGW believers who are using the same retread talking points. You can pretend to swat down skeptics all you want, but reality does not give a fig how strong your faith is.

  101. Louise

    Two factors essentially negate your logic:

    – IPCC has been clear in its wording – no “coffee pauses” were postulated, but a clear warming of 0.2C per decade was projected by the models for “the next two decades” in AR4 (and a warming of 0.15C to 0.3C per decade in the previous TAR)
    – In AR4 Ch.10, Figure 10.4 and Table 10.5, IPCC show us how the projected warming of the early decades ties into the longer-term forecast for the entire century, IOW the warming of the early decades is an integral part of the “entire postulated journey”.

    The projected warming did not occur (subject of this blog).

    Ergo: The models failed.

    Ergo: There is no reason to assume that they will not fail for the longer-term forecast.

    Quite simple, actually.

    Max

    • Max

      This is what the IPCC projected in black and white!

      http://bit.ly/zA0a2j

      It is a global mean temperature (GMT) anomaly of 1 deg C by 2030, in twenty years’ time.

      Will the GMT be 1 deg C by 2030?

      Will the GMT still be about 0.4 deg C by 2030?

      There is nowhere to hide.

      The debate will be resolved soon.

      • Girma

        I agree with you that

        The debate will be resolved soon.

        Except, when I read the logic used by Chris, for example, I’d have to add:

        unless the goalposts get moved to delay the day of reckoning

        Plus there is always the rationalization that “my prediction was correct except for…” (add any imagined or postulated event that was unforeseen, such as Chinese pollution, ENSO variability, solar cycle 24 inactivity, global recession, increased price of oil on Rotterdam spot market, etc., etc.).

        This phenomenon is described in Nassim Taleb’s The Black Swan as one of the rationalizations used by forecasters when their predictions fail to be correct.

        The other (too ludicrous to have been mentioned by Taleb) is: “well, we may have missed the 20-year projection due to (add in excuses cited above), but our long term projection still stands, because these unforeseen factors cancel out over the long term”.

        Huh?

        Max

      • Girma, you are on the right track… except that the graph you present shows the prediction as a range from 0.7 to 1.0; and it can be tested at 2025, which is less than twenty years away. That’s the red, green and blue bars on the right.

        I’m not moving any goal posts. I’m going by what is written down in the report.

        The other prediction, which I am not rewording either, is for about 0.2 C/decade trend over the next two decades. That’s the range 2005 to 2025.

        People who accuse me of moving goal posts are not reading my comments very clearly. I’m describing the goal posts expressed in AR4.

        On my own behalf, I’m expecting the twenty years from 1997 (extending the 15 years Leake was talking about by another five, up to 2017) to show a trend a bit below the 0.2 value; somewhere from 0.15 to 0.20 most likely.

        For the twenty years 2005 to 2025 it could easily be up well over 0.2 again, or still back at 0.15 or so. I’d be surprised to see the 20 year trend go much lower than that.

        But yes, there’s nowhere to hide, and no-one is hiding or moving goal posts. You’ve got that graph as a record, and it’s a perfectly good expression of a range of 0.7 to 1.0 by 2025 (starting out from about 0.16 in 1990, so a rise of anything from 0.55 to 0.85 degrees over 35 years, which corresponds to a trend from 1990 of between 0.16 and 0.24 C/decade).

        One point to emphasize, in case anyone mixes up what trend means. The prediction concerns the mean trend; not the particular anomaly value you get on the year 2025. Year to year anomalies vary by 0.2 in a single year. The trend, over 20 years, however, is more stable and that is what is predicted.

        Cheers — Chris
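        Chris’s closing point, that annual anomalies swing by ~0.2 while a 20-year trend is comparatively stable, can be illustrated with synthetic data. A minimal sketch assuming a made-up underlying trend of 0.02 C/yr plus 0.1 C of annual noise (illustrative numbers only, not any of the datasets discussed in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
# assumed for illustration: 0.02 C/yr underlying trend plus 0.1 C annual noise
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# year-to-year changes in the anomaly swing widely...
annual_swings = np.diff(temps)

# ...while the OLS trend over each sliding 20-year window stays near 0.02 C/yr
trends = [np.polyfit(years[i:i + 20], temps[i:i + 20], 1)[0]
          for i in range(years.size - 19)]
```

        With these made-up numbers the largest single-year swing is far bigger than the spread of the 20-year trends, which is the sense in which the trend, not the yearly anomaly, is the predictable quantity.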

  102. Chief Hydrologist

    There is a need to evaluate data sources. Since 2003, the SORCE project has been the best available; TSI is peaking in the Schwabe cycle.
    http://lasp.colorado.edu/sorce/total_solar_irradiance_plots/images/tim_level3_tsi_24hour_640x480.png

    ‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’

    Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records (Verdon and Franks, 2006)

    We are definitively in a cool Pacific decadal phase.

    Here is the SST chart for 2 days ago – http://www.osdpd.noaa.gov/data/sst/anomaly/2012/anomnight.2.9.2012.gif
    – it shows the typical V of the cool mode in the central Pacific.

    Here is the extended Klaus Wolter MEI – http://www.esrl.noaa.gov/psd/enso/mei.ext/ – if you look carefully you can see important and well documented change points in the mid 1940s, the late 1970s and 1998-2001.

    These I believe are diagnostic of type 3 dynamical complexity.

  103. Chris Ho-stuart

    Manaker, the term “degrees per decade” is a unit; not a window or timeframe.

    The time frame given there is “for the next two decades”.

    You are taking it here as settled that the prediction means that each of the next two decades will show a trend of about 0.2 C per decade. You don’t even acknowledge that this has been the point of argument over the whole discussion.

    http://bit.ly/xR9qX6

    Here is an example of how a trend “for a decade” is related to the trend “for two decades.”
    http://bit.ly/yAP8Yu

    For 1990 to 2000 => 0.25 deg C per decade (Based on 10 years)

    For 2000 to 2010 => 0.03 deg C per decade (Based on 10 years)

    For 1990 to 2010 => 0.16 deg C per decade (Based on 20 years)

    This example shows that the trend based on the 20-year data appears to be approximated by the average of the two trend values based on the 10-year data.

    Therefore, approximately,

    Trend for 1990 to 2010 = ((Trend for 1990 to 2000) + (Trend for 2000 to 2010))/2 = (0.25 + 0.03)/2 = 0.14 deg C per decade, which is close to the directly computed value above of 0.16 deg C per decade.

    Now, in order to get IPCC’s 0.2 deg C per decade for the period from 2000 to 2020, we have,

    Trend for 2000 to 2020 = ((Trend for 2000 to 2010) + (Trend for 2010 to 2020))/2
    0.2 = (0.03 + Trend for 2010 to 2020)/2
    0.4 = 0.03 + Trend for 2010 to 2020
    Trend for 2010 to 2020 = 0.37 deg C per decade.

    Chris, for the IPCC’s 0.2 deg C for 2000 to 2020 to be realized, you need a global warming rate of about 0.37 deg C per decade for the period 2010 to 2020!

    Is it realistic to expect the global warming rate of 0.03 deg C per decade for the period 2000 to 2010 to jump to 0.37 deg C per decade in the period 2010 to 2020?

    It is extremely unlikely!
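    Girma’s required-rate algebra can be written out in a few lines of Python. This is a sketch of the comment’s own averaging assumption (which other commenters in the thread dispute), taking the quoted 0.03 deg C per decade figure as given:

```python
# Under the approximate rule
#   trend_20yr = (trend_first_decade + trend_second_decade) / 2,
# solve for the second-decade rate needed to reach a given 20-year target.
def required_second_decade_rate(target_20yr, first_decade_rate):
    # target = (first + second) / 2  =>  second = 2 * target - first
    return 2.0 * target_20yr - first_decade_rate

# IPCC's ~0.2 C/decade for 2000-2020, with 0.03 C/decade quoted for 2000-2010:
needed = required_second_decade_rate(0.2, 0.03)   # about 0.37 C/decade
```

    The 0.37 figure follows mechanically from the averaging rule; whether the rule itself is valid is exactly what is argued over in the replies.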

    • Why is it extremely unlikely?

      Because in the temperature record, in a 10 year period, a global warming rate of 0.37 deg C per decade was never observed.

      The observed maximum global warming rate was 0.25 deg C per decade for the period 1990 to 2000.

      As a result, the MAXIMUM possible global warming rate for the period 2000 to 2020 is going to be

      = (0.03 + 0.25)/2 = 0.14 deg C per decade.

      • Girma, the trend over 20 years is not equal to the average of the trend over the first 10, and the trend over the second 10.

        Here’s an example to consider. I’ve calculated using HadCrut3 monthly data.

        The trend from 1997-2006 inclusive was 0.094 C/decade
        The trend from 1987-1996 inclusive was -0.0065 C/decade

        The trend from 1987-2006 inclusive was 0.198 C/decade

        If this violates your intuition, then your intuition about trend needs a bit of help. You’ve been using “Wood for trees”. Here’s a graphical representation: http://bit.ly/xFJErP

        If you look at this, it will probably help you see what’s wrong with your calculation.

        Let me add… full credit to you for going to the data and trying to analyze it yourself. That’s something to encourage. It’s a good way to learn more about data and how to analyze it. A few mistakes along the way is normal and not being afraid to keep trying is how you’ll keep improving.

        Do note that it’s also going to be important to keep up some book study on the methods you are using as well; learning more about regression and how it works from text book or tutorial web page. Best of luck with it.

        Cheers — Chris
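        Chris’s warning has a stark synthetic counterexample: a series that is flat for ten “years”, steps up, and is flat for another ten. Both 10-year slopes are zero, yet the 20-year slope is strongly positive (made-up numbers, not the HadCrut3 figures quoted above):

```python
import numpy as np

t = np.arange(20.0)                       # twenty "years"
y = np.where(t < 10, 0.0, 1.0)            # flat decade, step up, flat decade

slope_first = np.polyfit(t[:10], y[:10], 1)[0]    # slope of first decade: 0
slope_second = np.polyfit(t[10:], y[10:], 1)[0]   # slope of second decade: 0
slope_full = np.polyfit(t, y, 1)[0]               # about 0.075 per year, not 0
```

        Averaging the two decade slopes would predict zero; the actual 20-year OLS slope is about 0.075 per time step, driven entirely by the jump in level between the two halves.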

      • PS. Strongest trend in HadCrut3 over a 10 year window was from 1974 through 1983 inclusive, at 0.42 C per decade. (Use 1974-1984 with Wood for trees).

      • Chris

        Okay, given 0.03 deg C per decade warming rate for the period 2000 to 2010, what warming rate for the period 2010 to 2020 gives you a warming rate of 0.2 deg C per decade for the period from 2000 to 2020?

        It is easy to find mistakes, but hard to find solutions. Please give me your estimate.

        I have given the maximum possible warming rate for the period 2000 to 2020 to be about 0.14 deg C per decade.

      • Girma, as I told you, you can’t give a warming rate for the next 10 years that would let you infer the warming rate for 20 years.

        It’s mathematically possible to have a NEGATIVE warming rate for the next ten years, and still get the 20 year trend well over 0.2.

        Physically, that’s unlikely; but not nearly as unlikely as you think. What it would take is another big El Nino event (analogous to 1998) that turns over the ocean waters somewhat, followed by another lull, but at a higher temperature. You can see that there’s a kind of “step” change around the huge 1998 El Nino. That’s not a co-incidence, I think. It has to do with intermittent transport of heat into the deep ocean. The details of heat flow into the ocean are much harder to measure than surface temperature, and much less well understood. It’s one of the sources of quasi-periodic internal variability.

        Be that as it may, it would be much too much of a co-incidence to get that showing up at just the right time. As I have already said, my bet is on the 20 year trend from 2000-2020 to be at the low end of projections. Below 0.20; but unlikely to be less than 0.15.

      • Chief Hydrologist

        So here is the trend from 1945 to 2000 and 2000 to 2012.

        Now there is a reason for that, and it is the same one as in this peer-reviewed study from Tsonis and colleagues – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=MONOTONIC.gif – it captures at least the decadal modes of natural variability.

        Part of this shows up as the PDO – http://jisao.washington.edu/pdo/

        Part of it shows up in the intensity and frequency of ENSO – La Nina dominant to 1976, El Nino dominant to 1998 and La Nina again since.

        http://www.esrl.noaa.gov/psd/enso/mei/

        It all shows up as a big blue V in the central Pacific – http://www.osdpd.noaa.gov/data/sst/anomaly/2012/anomnight.2.9.2012.gif

        So really – if you persist on playing stupid games and being so unbearably patronising – at the very least try for a broad understanding of an overwhelmingly complex system.

        Robert I Ellison
        Chief Hydrologist

      • PS. I’ve said repeatedly that 10 year trends are not predictable. So asking me for my 10 year prediction is not really sensible. I am, however, willing to take a stab at the 20 year interval 2000-2020.

        This is going to require a short term upswing sometime soon. Not necessarily a 10 year up turn, and not necessarily right away. Short term up and down swings have been going on all through the record. It looks plausible for a short term upswing to show up by 2014, which is not far away now; though we are still in a La Nina at the moment. We’ll see.

      • I think the decade 2011-2020 will be more than 0.15 degrees warmer than 2001-2010 which was a soft target with a long solar minimum to end it.

      • Chief Hydrologist; I’m not sure what understanding you are trying to give me here. I can indeed stand to learn more about PDO and so on of course; but I’m already familiar with the kind of summary you give above.

        I’ll take you more seriously about being “patronizing” if you can show how to address basic mathematical errors without coming across as knowing more than the person you are correcting. I mean, if my comments on trend calculation are patronizing, aren’t your comments on ENSO to me patronizing also?

        I’m not complaining; I think it would be fine to see something more substantive and informative of how ENSO impacts surface temperatures; and I’m not the person to do that well. You might be, and I’ll never take offense at someone who knows more than I do on a particular aspect of science seriously trying to help me improve in a constructive way. That’s what I’m trying to do also.

      • The trend over 20 years is APPROXIMATELY “equal to the average of the trend over the first 10, and the trend over the second 10.”

        But there is a condition that must be satisfied: there must not be a flip in the global mean temperature profile within the range considered.

        Here are my examples.

        http://bit.ly/wabn1t

        EXAMPLE 1

        For 1920 to 1930 => 0.06 deg C per decade (Based on 10 years)

        For 1930 to 1940 => 0.22 deg C per decade (Based on 10 years)

        For 1920 to 1940 => 0.16 deg C per decade (Based on 20 years)

        Approximate estimate for 1920 to 1940 => (0.06 + 0.22)/2 = 0.14 deg C per decade, which is a good estimate.

        EXAMPLE 2

        For 1980 to 1990 => 0.07 deg C per decade (Based on 10 years)

        For 1990 to 2000 => 0.25 deg C per decade (Based on 10 years)

        For 1980 to 2000 => 0.16 deg C per decade (Based on 20 years)

        Approximate estimate for 1980 to 2000 => (0.07 + 0.25)/2 = 0.16 deg C per decade, which is an exact estimate!

      • Complexity is overrated, and one can apply simple math to model seemingly complex systems:
        http://theoilconundrum.blogspot.com/2012/02/wind-speeds-of-world.html

        Chief, This is what happens when you spend time actually understanding the empirical data. You end up moving the yardstick which measures scientific progress.

      • Chief Hydrologist

        Chris, the only message is that any trend, to be meaningful, must encompass the decadal variability that we are aware of; thus anything less than 65 years is nonsense. You play with decades – 20 years or less. It is simply not meaningful.

        ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

        There are a couple of articles here by me – they might contain one or two things you are not aware of.

        http://www.sciencefile.org/system/content/item/2297-how-the-pacific-ocean-influences-global-climate-–a-review-of-the-physical-evidence

        Robert I Ellison
        Chief Hydrologist

      • Girma says:

        The trend over 20 years is APPROXIMATELY “equal to the average of the trend over the first 10, and the trend over the second 10.”

        No, it isn’t. It really isn’t. Look again at the example I gave you last time.

        Here it is again:

        The trend from 1997-2006 inclusive was 0.094 C/decade
        The trend from 1987-1996 inclusive was -0.0065 C/decade
        The trend from 1987-2006 inclusive was 0.198 C/decade

        0.198 is nothing even CLOSE to the average of 0.094 and -0.0065

        Here again is the plot to show it pictorially. http://bit.ly/xFJErP

      • Chief Hydrologist, thanks for the links. I’ll have a look.

        It’s indeed true that I’m not up on the details of quasi-periodic variability.

        I’m not just a simple apologist for the IPCC, despite what some people seem to think. I’m fine with them being wrong about things; but what this thread has done mostly is misunderstand and distort what was actually claimed.

        The IPCC made a claim for a 20 year window. You are suggesting that they are wrong to look at such a short window. That could well be the case. I did look at the Tsonis et al paper briefly when it came out, and also was helped a lot by the discussions provided by his co-author Kyle Swanson at a guest post in realclimate. Ray Pierrehumbert (who invited the post) also gave very useful comments on it. But that’s about it; I’ve not studied it any further and couldn’t describe it well now.

        I don’t think I agree with you about the need for 65 year windows, but that’s another debate. Swanson — if I read him correctly — seems to think you can get by with shorter windows as long as they don’t encompass anomalous jumps. I’m reading this paragraph (from Swanson’s blog post):

        Now, anomalous behavior is always in the eye of the beholder. However, the jump in temperature between 1997 and 1998 in this record certainly appears to pass the “smell test” (better than 3 standard deviations of interannual variability) for something out of the ordinary. Nor is this behavior dependent on the underlying time interval chosen, as the same basic picture emerges for any starting time up until the 1980′s, provided you look at locations that have continuous coverage over your interval. Again, as the temperature anomaly associated with this jump dissipates, we hypothesize that the climate system will return to its signal as defined by its pre-1998 behavior in roughly 2020 and resume warming.

        If Swanson is right, then the lull has almost ten years to go. That would invalidate the IPCC prediction. It also proposes the appropriate correction: not a change to the underlying trend, but a better understanding of the variability around it… something I know is an open research problem.

        Suffice to say, if you are right, that will become apparent as we see longer lulls than expected according to AR4.

        What I’m really trying to do here is not prove the IPCC correct, but address confusions in what they predicted and how to test it. The prediction is out there, and if it fails, we learn something.

      • Chief Hydrologist

        Chris,

        Swanson’s RC post is another way of looking at it. The 1976/1977 and 1998/2001 events are extreme events associated with chaotic bifurcation.

        ‘We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings. We present a generic phase diagram to explain the generation of dragon-kings and document their presence in six different examples (distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures in humans and in model animals, distribution of the earthquake energies). We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.’ http://arxiv.org/abs/0907.4290

        So the idea is of course to exclude these points to arrive at a residual trend. It is again not the 0.17/decade trend commonly discussed.

        The Pacific phases seem stable for 20 to 40 years as standing spatio-temporal waves in the Earth’s climate system. They can be seen in hydrology, oceanography, climatology and biology. They have an influence on energy dynamics through clouds.

        Swanson is right and we have a decade or three of moderate warming if not cooling.

  104. Chris Ho-Stuart

    The trend over 20 years is APPROXIMATELY “equal to the average of the trend over the first 10, and the trend over the second 10.”

    But there is a condition that must be satisfied: there must not be a flip in the global mean temperature profile within the range considered.

    Here are my examples.

    http://bit.ly/wabn1t

    EXAMPLE 1

    For 1920 to 1930 => 0.06 deg C per decade (Based on 10 years)

    For 1930 to 1940 => 0.22 deg C per decade (Based on 10 years)

    For 1920 to 1940 => 0.16 deg C per decade (Based on 20 years)

    Approximate estimate for 1920 to 1940 => (0.06 + 0.22)/2 = 0.14 deg C per decade, which is a good estimate.

    EXAMPLE 2

    For 1980 to 1990 => 0.07 deg C per decade (Based on 10 years)

    For 1990 to 2000 => 0.25 deg C per decade (Based on 10 years)

    For 1980 to 2000 => 0.16 deg C per decade (Based on 20 years)

    Approximate estimate for 1980 to 2000 => (0.07 + 0.25)/2 = 0.16 deg C per decade, which is an exact estimate!

    • Chris, for the IPCC’s 0.2 deg C for 2000 to 2020 to be realized, you need a global warming rate of about 0.37 deg C per decade for the period 2010 to 2020!

      • Girma,

        I’m understanding where you’re coming from with this, but Chris has a point. The period 2010 to 2020 does not have to have a warming rate of 0.37 deg C per decade. It can be anything including a negative number and still produce the 0.2 deg C per decade for the whole 20 years…

        Of course – and as Chris said – it is very unlikely to be negative, but it has no need to be anywhere near 0.37. It doesn’t work like that!

      • Anteros

        You’re going to have to run me through your arithmetic.

        Here’s mine:

        Let’s say IPCC projected warming of 0.2C per decade for the first decades of the new century. This is a projection of 0.4°C warming over the first two decades.

        The 11-year period 2001-2011 showed cooling of, let’s say 0.1C.

        The 9-year period 2012-2020 must now show warming of:

        (20 × 0.02 + 0.1)/9 = 0.5/9 ≈ 0.0556°C per year, or 0.556°C per decade, in order for the 20-year projection to have been correct.

        Right?

        I personally believe this will not occur. It appears that the next 9 years may even show continued cooling, but who knows? I just do not think that it is reasonable to assume that warming will resume at a rate that is three times the peak rate seen in the 1990s.

        And that is what will have to happen for the IPCC projection for the first two decades to be correct.

        Max

  105. Chris

    Since you are having problems understanding what is really going on compared to what IPCC projected should happen, here is a picture.
    http://farm8.staticflickr.com/7065/6855308561_4a4c435d4e_b.jpg

    (A picture is worth a thousand words, they say.)

    Max

    • Manacker, you again show a picture over a small ten year window.

      The IPCC projected a value over a window twice that size. Given the nature of short term variability, the ten year window really doesn’t give you a good basis for expecting what the enclosing 20 year window will do.

      This IPCC projection for the 2 decades may well be wrong; we’ll see. I won’t shy away in the least from it being wrong if that turns out to be the case. But to look at the projection, you are going to have to look at the 2 decades… because that is explicit in what the projection says.

      • Chris

        My last post responds to your point about the “small ten year window”.

        You’re right. These “blips” we are discussing are meaningless compared to long-term trends.

        The topic of this thread was the recent lack of warming and whether or not this falsified the IPCC climate sensitivity assumptions.

        The first decade of the new century showed slight cooling, where IPCC had projected warming of 0.2C per decade.

        As Girma has noted, this raises serious questions regarding the IPCC models’ ability to project future temperatures.

        A normal observer would say, “if they can’t get one decade right, why are we to believe that they can get several decades or even centuries right?”, and this is a very logical question, which is difficult to answer.

        The general conclusion is that it points to some real problems with the IPCC assumptions if it continues.

        It’s sort of like watching a dog race. The gates are opened and the favorite dog (which you have bet on) starts off by running in the wrong direction.

        OK, the race is only one tenth finished, but your dog (called “actual”) is going to have a hard time catching up with the “projection” dog, who is already one-tenth down the track.

        Will he do it?

        Or will he continue running in the wrong direction – or just stop?

        Who knows?

        Max

        PS I don’t have “a dog in this race” – I’m just watching; but I’ve become skeptical that your dog can win.

      • @max, @chris h-s

        ‘A normal observer would say, “if they can’t get one decade right, why are we to believe that they can get several decades or even centuries right?”, and this is a very logical question, which is difficult to answer’

        And does the fact that nobody even tries to answer it other than with hand-waving lead either of you to conclude anything?

        Saying ‘gosh that’s a tough question’ but not answering it does not make it go away. Why are alarm bells not ringing all over modelling land, rather than everyone wallowing in complacency that it’ll all be all right at some distant future time, when those of us who aren’t already dead probably won’t have the mental faculties to understand anyway?

        So – please try to answer. If you can’t forecast for 10 or 12 or 17 or whatever the number du jour is, why should we believe your predictions for 20 or 40 or 100 years out? If a racing tipster fails to pick the winner for ten races on the trot, why should we give any credence to his 11th prediction?

        And ‘Trust us, We’re Climatologists’ is not a satisfactory answer.

    • Chris

      Let me add that the Chief is 100% correct in saying that all this talk about 10 or 20-year trends is actually meaningless, since nothing under 65 years or so really means anything.

      I’d say that even 65 years (1946-2011, with linear warming trend of 0.1°C per decade) is probably too short to tell us much, in view of all the natural oscillations.

      We have a 161 year record (HadCRUT3), warts and all, that tells us the warming was around 0.7°C over the entire period, or a linear warming rate of between 0.04 and 0.05°C per decade, so this is probably more meaningful than the 65-year “blip” (or the even less meaningful most recent 30-year “blip”, 1976-2005, used by IPCC to demonstrate AGW).

      Think loooooong-term, Chris.

      Max

      • Max, thanks for the exchange, and the more friendly approach above.

        I’m fine with this as a summary (and tell me if you agree with this summary or not!)

        (1) We both agree that 10 year windows don’t actually say much about the longer term. I also claim that the IPCC recognizes this as well, and is not predicting 10 year windows; but that’s only my understanding which apparently is still disputed by some folks.

        (2) We both agree that the IPCC is expecting 20 year windows to say something useful about the longer term. We both agree that this is a prediction that can be tested against data. The IPCC says “about” 0.2 C/decade; I take that as being 0.15 to 0.25 based on the graph we’ve seen many times now; but I speak for myself there.

        (3) Robert I Ellison proposes that 65 year windows are needed to show the underlying trend. I think you agree? My inclination is to think that overstates the case. The length of the window needed to smooth out quasi-periodic internal variability is not only based on the frequency of those changes, but their amplitude as well. It seems to me that the underlying forcing is strong enough to show up on shorter windows even given a longer term quasi-period variability. So I guess this point 3 is where we differ most?

        I’ll continue to keep an eye out for developments on that score, and I don’t mind at all if the IPCC turns out to have been wrong about the 20 year window.

        Cheers — Chris

      • Chris Ho-Stuart

        Thanks for your response

        We agree

        1. That 10-year windows do not mean much. [But I interpret the IPCC statement, “For the next two decades, a warming of 0.2°C per decade is projected”, to mean exactly what it says, i.e. each of the next two decades is projected to warm at a rate of 0.2°C per decade.]

        2. That a 20-year period will be more meaningful than a 10-year period.

        3. That a 65-year period, as proposed by the Chief, is even more meaningful.

        I would add my personal opinion that a 160-year period is even more meaningful than the ~30-year period starting in 1976 cited by IPCC in AR4 (Ch.3, p.240):

        The 1976 divide is the date of a widely acknowledged ‘climate shift’ (e.g. Trenberth, 1990) and seems to mark a time (see Chapter 9) when global mean temperature began a discernable upward trend that has been at least partly attributed to increases in greenhouse gas concentrations in the atmosphere (see the TAR; IPCC 2001).

        This, along with the answer to the question “Has global warming stopped for now?”, may be the two points on which you and I cannot find agreement.

        I’d say the 30-year period is simply a “blip” (or maybe an “oscillation”) in a longer record, and is meaningless in itself.

        I’d also say that the record shows that “global warming has stopped” (but could very well start up again).

        But let’s leave it at that.

        Thanks again for your time and comments.

        Max

  106. Chris

    Your choice of 1997 as start and end point for the two periods (http://bit.ly/xFJErP) is a “clever trick” of obfuscation, because it is a discontinuity in global mean temperature: the temperature suddenly rises there.

    Very sad!

    Here is a proper example without a “trick”:

    For 1980 to 1990 => 0.07 deg C per decade (Based on 10 years)

    For 1990 to 2000 => 0.25 deg C per decade (Based on 10 years)

    For 1980 to 2000 => 0.16 deg C per decade (Based on 20 years)

    Approximate estimate for 1980 to 2000 => (0.07 + 0.25)/2 = 0.16 deg C per decade, which is an exact estimate!

    • The proper way to show that some rule is wrong is to find and show counterexamples, so I did.

      The same thing shows up regularly in the record, or in any auto-correlated random series. Here are a couple more (using the values you should enter in Wood for Trees). The window runs from the start of the first year to the start of the second.

      1969-1979: trend 0.084
      1979-1989: trend 0.063
      1969-1989: trend 0.129

      1988-1998: trend 0.036
      1998-2008: trend 0.102
      1988-2008: trend 0.195

      Here’s one in reverse.
      1964-1974: trend 0.282
      1974-1984: trend 0.418
      1964-1984: trend 0.160

      This isn’t obfuscation. This is an attempt to help show you why your rule is mathematically incorrect.

      You can look for an additional condition which would allow your rule to hold. (You spoke of needing no “flip”; but you’d have to define that before I could tell if the condition is mathematically sufficient.) The point is that this “flip” or whatever it is shows up quite a lot in the record.

      Cheers — Chris
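      Chris’s invitation, to pin down the condition under which the averaging rule could hold, can actually be answered in closed form with standard least-squares algebra (this identity is textbook regression material, not something stated in the thread). For two contiguous windows of n evenly spaced points each, the full-window OLS slope equals a weighted sum of the two half-window slopes plus a term in the jump between the half-window means. A sketch verifying the identity numerically on made-up data:

```python
import numpy as np

def full_slope_decomposed(y):
    """OLS slope over 2n evenly spaced points, rewritten in terms of the two
    half-window slopes (b1, b2) and the jump in half-window means (m2 - m1)."""
    n = y.size // 2
    t = np.arange(2 * n, dtype=float)
    b1 = np.polyfit(t[:n], y[:n], 1)[0]
    b2 = np.polyfit(t[n:], y[n:], 1)[0]
    m1, m2 = y[:n].mean(), y[n:].mean()
    # Exact identity: simple averaging keeps only a (b1 + b2) term and drops
    # the mean-shift term 3*n*(m2 - m1), so the rule fails whenever the
    # series changes level ("flips") between the two windows.
    return ((n**2 - 1) * (b1 + b2) / 2 + 3 * n * (m2 - m1)) / (4 * n**2 - 1)

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.0, 0.1, 20))    # an arbitrary wiggly 20-point series
direct = np.polyfit(np.arange(20.0), y, 1)[0]
decomposed = full_slope_decomposed(y)       # agrees with `direct` to rounding
```

      For large n the identity tends to (b1 + b2)/8 + (3/4)*(m2 - m1)/n, so most of a 20-year trend is carried by the difference between the two decadal means rather than by the decadal slopes themselves; the “no flip” condition is precisely that this mean-shift term be consistent with the sub-slopes.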

  107. Max

    Could you please explain the following calculation of yours?

    Let’s say IPCC projected warming of 0.2C per decade for the first decades of the new century. This is a projection of 0.4°C warming over the first two decades.

    The 11-year period 2001-2011 showed cooling of, let’s say 0.1C.

    The 9-year period 2012-2020 must now show warming of:

    (20 × 0.02 + 0.1)/9 = 0.5/9 ≈ 0.0556°C per year, or 0.556°C per decade, in order for the 20-year projection to have been correct.

    Right?

    • Max

      How about doing it like this.

      There was no warming in the first decade of the 21st century. (http://bit.ly/yeqsz0 )

      For IPCC’s projection of 0.2 deg C per decade to be correct, at the end of the second decade of the 21st century, the increase in temperature should be about 0.4 deg C. All this increase should occur in the second decade of the 21st century. As a result, to satisfy the IPCC’s projection, we should see a global warming rate of 0.4 deg C per decade for the period from 2010 to 2020.

      Do you agree?

      • If I may chip in, again…

        Girma, your intuition that the trend for 2000-2020 is unlikely to be as high as 0.2 C/decade is sound. I agree it is unlikely to be that high.

        It’s just that giving a formal mathematical basis for that likelihood is not particularly simple.

        For myself, I’ve suggested that 2005-2025 is likely to be below 0.2, and specifically somewhere from 0.15 to 0.2; but that’s a quick guess, not a calculation, based on rough consideration of decadal-scale variability.

        As for extending the 15-year trend from 1997 that Leake was looking at: I’ve hacked my spreadsheet to look for parallel cases in the past, where a low 15-year trend has developed into a high 20-year trend.

        The biggest such jump is back mid-century.

        Trend 1942-1957: -0.185
        Trend 1942-1962: -0.026 http://bit.ly/ycGDSO

        Trend increases by 0.159; except it’s from really negative to slightly negative, rather than being from flat to strongly positive.

        Another slightly smaller increase (by 0.113) on positive trends is this one

        Trend 1962-1977: -0.027
        Trend 1962-1982: 0.087 http://bit.ly/zIiyKU
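The spreadsheet scan described above can be sketched as follows. Since the HadCRUT3 series itself isn’t reproduced here, a synthetic annual series stands in, so only the scanning logic (not the quoted numbers) is illustrated:

```python
import math

def ols_slope(y):
    """Least-squares slope of y against 0..n-1 (per time step)."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((x - xbar) * (v - ybar) for x, v in enumerate(y))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Synthetic annual anomalies: a gentle rise plus a multidecadal wiggle.
start_year = 1940
series = [0.005 * i + 0.1 * math.sin(2 * math.pi * i / 22) for i in range(70)]

# For each start year, compare the 15-year and 20-year trends (C/decade).
jumps = []
for i in range(len(series) - 20 + 1):
    t15 = ols_slope(series[i:i + 15]) * 10
    t20 = ols_slope(series[i:i + 20]) * 10
    jumps.append((start_year + i, t15, t20))

# Report the start year with the biggest 15-to-20-year jump.
year, t15, t20 = max(jumps, key=lambda j: j[2] - j[1])
print(year, round(t15, 3), round(t20, 3))
```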

        However, one thing I think — and on this I certainly agree with Chief Hydrologist — just looking at trends alone isn’t all that useful. Knowing the physical causes for things is a better basis for looking at what may happen.

        Chief cites interesting work by Swanson and Tsonis, which proposes a physical basis for a lull extending out to roughly 2020. Furthermore, their proposal also highlights the two “jumps” I list above. This all constitutes a physical reason for suspecting the 20 year trend won’t be up to 0.2. They do predict the same underlying non-periodic warming trend will continue to be the main factor through this century; their model is that the internal variation is greater than the IPCC projection would suggest. This also implies GREATER climate sensitivity, not smaller. You need more sensitivity to get a big response to these internal factors. They make this point explicitly in their work.

        Be that as it may, their proposal, if it pans out, would almost certainly result in the IPCC prediction failing over the immediate future out to 2025.

        Cheers — Chris

      • Why are you doing your figuring with HadCRUT3? We are on a planet not measured by HadCRUT3. The region measured by HadCRUT3 is of little interest.

        Use GisTemp.

      • JCH, I am inclined to agree that HadCrut3 is not quite as good; but you don’t get a whole heck of a lot of difference with GISS. My spreadsheet uses both, plus the NCDC series, plus the two satellite series for the lower troposphere — UAH and RSS.

        I steer clear of the lower troposphere data (RSS and UAH) as that really is measuring something different, and because it has greater associated uncertainties. But it’s a handy second comparison.

        For the surface record, any of GISS, HadCrut3 and NCDC would be okay by me; so I just use what other people have been using already.

        HadCrut4 is due soon. I expect that to bring HadCrut more into line with measurement of the full globe, as GISS is doing.

      • I agree the difference is not great, but it puts to rest Girma’s nonsense about no warming.

        Tsonis-Swanson shift on GisTemp.

      • Chris

        We finally agree.

        Thanks for the long chat.

      • Chris

        Looks like we are in agreement.

        IPCC projected warming of 0.2°C for each of the “next two decades”. As the lead article points out, the first of these two decades has already passed without such warming, so the IPCC projection was clearly wrong for the first of the two decades.

        Our discussion has now shifted to whether or not IPCC’s projection will be wrong for both decades.

        Agree with your statement that for the IPCC projection of 0.2°C warming for each of the first two decades to have been correct over both decades, this would require warming of 0.4°C per decade for the second decade.

        One year of the second decade has already expired, again with no warming (in fact slight cooling), so it will take a rate of a bit more than 0.4°C per decade over the next 9 years to arrive at the forecast level averaged over both decades.

        You predict that this will probably not occur.

        I agree that it is highly unlikely to occur.

        Let’s see what happens.

        As to JCH’s quibble about whether to use GISS or HadCRUT, it appears to me that IPCC has used HadCRUT throughout its reports for many of the claims of past warming, so I think it is logical to stick with the indicator used by IPCC, rather than switching to UAH, GISS, RSS, NCDC, BEST or any other indicator when discussing IPCC statements.

        However, I agree that your general use of a combined indicator may not be a bad idea, since these appear to show different trends.

        So, in summary, we agree.

        Thanks for an interesting exchange.

        Max

      • I may be a little late to the party here, but I agree that warming over the next 9 years would need to be quite high in order to satisfy the IPCC projections. I am really curious as to what will happen if we get to 17 years with little or no warming.

      • IPCC reports are overviews of existing publications. All conclusions that are not directly supported by the published research are either sloppy writing or, more probably, simplifications formulated to make the message clearer, which unavoidably also makes it more questionable. This applies obviously to the warming trend discussed in this thread.

        The IPCC statements are based on average model results. Thus they represent a background trend that is modified by the natural variability. Most climate models show substantial internal variability, commonly lasting 10-20 years and in some cases up to 30 years. Thus deviations of this length are to be expected, while the models cannot make any more specific forecasts about their phase. The flat period of 15 years fits in the expected range of variability, but its likelihood is certainly rather low, perhaps of the order of 5%.

        What we have seen is not in contradiction with the scientific understanding that’s represented in the IPCC reports, but it does certainly give some support for the lower estimates for the strength of the trend or equivalently for the transient climate sensitivity.

        In this situation there are two kinds of error ranges. The first type, shown in the IPCC graphs, concerns the uncertainties in the background trend. The other, discussed much less visibly in the IPCC reports, corresponds to the extent of natural variability around this background trend. Leaving this second type of uncertainty out of most discussion may have appeared a reasonable choice when the text was formulated, but now we can see that the decision has caused quite a lot of confusion and has also offered unnecessary opportunities to the critics of the IPCC.
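The “perhaps of the order of 5%” figure can be checked with a rough Monte Carlo. The noise model below (white noise with an assumed 0.15°C standard deviation around a 0.2°C/decade trend) is purely illustrative, and real interannual variability is autocorrelated, which would change the odds:

```python
import random

random.seed(42)

def slope(y):
    """OLS slope of y against 0..n-1."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((x - xbar) * (v - ybar) for x, v in enumerate(y))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

trials, flat = 20000, 0
for _ in range(trials):
    # 15 annual values: 0.02 C/yr trend plus assumed white noise.
    y = [0.02 * t + random.gauss(0, 0.15) for t in range(15)]
    if slope(y) <= 0.005:  # "essentially flat": under 0.05 C/decade
        flat += 1

print(round(flat / trials, 3))  # fraction of near-flat 15-year windows
```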

      • From Manacker, as part of an attempt to summarize what is agreed…

        IPCC projected warming of 0.2°C for each of the “next two decades”. As the lead article points out, the first of these two decades has already passed without such warming, so the IPCC projection was clearly wrong for the first of the two decades.

        Except that this is, as I said very clearly in my summary, NOT what the IPCC says.

        If we are going to summarize please for the love of Harry let’s not just pretend agreement over things we consistently state as different.

        In my summary, I said this:

        (1) We both agree that 10 year windows don’t actually say much about the longer term. I also claim that the IPCC recognizes this as well, and is not predicting 10 year windows; but that’s only my understanding which apparently is still disputed by some folks.

        How did you miss this?

        The IPCC did not, and never has, predicted trends over a single decade; and in my summary I was careful not to attribute this understanding of the IPCC to you. And indeed it seems you continue to disagree with me on what the prediction means. Let’s acknowledge it.

        The only basis given for your interpretation is the use of the unit “degrees per decade”; the same unit used throughout the report at every scale. Some people — even you, it seems — take this as “clearly” meaning the IPCC prediction was for “each” of the next two decades. A word which is certainly not used by the IPCC!

        It’s just weird to think they’d do this, when the whole available record shows such large decadal variation. We DO NOT agree that this is the IPCC prediction.

        We agree that they have a prediction for a 20 year window. That’s it.

        Now if you continue to disagree with me, and claim that the IPCC meant “each” of the next two decades — despite all the context of the rest of the report I might add — fine. Suit yourself.

        But PLEASE. At least acknowledge that we DO NOT AGREE on this point of 10 year prediction. Read my comments for heavens sake. I tried to give a clear summary and you have turned it on its head.

        I also asked you to comment on my summary if I had it wrong anywhere. You seem to have skipped it entirely, and just invented some agreement on this point which has never been agreed.

        What’s so silly about this is that you have a good chance of the actual prediction failing. That prediction is one we do all agree on; the prediction of the trend over the 20 year window from 2005-2025.

        A failure of 10 year windows to match long term trend is really uninteresting. Those windows are all over the place for the entire record. It would be nothing new for them to continue to vary a lot from trend.

        A failure of the 20 year window to be an approximate match to the trend would be a lot more interesting because it really would be something different from the immediate past; and a genuine indication of something different physically from what is used in the conventional models.

        Not enormously different, and we can go into statistics if you like, but given potential and testable physical explanations being proposed (Tsonis et al), it matters.

      • Girma

        You ask:

        to satisfy the IPCC’s projection, we should see a global warming rate of 0.4 deg C per decade for the period from 2010 to 2020.

        Do you agree?

        Yes. I agree.

        Since we already have data for the full year of 2011, I have calculated the warming trend required over the next 9 years to reach 0.2 deg C per decade over the entire 20-year period (and that is a linear warming rate of around 0.556 degC per decade, or a linear warming of 0.5 degC over the 9-year period that is still left).

        Do you agree?

        Thanks,

        Max

    • Girma

      You ask about my “arithmetic”

      Let us assume that the first “decade” of the new century has passed (2001-2010) with slight cooling instead of the projected warming of 0.2 degC, based on the surface temperature record of HadCRUT3, used repeatedly by IPCC.

      And let’s also accept the HadCRUT3 temperature for the 11th year, 2011. We then have a linear trend over the 11-year period of around -0.1 degC, or a net cooling over the 11 years of close enough to -0.1 degC.

      So for IPCC’s projection of 0.2 degC per decade to be valid over both decades, we can calculate the warming required over the next 9 years.

      Total temperature change projected over 20 years = + 0.4 deg C.
      Actual temperature change observed, first 11 years = -0.1 deg C

      Temperature change required over next 9 years for forecast over both decades to be correct.

      0.4 – (-0.1) = 0.5 degC over next 9 years = linear rate of 0.5/0.9 ≈ 0.556 degC per decade.

      This is just simple arithmetic with no statistical stuff thrown in, but seems logical to me.

      Do you have a problem with it?

      Thanks.

      Max

      • Max, since year to year variation is very large, up to 0.2 degrees and more between two successive years, people use trend lines over windows to estimate rate of increase. There isn’t a stable end point value to support the calculations you are doing.

        Interestingly, if you do the proper regression calculations with a pure linear rise extending the current series, you need a lot more than 0.555 C/decade linear rise over the next 9 years. Or perhaps not that interesting, as pure linear rises aren’t what happens.

        But just for the heck of it, appending a pure linear rise to the end of HadCRUT3 gives this for trends (starting at 2001 as you have done)

        2001-2012: 11 year window, trend -0.065
        2001-2021, appending 0.5 C/decade linear rise to 2021: 0.074
        2001-2021, appending 0.6 C/decade linear rise to 2021: 0.117
        2001-2021, appending 0.7 C/decade linear rise to 2021: 0.159
        2001-2021, appending 0.8 C/decade linear rise to 2021: 0.202

        Appending linear rises like this isn’t a very useful thing to do; but I’m giving it here as another illustration that your intuitions about the mathematics of trend lines are letting you both down.

        What it would take to get to 0.2 (which we both agree is unlikely) is more like having the next 9 years with data around 0.4 above the present and not a lot of trend. A kind of step change, if you like. That kind of variation can be seen occurring in the record, but not often enough to give more than an outside chance that it could occur just now, in time to push 2001-2021 up to 0.2 C/decade.
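The append-a-linear-rise exercise above can be reproduced mechanically. The 11-year segment below is a synthetic stand-in (not actual HadCRUT3 data) with a slightly negative trend, so the printed trends only illustrate how appending different rises moves the 20-year number, not the exact figures quoted:

```python
def slope_per_decade(y):
    """OLS trend of an annual series, in units per decade."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((x - xbar) * (v - ybar) for x, v in enumerate(y))
    den = sum((x - xbar) ** 2 for x in range(n))
    return (num / den) * 10

# Synthetic 2001-2011 stand-in with a slightly negative trend.
base = [-0.0065 * t + (0.05 if t % 3 == 0 else -0.03) for t in range(11)]

# Append pure linear rises of 0.5-0.8 C/decade for the next 9 years and
# recompute the trend over the combined 20-year window.
for rise_per_year in (0.05, 0.06, 0.07, 0.08):
    ext = base + [base[-1] + rise_per_year * (k + 1) for k in range(9)]
    print(rise_per_year * 10, round(slope_per_decade(ext), 3))
```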

      • Chris – Of interest regarding what interval(s) are specified, the AR4 Synthesis Report was published in 2008, with a SPM in November 2007. From the Report: “For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios. {WGI 10.3, 10.7, SPM} “. This implies that “the next two decades” could mean a 2007-2027 interval or even a 2008-2028 interval, or alternatively, it might simply mean an interval estimated from model projections in 2005 as you suggest. It does not mean the first two decades of the 21st century, where the IPCC made no prediction.

      • Fred, I was doing the extension Max proposed, regardless of the IPCC.

        My understanding of the IPCC work is that they expected trends over 20 years to be comparatively stable, and on that basis gave a fairly strong statement for 2005-2025. From the graph, I take their “about 0.2” to be something in the range 0.15 to 0.25.

        I personally think 2001-2021 is equally as good as 2005-2025 for considering what 20-year windows are doing; the IPCC was not making any suggestion of the significance of one 20 years over another; so although I agree that 2005-2025 is the stated range, I won’t quibble with looking at other 20-year windows. Indeed, I think looking at all the 20-year windows is the best way to avoid getting lucky, or unlucky, with an outlier.

        As it happens, starting in 2001 is quite likely to become a new standard for “skeptics”; because it excludes a little bit of strong rise beforehand.

        Had Max decided on starting in the year 2000 (which is more likely to become used conventionally because it’s a nice round number, regardless of the precise definition for the start of the millennium) then the trend to the present is +0.01 rather than the -0.065 obtained starting in 2001. Statistically, this isn’t particularly significant; but it does mean starting in 2001 is going to show smaller trend numbers; so we can expect that to be preferred by people who are motivated to find low trends. I’m willing to assume that’s a coincidence and Max was just picking the first year of the new millennium.

        I’m going to continue to look at all the windows, and won’t quibble about Max proposing the one starting in 2001.

      • Chris – Your comment makes perfect sense, and my own comment was meant merely to correct apparent misrepresentations of what the IPCC predicted.

        Twenty year windows, wherever one starts, are better than 10 year windows, but there remains a serious misunderstanding of how to interpret even these. If the interval is long enough (where “enough” could be variously defined, but could be cited for example as 17 years), one can begin to get a good idea, statistically, as to whether we are seeing a rising, falling, or flat trend for that interval. The misunderstanding lies in interpreting such a trend as controverting a longer term effect from some particular climate dynamic such as CO2-mediated forcing. It is possible for the “true trend”, as estimated statistically, to be flat because of the composite phenomena operating over the specified interval, even though the longer trend is positive (for the last century) or negative (during certain paleoclimatologic intervals). In fact, the 25-30 years from mid-century to about 1976 were indeed quite flat in reality, not just statistically, and the reasons are fairly well understood from the work of Martin Wild and others to reflect mainly aerosol cooling as a counteracting influence on GHG-mediated warming; whether there were other phenomena also operating is possible but more speculative.

        The bottom line is that true bumps, dips, and flat times punctuate the climate record, and need not be spurious in order to understand them to be fluctuations around a longer term trend, which for the past 100 years has been upward, with the years since 1950 well explained mainly by GHG-mediated forcing, plus a smaller contribution from other factors.

      • Sounds good to me, Fred.

        In which case the IPCC prediction might be seen as an inappropriate expectation about the nature of short-term (20 years being short term) projections.

        That seems quite plausible to me, but I’m still willing to wait and see what pans out in coming decades. I expect the nature of quasi-periodic variation to become better understood; it’s an active research issue, with a number of proposals that scientists are making and testing.

      • Max,

        I agree with your calculation.

        However, I think the drop in the trend in the period from 2001 to 2010 is about –0.03 deg C, instead of –0.1 deg C.

        http://bit.ly/zSBuxN

  108. Chief Hydrologist

    @webby

    I said I would be friends. We have been fitting floods to PDFs for a long time. It is a similar process. It is used for calculating 1,000- or 10,000-year floods – where data doesn’t exist. You can look up the functions anywhere.

    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=logpearson.gif

    There is no real reason why wind speed should not have a skewed distribution. With complexity, however, we are looking for both extreme events and changes in regime. Extremes are dragon-kings that may not be captured by traditional power-law distributions. Thus my calculated 10,000-year storm may not be either.

    The change in regimes is more subtle. In Australia we have 20- to 40-year hydrological regimes in which the average summer rainfall in the wet regime is 4 to 6 times the average in a dry regime. This doesn’t show up in a frequency distribution but is a critical issue for water resources planning, for instance. So you can neglect complexity – but in the real world it can bite you hard on the bum.

    Robert I Ellison
    Chief Hydrologist
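The flood-frequency calculation alluded to above (fitting annual peak flows to a distribution and reading off rare quantiles) can be sketched like this. A log-normal fit is used as the zero-skew special case of the log-Pearson III curve linked in the comment, and the peak-flow numbers are invented for illustration:

```python
import math
from statistics import NormalDist, mean, stdev

# Invented annual peak flows (m^3/s) standing in for a gauged record.
peaks = [420, 310, 550, 290, 800, 615, 330, 480, 950, 370,
         510, 270, 690, 440, 1200, 390, 560, 305, 730, 460]

# Fit a log-normal distribution: mean and sd of the log10 flows.
logs = [math.log10(q) for q in peaks]
mu, sigma = mean(logs), stdev(logs)

for T in (100, 1000, 10000):
    # T-year flood: the (1 - 1/T) quantile of the fitted distribution.
    z = NormalDist().inv_cdf(1 - 1 / T)
    print(T, round(10 ** (mu + z * sigma)))
```

The point Chief makes still stands: a quantile extrapolated this far past the data says nothing about regime shifts or dragon-king extremes.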

    • Chief, No one has any idea what you are talking about. You show a graph with neither axis labelled that was likely pulled from some statistics reference. You are a complete poseur, and it is beyond me why anyone would think that you have anything intelligent to add to the debate.

      • WHT

        Your latest “ad hom” on the Chief makes you appear to be arrogant and opinionated. [You may not actually be either, but that is the perception I get.]

        I personally do not conclude that the Chief is a “poseur”, as you do.

        I find the Chief’s posts interesting and informative but, then again, I am still open to new thoughts and opinions, as long as these are based on some sort of empirical data rather than simply dogma.

        Just my unsolicited input (which you are free to ignore).

        Max

      • Chief Hydrologist

        Webby – it is taken from decades of fitting flood data to a frequency distribution. The axes are obviously, in context, flow and cumulative probability – exactly the same as in your pointless graph on wind speed.

        I give you a quite simple example of deterministic chaos in rainfall regimes and suggest that this variability is not captured by a frequency distribution analysis. It is a simple concept that you should be able to grasp. The low values from dry decades are clustered at one end of the curve and the wet at the other. But the meaning is lost – we cannot tell from a flood frequency analysis what the difference between regimes is.

        The other observation on extremes is suggested by Didier Sornette (2009) Dragon-Kings, Black Swans and the Prediction of Crises

        ‘One of the most remarkable emergent properties of natural and social sciences is that they are punctuated by rare large events, which often dominate their organization and lead to huge losses. This statement is usually quantified by heavy-tailed distributions of event sizes. Here, we present evidence that there is “life” beyond power laws: we introduce the concept of dragon-kings to refer to the existence of transient organization into extreme events that are statistically and mechanistically different from the rest of their smaller siblings. This realization opens the way for a systematic theory of predictability of catastrophes, which is outlined here and illustrated.’

        You have very little understanding of the natural or physical sciences beyond fitting data to curves. I am not impressed with fitting a function to data that is not understood at all. I am impressed by a deep knowledge of the physical reality behind the data. Data is fundamental – but unless you can see the pattern in the weft it is just meaningless data processing.

        Nothing you do is new. You take some data and fit a curve to it again and again. If I suggest that there is a broader understanding and you can’t or won’t cross the threshold – that is not my problem. But if I waste my time saying something that you can’t or won’t try to understand – and then you gratuitously insult me – well, there is a limit and you have passed it, Webby.

      • Manacker, Bring your game to the table or be content to sit on the sidelines.

      • Chief Hydrologist

        Webby – this is an open forum and, while we are all prone to repetition, either drop the snark or be original and/or amusing. So you play your one trick endlessly and expect us to be impressed, and are otherwise noxious, self-absorbed and have an inflated sense of your own capabilities. Now to me that seems very like an autism spectrum disorder. But I am not your therapist – so instead of gratuitous insults why don’t you just play with someone else – someone who gives a rat’s arse.

      • Once again, Chief can’t tell the difference between competitive trash-talking and psychological disorders.

  109. Markus Fitzhenry

    “but in the real world it can bite you hard on the bum.”

    Ouch!

  110. Blah, blah, blah.

    This debate about the warming since 1998 is merely a goal post shift of the debate centered around no significant warming since 1995 that used to be making the rounds.

    Why isn’t it making the rounds anymore? Because with a bit more data, the significance crossed the 95% confidence interval that was the criterion of the original question.

    • Chris G

      The reason the “no warming since 1995” statement is no longer making the rounds is two-fold.

      First of all, IPCC has made specific projections of anticipated warming, both in the TAR in 2001 (based on 2000 data) and in AR4 in early 2007 (based on 2005 data).

      These projections were:

      TAR: warming of 0.15 to 0.3 degC per decade
      AR4: warming of 0.2 degC per decade

      The first decade after TAR was also the first decade of the new century, 2001-2010.

      We now have data for this decade – in fact, for the 11-year period starting in 2001.

      These data (HadCRUT3) show a net cooling trend over the 11-year period

      I personally do not like starting any trend analysis with the record hot ENSO year of 1998, because this leads to criticism that the trend is skewed because of the high start year, thereby deflecting the entire discussion to a side track.

      Starting a trend with the beginning of the new century (January 1, 2001) makes more sense to me for that reason.

      The second reason is that time has moved on. A statement made 3-4 years ago based on trends starting in 1995 is less interesting today than a statement made today based on the latest information available.

      Max

      • Max,
        It is less interesting because it can be demonstrated to be no longer true; whereas, if you shift the goal posts a few years, and shorten the interval, as you have just done, then it can become a point of contention again.

    • Chris G

      You missed the point of the actual question yourself. It was a question about 15 years. There still isn’t significant warming in the last 15 years [or any].

      What also rather sinks your contention is that if Phil Jones were asked the same question today about 1995 [not 15 years] he would have to admit that there still isn’t significant warming. It only appeared briefly as a result of the 2010 El Nino.

      • I believe you are wrong, the warming since 1995 is significant above the 95% confidence interval.

        http://www.bbc.co.uk/news/science-environment-13719510

        Do you have any evidence to the contrary?

      • Chris G –

        Why isn’t it making the rounds anymore? Because with a bit more data, the significance crossed the 95% confidence interval that was the criterion of the original question.

        You can say the same thing again now, but in reverse – because the ‘bit more data’ you quote is only 2010. Of course 2011 was the second coldest year of the decade. If you add that, the significance fails at 95%.

      • Anteros, citation/reference? Unfounded claim?

        Otherwise, sure, let’s have some fun with non-significant intervals on graphs.

        http://www.woodfortrees.org/plot/gistemp/mean:12/plot/gistemp/mean:66/plot/gistemp/from:1900/to:1917/trend/plot/gistemp/from:1944/to:1960/trend/plot/gistemp/from:1958/to:1972/trend/plot/gistemp/from:1981/to:1988/trend/plot/gistemp/from:1988/to:1996/trend

        If you applied the same logic that you are applying to the period from 1998 through today to argue that global warming is over, to other periods that show a leveling or cooling, you’d be about 0 for 5. What makes you think something is different now?

        You guys are still going down the up escalator.
        http://www.skepticalscience.com/news.php?n=1261

      • Chris G –

        The unfounded claim is yours. You came out with it – remember?
        Justify it if you can.

        And you still missed the point about the original question. It was because it was 15 years. I didn’t choose it [and I wouldn’t have bothered]

        Finally you make an unfounded assumption that I or ‘guys’ like me are arguing that global warming is over. You’ve got the wrong guy, pal.

        I’m just arguing that you made a false claim – it would be nice to see you back it up. ‘Warming since 1995’.

        Significant?

      • Anteros,
        Let’s see. The claim that there was no warming since 1995 was made based on a statement by Dr. Phil Jones in 2009(?), who said that it was just barely not significant. I provided another statement made by Dr. Jones in 2011, where he said that the warming since 1995 had crossed the 95% confidence threshold.

        Did you miss that part, where I matched the original source with the original source, intentionally, or is your reading comprehension not that good?

        You claimed that it is not significant now. I’m curious if that can be backed up. What is the basis of your claim?

      • Chris G –

        I believe you are wrong, the warming since 1995 is significant above the 95% confidence interval.

        You see, the problem here exists because of the word is. What you did was quote Phil Jones talking about something that was relevant more than a year ago. That is, with data to the end of the year 2010. It is now Feb 2012.

        Thus your quote was unrelated to your claim. If you claim that warming since 1995 is significant, you need to be able to back up that claim. I don’t believe you can which is why I’m calling you on it.

        Again.

      • I believe that June of 2011 is less than a year ago, and I have provided a source, and you have not, which is why I’m calling you on it.
        Again.

      • Chris G –

        I think you’re starting to sink now!

        The link you provided did not use data from June 2011. If you had read it, you would realise the significance was based on annual data. Phil Jones specifically says – the data from 2010.

        Just like I said – more than a year ago.

        If you want to claim warming from 1995 IS significant, you’re going to have to provide some evidence.

        You can’t – because there isn’t any!!

      • What part of I have provided a reference and you have not do you fail to understand?

        Let’s try re-phrasing the question. What evidence can you provide that the significance of the warming since 1995 has dropped below 95% confidence? Or, what makes you believe that adding 2011 data, which was warmer than 1995 data, would lower the confidence that there has been warming since 1995?

        Sinking? I’m still waiting for you to fire a shot.

      • Chris G –

        Time to get in a lifeboat quick!

        We’ve established that you’ve made a claim for which you have no evidence.
        You made the rather juvenile mistake of using some evidence for some other claim and, as there isn’t any for the claim you actually made, are now thrashing around hoping that this ‘other’ evidence is good enough.
        Of course, it isn’t. Not by a long shot.
        My suspicion is you know there isn’t any evidence, so you’re absolutely stymied.
        Hence the call to abandon ship.

        If you can’t back up your claim I say you’re wasting everybody’s time.

        I have to suspect you just made the claim up out of thin air – never a good idea :)

      • Anteros, I don’t know what other people have said on this; but by my own calculation, the regression line in HadCrut3 from the start of 1995 up to the present shows trend of +0.162, with conventional 95% confidence limits on the regression slope of 0.126 to 0.198

        So If Phil Jones was asked today about the trend from 1995 to now, he’d say statistically significant warming; because that’s what anyone will get using the numbers.

        Regardless of what references can be given to other sources, the data is right there now and does indeed show a statistically significant warming trend over that period.

        There are quite a number of shorter windows up to the present that don’t have statistically significant warming, but none that I can see which show statistically significant cooling; though it gets very close with the 9 year window from the start of 2003.

        2003-2012: Slope -0.061, 95% confidence bounds -0.127 to +0.004

        My perspective is that the predictions or hypotheses that matter most are those with some actual physical basis for expectations. It’s just too easy to find exceptional patterns when you are unconstrained in where you look for them. What makes the possibility of an extended lull exciting to me is the existence of real physical ideas about what can cause it… beyond looking for patterns of various cycles or lines fitted to data. Those are ideas worth testing, and I am indeed most interested to see what the next decade brings.

      • Anteros and Chris G — my apologies; dumb mistake in calculation.

        Where I said:

        Anteros, I don’t know what other people have said on this; but by my own calculation, the regression line in HadCrut3 from the start of 1995 up to the present shows trend of +0.162, with conventional 95% confidence limits on the regression slope of 0.126 to 0.198

        That should be: the regression line in HadCrut3 from the start of 1995 up to the present shows trend of +0.081, with conventional 95% confidence limits on the regression slope of 0.046 to 0.116

        The rest of the text is okay; I think. This is still statistically significant warming.

        Apologies — Chris

  111. Girma


    The question was raised “what does the warming trend need to be for the IPCC projection of 0.2°C per decade to be reached over the first two decades of the new century, i.e. the period 2001 through 2020?”

    A quick arithmetic calculation shows this to be 0.56°C per decade.

    It also works using the linear trend approach.

    Try it (for fun).

    1 – Plot the 11 years of monthly HadCRUT3 data in Excel.

    2 – Assume that for the remaining 9 years we need a warming rate of 0.56/decade or 0.004667 per month.

    3 – Add to the actual 2001-2011 temperature data a projection through 2020, using the equation:
    y = 0.00467x + 0.39 (where “x” is the month starting with “1” for January 2012)

    4 – Draw the linear trend line over the entire 20 years.

    5 – You will see that the equation is:
    y = 0.00167x + 0.3216

    6 – So the linear trend is 0.00167*120 = 0.2°C per decade (and IPCC’s projection turned out OK for the entire 20-year period, i.e. both decades).

    But I do not suspect that this is going to happen – do you?

    Max
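
Max’s steps 1–6 can be sketched in a few lines. The 132 observed months below are a hypothetical flat stand-in at 0.39°C, which yields an overall trend near 0.24°C per decade; reproducing the quoted 0.2 exactly needs the real HadCRUT3 2001-2011 monthlies (roughly flat to slightly declining):

```python
import numpy as np

def overall_trend(observed, proj_slope, proj_intercept, n_proj=108):
    """Steps 1-6 above: append a linear projection
    y = proj_slope * x + proj_intercept (x = 1 for Jan 2012)
    to the observed 2001-2011 monthlies, then fit one OLS line
    over the full 20 years. Returns deg C per decade."""
    x_proj = np.arange(1, n_proj + 1, dtype=float)
    y = np.concatenate([np.asarray(observed, dtype=float),
                        proj_slope * x_proj + proj_intercept])
    slope_per_month = np.polyfit(np.arange(len(y)), y, 1)[0]
    return slope_per_month * 120.0

# Hypothetical stand-in for the 132 months of 2001-2011 data:
# flat at 0.39 C (use the real HadCRUT3 monthlies to get Max's numbers)
trend = overall_trend(np.full(132, 0.39), 0.00467, 0.39)
```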

    • You propose extending with y = 0.00467x + 0.39

      Where did you get 0.39? That number determines a step jump, if any, as well as the linear extension.

      • Chris

        To answer your question “where did you get the 0.39C [to start the future projection]?”:

        If you plot the HadCRUT3 monthly data from January 2001 through December 2011 you get a linear trend line that ends at 0.39°C (for December 2011).

        This is pretty arbitrary, but that’s the point I picked as the starting point for the 2012-2020 projection.

        If one starts at a lower point (e.g. the December 2011 anomaly or the 2012 average), the linear trend for 2012-2020 would have to be slightly steeper to reach the average 0.2°C per decade projected by IPCC for the entire 20-year period.

        Hope this answers your question.

        Max

      • Yes, that’s fine. That’s a good basis for picking it. Thanks.

  112. JCH, Chris G and Anteros

    We have learned (Santer) that it takes precisely 17 years for a temperature trend to be statistically significant.

    JCH has shown us, based on HadCRUT3 data, that the most recent 17 years (1995-2011) still show a very slight warming trend.

    Next year the most recent 17-year trend will start in 1996. This will probably show an almost flat trend, possibly still with a bit of warming (depending how this year works out).

    By 2014 we will have a trend starting in 1997 and going through 2013. This may be essentially flat.

    The next year’s “17-year” period will be 1998 through 2014, and IPCC supporters will object to the fact that the trend started with a record high ENSO year, so is “meaningless”.

    So we may have to wait until early 2016 to have a full 17-year period showing statistically significant cooling before the true believers finally concede that it has, indeed, stopped warming.

    But maybe by then Santer will come out with a new study showing that it takes 30 years to have a statistically significant trend.

    Who knows?

    Max

    • Santer’s paper with the 17 year window proposal is on the record; and if it gets falsified, then so be it. If that happens, and if he then recognizes and reworks, isn’t that what we should expect of a good scientist?

      Santer’s work on variations contrasts with suggestions by others (Tsonis et al for example) that there’s a significant internal variation which confounds even windows as long as 17 years and more. But both Tsonis and Santer see this as variation over a non-periodic anthropogenic forcing trend. This is also the indication of some pretty basic physics recognized by just about everyone working on the subject, regardless of differences over quasi-periodic variation.

      • Why do they see this as a ‘non-periodic anthropogenic forcing trend’ when it can’t be shown? They are seeing more than is evident or perhaps more than is there.
        ==================

      • Chief Hydrologist

        I am sure that Santer’s basis was simply that temperature changed by unspecified internal variation. On the basis of statistics this could lead to no warming for 17 years. Anything beyond that means that some other type of unspecified variation might be happening. I would be more impressed if he specified what the natural variation might be.

        Abrupt climate changes can be seen working through the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation, the Southern Annular Mode, the Arctic Oscillation, the Indian Ocean Dipole and other measures of ocean and atmospheric states. These are measurements of sea surface temperature and atmospheric pressure over more than 100 years which show evidence for abrupt change to new climate conditions that persist for up to a few decades before shifting again. Global rainfall, ecological and flood records likewise show evidence for abrupt shifts and regimes that persist for decades. In Australia, there was less frequent flooding from early last century to the mid-1940s, more frequent flooding to the late 1970s and again a low rainfall regime to recent times.

        Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

        Santer is a bit irrelevant if we begin to understand the new physics of climate and the old sciences of oceanography and hydrology.

        Robert I Ellison
        Chief Hydrologist

      • Markus Fitzhenry

        I found this interesting. Although it is only a micro-function of climate, it might indicate some effect of changing air pressure on ENSO oscillation.

        Could a flow of atmospheric mass between poles cause creation of different ozone concentrations, hence cloud cover, in different hemispheres?

        Figures 1 and 2 seem to suggest there is a connection between pressure/ozone/cloud cover and ENSO. SST seems to follow pressure differentials.

        Is the build up of air pressure in the Eastern Pacific a catalyst for an active El Niño, and upwelling of colder water from South America the eventual switch from El Niño to La Niña, rather than a simple transfer of warmer surface waters from the Central/Western Pacific towards colder waters in Australasia?

        dWN (Differential pressure between latitude 30-40°north and 50-60° north, the pressure driving the South Westerly winds in the Northern Hemisphere). SST (Sea Surface Temperature).

        Figure 1 1948-1977
        http://climatechange1.files.wordpress.com/2011/01/11.jpg

        Figure 2 1978-2010
        http://climatechange1.files.wordpress.com/2011/01/12.jpg

        Figures 1 and 2 show us that the temperature of the sea in the mid latitudes of the northern hemisphere varies directly with the differential pressure driving the Westerly winds. When the wind blows harder we expect the sea to cool. But it warms. The cooling of the sea between 1948 and 1977 and the warming thereafter are entirely accounted for in the shift in the mass of the atmosphere that lies behind the change in wind strength and the flux in ozone that causes the cloud cover to change.

      • Chief Hydrologist

        Oh – and as the shifts are chaotic – the next one might take us anywhere in the climate phase space. You are assuming it is a cycle of some sort – it is not – it is non-stationary and non-Gaussian. Swanson might see it as returning to a warm mode a decade or so hence. I don’t think it is fair to characterise the Tsonis position as such.

      • Markus Fitzhenry

        “Swanson might see it as returning to a warm mode a decade or so hence”

        Beauty, I can cut my CO2 footprint by watering with just the shed roof runoff for the next ten years. Save on new pumps too.

      • Chief Hydrologist

        Markus,

        There are major zones of upwelling in the north and south eastern Pacific resulting in some of the richest environments on Earth. The difference between upwelling and not is the temperature of the water above the thermocline, which in turn depends on the strength of the Californian and Peruvian Currents moving cold polar waters along the coast of North and South America.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=pacificcurrents.jpg

        Sea level pressure in the polar and sub-polar regions is measured by the Southern and Northern Annular Modes – the changes in sea level pressure create the potential for the anomalous movement of atmospheric mass into lower latitudes. The movement drives storms spinning off the polar vortices and surface ocean eddies moving in Ekman spirals towards the coast of the Americas.

        Together the oceanic upwelling in the north and south Pacific and the movement of atmospheric mass from and to the poles provide almost all of the decadal variability in Earth’s climate. They are linked quite obviously – as is shown by the objective network method of Tsonis. The polar changes in sea level pressure seem to be driven by ozone and UV interactions in the stratosphere.

        Robert I Ellison
        Chief Hydrologist

      • Chief Hydrologist

        Markus,

        We are in a La Niña-dominated period in which average summer rainfall is 4 to 6 times that in an El Niño period.

        You can see that here – http://www.esrl.noaa.gov/psd/enso/mei/ – La Niña (blue) dominant to 1976 – El Niño (red) dominant to 1998 and La Niña since.

        So much less drought in Miriam Vale for another decade or 3.

        Here is a very good ENSO animation.

      • Markus Fitzhenry

        Thanks Chief, I’m happy with, the polar changes in sea level pressure seem to be driven by ozone and UV interactions in the stratosphere.

        It’s been suggested to me that solar insolation and pressure account for 99.6% of Earth’s temperature. I’ll just have to keep searching before I can disprove it.

      • Chief Hydrologist

        whoops – http://www.duke.edu/~ts24/ENSO/

      • Chief Hydrologist

        I meant to include this as well – http://www.atmos.colostate.edu/ao/introduction.html

      • Chief, all the papers I have seen by Tsonis speak of variation imposed on top of a non-periodic anthropogenic forcing trend.

        That’s pretty obvious on many levels; the forcing itself is basic physics. The system response is where it gets difficult and this is where Tsonis’ work is most interesting. Tsonis does recognize the positive impact on temperatures of carbon dioxide, of course. So other natural modes of change are, inevitably, superimposed on that non-periodic change.

        That also is why 65 year windows don’t work for looking at the underlying trend; the non-periodic forcing has not been steady on that scale; it took off in a big way more like 40 or 50 years ago.

        Swanson mentioned the possible return to the underlying trend by about 2020; but that was in discussion on his own behalf. In their joint papers they both emphasize that there is still an underlying anthropogenic trend.

        Another interesting point they make jointly is on implications of their proposal for sensitivity of the climate system. This is from the conclusion of the 2009 paper:

        Finally, it is vital to note that there is no comfort to be gained by having a climate with a significant degree of internal variability, even if it results in a near-term cessation of global warming. It is straightforward to argue that a climate with significant internal variability is a climate that is very sensitive to applied anthropogenic radiative anomalies (c.f. Roe [2009]). If the role of internal variability in the climate system is as large as this analysis would seem to suggest, warming over the 21st century may well be larger than that predicted by the current generation of models, given the propensity of those models to underestimate climate internal variability

        Cheers — Chris

      • Chief Hydrologist

        Chris,

        I don’t mind at all – use the Swanson period of 1979 to 1997. That is, exclude the dragon-kings of 1976/1977 and 1998.

        Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

      • Chief Hydrologist

        I’ll drop this link in 2 – http://judithcurry.com/2012/02/07/trends-change-points-hypotheses/#comment-168279 – complexity is complexity and it goes well beyond a few bits of simple physics.

      • I understand that, Chief. Swanson and Tsonis are quite correct to be looking for change points over those windows.

        The physics of the sources of the internal variation is highly complex; what is even more limiting, however, is the lack of good comprehensive data bearing upon the ocean. It’s not as easy to measure as surface temperatures, as I’m sure you know.

        None of that invalidates the physics of what is known very well indeed; the greenhouse forcing. No matter how complex the quasi-periodic internal variability; there’s quite unambiguously a strong anthropogenic factor at work. It’s a total misreading of the work of Tsonis et al to see them as presenting an alternative to the well established anthropogenic factors.

        In fact, if the quasi-periodic effects on global temperature are as strong as Tsonis proposes, then this would strongly suggest that the sensitivity of models is at present too low; and that the heating on the scale of the coming century is likely to be at the high end, or worse, of IPCC expectations. That’s direct from their papers; not just my spin.

        They are proposing that the long term heating is stronger, but not as steady, as the IPCC has projected.

      • Chief Hydrologist

        Hi Chris,

        The ocean data is almost entirely suspect. The heat that should be there is showing up in the deep oceans > 700m in ARGO. Whoops.

        Sensitivity in chaotic systems is not linear. Near the shift or tipping points climate is theoretically very sensitive – away from those points not so much. So the question is – are we near a tipping point? Well – do you feel lucky punk?

      • MAX – 2012 is likely to be warmer than 2011, so the 17-year trend at the end of 2012 will likely be stronger than the 17-year trend ending in January.

        I would make a graph using HadCRUtch3, but I see no point in looking at a regional temperature series.

  113. Let’s look at the last 40 years from GISTEMP.
    http://www.woodfortrees.org/plot/gistemp/from:1970/to:2012/mean:12/plot/gistemp/from:1970/to:2012/trend

    What we see here is something that could easily continue more or less linearly. Note in particular how the El Ninos have been getting warmer and so have the La Nina minima. Also in this context, there is nothing special about the last 10-15 years, as we have seen similar periods in the 80’s and 90’s. This trend is about 0.15 degrees per decade, which is near the lower end of IPCC estimates for the future as they expect an acceleration rather than linear behavior.

    • Jim D

      Why look at only 40 years?

      Look at the full 160+ years for which we have a record.

      Even this is a short time when looking at climate variations, but 40 years is simply a “blip”.

      Max

      • OK, but the 12-month running mean that I showed is also dominated by ENSO.

      • Chief Hydrologist

        The annual values are dominated by ENSO but also by the persistence of ENSO in any year. Thus if there were 8 months of El Niño in one year the temperature is likely to be higher than in a year with 4 months of El Niño – independent of the size of the events. Data smoothing can be misleading.


    • Jim D, This is not the way I see the data you plotted. It looks to me as if the increase is from about 0.0 to about 0.5 over 40 years or about 0.125 per decade. Curious that this is at the low end of the IPCC projections.

      • -0.05 to +0.6 in 40 years is about 0.16 per decade. This is about right for the IPCC sensitivity of 3 degrees per doubling over this period.

      • Jim, if you want to compare with CO2, use a logarithm. The impact on equilibrium temperature is logarithmic, and the forcing is 5.35 W/m^2 per unit natural log of CO2.

        CO2 increase over the last 40 years is roughly from 325 ppm in 1970 to 390 in 2010.

        The change in natural log is Ln[390] – Ln[325] = Ln[390/325] = 0.1823

        So the extra forcing is about 0.975 W/m^2

        Note that there are plenty of other external forcings to consider; both positive and negative. As it happens, the net magnitude of the other forcings comes close to canceling out… with a large uncertainty, however.

        A comparison with CO2 only, therefore, is not going to tell you much at all, which is why you don’t see it in the IPCC reports. They consider total forcings instead; see chapter 2 of AR4, especially section 2.9, the concluding chapter synthesis.

        With an equilibrium sensitivity of about 0.8 degrees per W/m^2, that’s an increase of 0.78 degrees, or 0.195 C/decade

        The temperature trend over the last 40 years has been about 0.16 C/decade.

        The other major problem here is that you can’t really compare a trend against equilibrium sensitivity; you’re better off comparing with the transient climate response. (See discussion of these in my comments above with Vaughan Pratt.) TCR is estimated (95% confidence) in the range 1 to 3 degrees per CO2 doubling, or about 0.3 to 0.8 degrees per W/m^2.

        Cheers — Chris
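
The forcing arithmetic above is easy to check directly (a sketch only; the caveat about transient response versus equilibrium sensitivity still applies):

```python
import math

# CO2-only forcing over 1970-2010 and the equilibrium warming it
# implies, per the figures in the comment above
c_1970, c_2010 = 325.0, 390.0               # ppm (approximate)
forcing = 5.35 * math.log(c_2010 / c_1970)  # W/m^2, about 0.98
eq_warming = 0.8 * forcing                  # deg C, at 0.8 C per W/m^2
per_decade = eq_warming / 4.0               # 40 years = 4 decades
```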

      • Chris

        The temperature trend over the last 40 years has been about 0.16 C/decade.

        This 0.16 deg C per decade is a cyclic warming rate, as there was the same warming rate from 1910 to 1940. The only persistent warming rate is only 0.06 deg C per decade, as shown here => http://bit.ly/Aei4Nd

        Assuming the current warming rate continues into the future may lead to incorrect conclusion if it were done in the 1940s => http://bit.ly/x6dzC2

        The global warming IS NOT 0.16 deg C. It is only 0.06 deg C per decade!

      • TCR is estimated (95% confidence) in the range 1 to 3 degrees per CO2 doubling, or about 0.3 to 0.8 degrees per W/m^2.

        As you were kind enough to offer a definition of equilibrium climate sensitivity, Chris, could you be prevailed on to do the same for TCR? (If you already did so, my apologies. This thread alone is approaching 800 comments.)

      • The formal definition is

        The global annual mean surface air temperature change… centred at the time of CO2 doubling in a 1% per year compound CO2 increase scenario.

        See AR4 WG1, section 8.6.2.1. It takes 70 years to double CO2 with a 1% per year rise, and the mean temperature is estimated over a 20 year window centered on that time, including the last 10 years of increase and then another 10 years held steady at 2xCO2.

        It’s illustrated in AR3, figure 9.1.
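
The 70-year figure follows directly from the quoted definition:

```python
import math

# With CO2 growing 1% per year compound, doubling takes
# (1.01)**t = 2, i.e. t = ln(2) / ln(1.01), roughly 70 years
t_double = math.log(2.0) / math.log(1.01)
```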

    • Jim D

      there is nothing special about the last 10-15 years, as we have seen similar periods in the 80′s and 90′s.

      Whether or not an 11-year stop to global warming despite CO2 levels reaching an all-time high is “statistically significant” is another question, but your statement is wrong.

      This slight cooling trend is not “nothing special”.

      There have been no cooling trends of this length (11 years) since the end of the mid-century cooling cycle around the early 1970s.

      The closest was an almost flat 9-year cooling period from 1979 to 1987.

      So, to put it in IPCC wording:

      The currently observed cooling of the 21st century so far is unprecedented in at least the previous 40 years

      But, as I pointed out to you, looking at 40-year periods is a waste of time.

      And over the long term modern record since 1850 there have been two recorded cooling cycles, each lasting around 30 years, so the current cooling may simply be the beginning of a third such cycle.

      Who knows?

      Not you. Not me.

      Not IPCC, who predicted warming for the first two decades (that has not yet started).

      Not the solar scientists, who predict a prolonged cooling cycle.

      We’ll all just have to wait and see, Jim.

      Max

  114. For your reference:

    The global warming pause => http://bit.ly/A96mFQ

  115. Re: “The implications for the next several decades are that the current trend will continue until the next climate shift, at some unknown point in the future.”

    Why is that unknown point in the future several decades off?

    • Chief Hydrologist

      It isn’t – we are taking from the instrumental record the periods of these shifts in both temperature and oceans. Some of us have looked at the proxy record – these periods tend to last for 20 to 40 years but I wouldn’t take it to the bank just yet.

      • So, Hyp. III is an ill-formed expression then.
        We probably shouldn’t bother to debate its merits until it is revised to be internally consistent.

      • Chris G

        Yeah.

        “Hype. III”

        “CO2 as the control knob.”

        Both uncorroborated hypotheses.

        [But the CO2 control knob story has gotten more hype.]

        Max

      • It isn’t, necessarily, but the concatenation of cooling phases of the oceanic oscillations suggests that even an abrupt shift sooner than the expected decoupling might as well be a cooling one as a warming one.
        ========================

      • Chief Hydrologist

        Chris – so I answer your question honestly and you take it as an opportunity to have a snark. I should have known better. You remain utterly clueless but disturbingly confident. In reality your confidence should be in an inverse relationship to cluelessness. What the freakin’ hell do you mean by internally consistent? These are freakin’ chaotic bifurcations and the freakin’ planet could fall off the freakin’ edge of the freakin’ climate precipice any freakin’ minute now. And you want things to be simple and freakin’ internally consistent whatever that freakin’ means – freakin’ moron.

      • OK

        Chaotic system implies: Current trend will continue for several decades.
        Chaotic system implies: Trend could change at any time.

        That does not appear to be internally consistent to me. Sorry if you can’t see that for yourself.

      • Actually, it is simple. The trend is likely to continue but may change at any time, in any direction.
        ==============

      • Chief Hydrologist

        Kim,

        We will be in a finite area of climate phase space until we aren’t.

        What’s so freakin’ difficult?

        Regards

      • Kim,
        That is like saying, this coin is likely to come up heads, but it may come up tails. You could assert that we know nothing about how the earth will respond to forcings, that would at least be consistent. Not very useful, but consistent. I suppose that could be what Dr. Curry meant; not sure.

        We could test the idea that we know nothing. If we know nothing, there should be 0 correlation between the energy within the earth system, and the thermodynamic properties of the surface and atmosphere. Hmm, that does not appear to be the case.

        Silliness aside, yeah, we know that there are very likely attractors and non-linearity in the system, but you can’t keep energy inflow about the same and reduce energy outflow and have the system go to a lower energy state without breaking a law about the conservation of matter and energy.

      • Chief Hydrologist

        So here is where I look at the data at its most reliable and deduce that energy out declined over the period showing up in the deep ocean – exactly as the theory would suggest.

        http://judithcurry.com/2012/02/09/aq/#comment-167390

        This was my little joke and I am surprised that no one picked me up on it. First of all energy out doesn’t decline in greenhouse theory because the atmosphere heats up emitting more energy. Second, the big change was in the short wave as a result of cloud cover changes. This latter is related to both the polar pressure anomalies (SAM and NAM) and ENSO+PDO – together accounting for 119% (non-linear) of climate variation. The whole lot is a single dynamically complex Earth system. So we need to stop thinking in terms of forcings and feedbacks and think instead in terms of control variables, sensitive dependence and noisy bifurcation. This implies that climate is immensely sensitive at near bifurcations but not otherwise.

        So we have discovered that clouds change – especially where you have changing sea surface temperature – and this comes totally out of left field. Longer term there are THC and ice changes that are potentially runaway feedbacks. Temperature changes of 20 degrees C – in places – in as little as a decade are possible. The intrinsic variability of the system saturates at the limits of natural variability seen in the Quaternary at least. Glacials or interglacials. You ante up or you fold. It’s up to you pardner – there’s no comfortable shibboleth to curl up in and this cowboy’s got a blue horse to saddle up.

      • Chaotic bifurcation is a no-nothing theory. We can’t predict anything using it, but know that an external forcing will always overpower the minor chaotic fluctuations that may occur. In other words, chaos is annoying noise in the bigger scheme of things.

        The only caveat is whether that noise will also scale with the external forcing — in other words the fluctuations growing bigger with added heat to the system. This is understood with respect to added heat raising the kinetic energy of the system – (1) increasing the average atmospheric wind speed, and (2) added heat to the ocean allowing for greater occasional releases into the atmosphere. These are likely subtle effects, but they will only go in that direction. Remember, reducing heat leads to lower thermal excitation.

        Look at my BesselK wind model that I have referred to elsewhere in the thread. This has a thermal excitation built-in, as the maximum entropy estimator for average energy is nothing but a thermalized Boltzmann partition function describing the system’s entropy.

        Entropy measures the thermal excitation of the system and that scales with temperature. Chief’s beloved non-linearities and chaos are just a way for the disordered system to explore the ergodic range of states, thus making the statistical mechanical/information theory valid. Look at how a naive pseudo-random number generator is created — by sticking a handful of non-linear functions together, you will explore the state space (see the topic of iterated maps). Voila, it will approach an ergodic limit. Then add in energy constraints, such as the mean energy of the system, and you have the Boltzmann estimator, aka MaxEnt estimation.
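
A toy illustration of the iterated-map point (the logistic map at r = 4; purely illustrative, nothing climatic about it): the orbit wanders over the whole unit interval, which is the state-space exploration being described.

```python
def logistic_orbit(x0, n):
    """Iterate the chaotic logistic map x -> 4x(1-x) on [0, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        x = min(max(x, 0.0), 1.0)  # guard against float round-off
        xs.append(x)
    return xs

# From an arbitrary seed the orbit spreads over the whole interval:
# every decile of (0, 1) gets visited
orbit = logistic_orbit(0.1234, 20000)
deciles = {min(int(v * 10), 9) for v in orbit}
```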

      • Chief Hydrologist

        Natural variation saturates at the limits of – well – natural variability. Snowball Earth to blue green planet. I wish he would leave me alone. I am being cyber stalked by a cyber bully with mad theories about how things should be according to the practice of stochastic probability distributions. He has a probability distribution of wind speeds somewhere on the planet. Yeah – so what?

        ‘Chaos is a no(sic)-nothing theory…’ – says it all really.

      • Chief Hydrologist

        ‘The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space.’

        So dynamical systems have an average over the entire volume of the phase space. In climate this is not useful because we are interested in the properties of a finite volume of phase space. Whether it is a glacial or an interglacial – and not the average of the two.

        You could do a stochastic probability distribution for global temperature – but all it would tell you is the frequency of occurrence of one temperature or another. This is not terribly useful information, but it has been inflated in Webby’s mind to a breakthrough of cosmic proportions.

      • CH,

        So dynamical systems have an average over the entire volume of the phase space. In climate this is not useful because we are interested in the properties of a finite volume of phase space. Whether it is a glacial or an interglacial – and not the average of the two.

        The average is always a weighted average. Thus the influence of the other alternative (glacial or interglacial) may be negligible for the ensemble average. If we have a chaotic system with two attractors where the choice of the attractor is not controlled by external forcing (like Milankovitch cycles) but by random factors then this does not work, but if the external forcings dominate in the choice then there are no problems of the type you indicate.

        “You could do a stochastic probability distribution for global temperature – but all it would tell you is the frequency of occurrence of one temperature or another. This is not terribly useful information, but it has been inflated in Webby’s mind to a breakthrough of cosmic proportions.”

        I tell you what I would do for temperature, which is to analyze the heat equation and evaluate how much the ocean would uptake assuming there is an uncertainty in the effective diffusivity and a smearing of the stimulating thermal interface. Wait a second, I have already done that:
        http://theoilconundrum.blogspot.com/2012/01/thermal-diffusion-and-missing-heat.html
        This is an interesting exercise and it gives me a greater understanding of Hansen’s original model.

      • You remain utterly clueless

        Hey, turn down that clewier-than-thou thing a notch, CH. ;)

        (Justification of the spelling here.)

  116. @michael

    Are you going to give a critique of why you think it was ‘pathetic’?

    Or are we all to just gaze with awe upon your words and take them to heart anyway?

  117. Girma

    We now have a very slight cooling trend since January 1998 of -0.006°C per decade. This is as close to “flat” as you can get and is too short a time interval to be statistically significant.
    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1998/to:2012/plot/hadcrut3vgl/from:1998/to:2012/trend
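    Computing a trend like the one quoted is a one-line least-squares fit; a Python sketch on synthetic monthly data (not the actual HadCRUT3 series):

```python
import numpy as np

def trend_per_decade(anomalies):
    """Least-squares linear trend of a monthly anomaly series, in deg C per decade."""
    t_years = np.arange(len(anomalies)) / 12.0   # time axis in years
    slope_per_year = np.polyfit(t_years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Synthetic illustration only: a statistically flat series plus noise,
# standing in for 14 years of monthly anomalies.
rng = np.random.default_rng(0)
flat_series = rng.normal(0.4, 0.1, size=14 * 12)
trend = trend_per_decade(flat_series)            # should be close to zero
```

    Whether such a small slope is statistically significant is, of course, exactly the point under dispute in this thread.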

    However, if the 2011 annual average temperature continues for another three years we will have a 17-year record with a cooling trend of -0.04°C per decade.

    According to Santer, this will represent a statistically significant interval.

    It will also raise some serious questions regarding the climate sensitivity assumptions of the IPCC models and the presumption that “CO2 is the primary climate control knob”.

    And, finally, it will tend to validate your hypothesis that our planet’s temperature moves in 30-year warming/cooling cycles on a slightly tilted axis, as a result primarily of natural forcing factors and variability, the mechanisms for which we still do not fully understand today.

    We do not know what will really happen over these next three years, but it should be very exciting for everybody (as even the IPCC supporters, such as Chris Ho-Stuart, Jim D and others here, should agree).

    Cheers.

    Max

    • Max,
      Every additional cool year is additional evidence against strong warming. Every additional cool year should move the estimated range of transient climate sensitivity downwards. There are no precise limits that tell when some particular approaches should be declared erroneous and disregarded, but no scientist should claim that a cool year does not change the quantitative estimates of various parameters or the relative weight of various plausible models.

      The 17-year limit of Santer is valid under some specific assumptions, but that set of assumptions can be modified in many different ways. There isn’t anything magic in 17 years, as there isn’t anything magic in any other length of interval.

  118. It’s funny how the arguments about the statistical significance of warming get heated every time, although they are meaningless, as William Briggs has explained many times on his blog.

    For answering the question of whether warming is statistically significant, we need to take into account only the errors of measurement and of combining the measurements. With that knowledge we can tell, at some level of statistical significance, whether the temperature is higher at the end of some period. This is the answer that tells directly whether there has been warming – just look at the data, as Briggs stated.

    Asking for evidence on trends is a different issue. That requires a model of the stochastic variations in the data and a precise definition of what the trend means in the particular consideration. Only when all that is defined precisely can we properly ask whether the data provide evidence for a non-zero value for the model parameter that’s called the trend.

    Most of the analyses use monthly data for the estimation of stochastic variability without removal of interannual systematic variations and without appropriate emphasis given to autocorrelations and other model-dependent assumptions. Such analyses are worthless in answering the question about the statistical significance of the trend (whatever that is).

  119. Indeed, the climate has obviously switched to a cool mode for at least 10 years now, as it already did in 1880 and 1940, due to a change of the PDO…
    Actually the best way to identify those changes of climate regime is to perform
    data smoothing:

    [a] A 3-year averaging removes ENSO variability but keeps solar and PDO cycles
    [b] An 11-year averaging also removes solar cycles but makes quite reproducible patterns caused by PDO cycles clearly apparent:
    – Slight cooling trend of -0.04°C/decade observed over the [1880 – 1910] and [1940 – 1970] periods, and roughly since 2000.
    – Warming trend of +0.16°C/decade observed over the [1910 – 1940] and [1970 – 2000] periods
    [c] A 60-year averaging also makes PDO variability disappear, so that only the background trend of +0.06°C/decade, as observed since 1880, remains.

    trends

    Influence of PDO on temperature
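    The averaging recipe in [a]–[c] amounts to running means of different widths; a minimal Python sketch on a synthetic series (the trend and cycle parameters below are illustrative assumptions, not a reanalysis of the real data):

```python
import numpy as np

def running_mean(series, window):
    """Centred moving average over `window` samples (annual data assumed)."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

# Synthetic annual series, illustration only: a +0.06 C/decade background
# trend plus a 60-year "PDO-like" oscillation of +/- 0.1 C.
years = np.arange(1880, 2012)
series = 0.006 * (years - 1880) + 0.1 * np.sin(2 * np.pi * (years - 1880) / 60.0)

smooth_11 = running_mean(series, 11)  # suppresses an ~11-year (solar-like) cycle
smooth_60 = running_mean(series, 60)  # averages the 60-year oscillation away,
                                      # leaving only the background trend
```

    With a window equal to the oscillation period, the cycle cancels exactly and a straight-line fit to the smoothed series recovers the background slope.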

    • Eric

      I agree Eric.

      What you are showing is the following oscillation pattern of the global mean temperature => http://bit.ly/wIi4TM

      Here is an independent source supporting your conclusion:
      http://jisao.washington.edu/pdo/

      The “Pacific Decadal Oscillation” (PDO) is a long-lived El Niño-like pattern of Pacific climate variability. While the two climate oscillations have similar spatial climate fingerprints, they have very different behavior in time. Fisheries scientist Steven Hare coined the term “Pacific Decadal Oscillation” (PDO) in 1996 while researching connections between Alaska salmon production cycles and Pacific climate (his dissertation topic with advisor Robert Francis). Two main characteristics distinguish PDO from El Niño/Southern Oscillation (ENSO): first, 20th century PDO “events” persisted for 20-to-30 years, while typical ENSO events persisted for 6 to 18 months; second, the climatic fingerprints of the PDO are most visible in the North Pacific/North American sector, while secondary signatures exist in the tropics – the opposite is true for ENSO. Several independent studies find evidence for just two full PDO cycles in the past century: “cool” PDO regimes prevailed from 1890-1924 and again from 1947-1976, while “warm” PDO regimes dominated from 1925-1946 and from 1977 through (at least) the mid-1990’s.

      • As a result, the global warming rate is only about 0.06 deg C per decade, not 0.16 deg C per decade as claimed by the IPCC => http://bit.ly/Aei4Nd

        This means that IPCC’s climate sensitivity of 3 deg C should be modified to 3*0.06/0.16 = 1.1 deg C.

      • Fully agreed Girma.

        Another outcome of this analysis is that it formally falsifies the Foster & Rahmstorf paper that is referred to on Tamino’s blog.

        According to their analysis, a steady warming trend of almost +0.2°C/decade over the past 30 years would remain even after having removed natural variability, and especially ENSO influence.
        This analysis is just pure nonsense, especially knowing that the ENSO index has been mainly positive over these past 30 years. This means that ENSO had mainly a warming influence, and that removing this influence should have led to a much lower warming trend: typically down to +0.06°C/decade, corresponding to the background trend once all (known) natural variability is removed. (If natural variability can explain a variability of +/- 0.1°C per decade, it could also explain the background trend of +0.06°C per decade, which is likely to correspond to a long-term cycle – typically a millennium long.)

        This analysis is also coherent with M. Latif’s opinion, referred to by Judy in her last week-in-review post.
        According to Latif (who is an IPCC lead author!):
        1) Half of the warming is caused by natural variability (especially solar and oceans’ cycles)
        2) Warming has stopped for 12 years.
        3) This stop is mainly caused by PDO switching into cool mode.

  121. I am only interpreting the following data.

    http://bit.ly/njBdvW

    The above data shows a persistent warming of 0.06 deg C per decade with a cyclic cooling and warming of 0.5 deg C every 30 years. This result gives a cyclic 30-year global cooling of 0.32 deg C (= 0.5 – 0.06 deg/decade x 3 decade) and a cyclic 30-year global warming of 0.68 deg C (= 0.5 + 0.06 deg/decade x 3 decade).

    As this result has been valid for the last 130 years, it is reasonable to assume it will also be valid for the next 20 years.

    I am not going beyond 20 years before and after both ends.

    The GMT for the 2000s peak is about 0.45 deg C.
    As a result, the predicted GMT for the 2030s is about 0.13 deg C (= 0.45-0.32).

    The predicted GMT by the IPCC for the 2030s is about 1 deg C.
    http://bit.ly/n1S1Jf

    Let us see which prediction is realized.
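    The arithmetic in the comment above is easy to check; a small Python sketch using only the numbers as stated (taking the commenter’s cyclic premise as given, not endorsing it):

```python
# Premises as stated: a 0.06 C/decade underlying trend superposed on a
# 0.5 C swing during each 30-year warming or cooling phase.
underlying = 0.06   # deg C per decade
swing = 0.5         # deg C per 30-year half-cycle
decades = 3

cooling_30yr = swing - underlying * decades   # 0.5 - 0.18 = 0.32 deg C
warming_30yr = swing + underlying * decades   # 0.5 + 0.18 = 0.68 deg C

peak_2000s = 0.45                             # deg C, per the comment
predicted_2030s = peak_2000s - cooling_30yr   # about 0.13 deg C
```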

  122. Chief Hydrologist

    @pekka

    We have a chaotic system with multiple attractors and, Tomas has argued, an almost infinite phase space. However, external forcings always dominate. We are not talking about some vague little internal wobble caused by God only knows what. The control variable might be Milankovitch cycles – but then there are feedbacks that include runaway increases in snow and ice. I have read that albedo varies from 25% to 50% – some 85 W/m2 of forcing.

    • Rob,

      The main point in my comment was that having a complex phase space with regions very different from the present is not necessarily a problem as all the disturbing remote parts of the phase space may get a small enough weight to keep them insignificant.

      Tomas has argued for the dominance of spatiotemporal chaos, which is much more complex than classical examples of chaos in systems of a few discrete variables. To me it’s obvious that the real Earth system has at some level properties of the type Tomas has discussed, but not at all obvious that these features dominate. They may very well act as minor disturbances to a simpler overall picture. The chaotic features may also be smoothed and effectively removed by stochastic influences. If stochasticity dominates over chaotic properties, ergodicity is a better assumption over finite time intervals. If essential and effectively random transitions between significantly different attractors dominate, then ergodicity is not a good assumption and the problem is inherently more difficult.

      • It’s OK to assume ergodicity so long as we all acknowledge that any subsequent predictions or conclusions to be drawn by the reader must be heavily qualified.

        AGW doesn’t explicitly state that ergodicity is being assumed, and thus the level of uncertainty in this hypothesis has been, and still is, way understated.

      • Its OK to assume ergodicity so long as we all acknowledge that any subsequent predictions or conclusions to be drawn by the reader must be heavily qualified.

        Translation: It’s ok to assume ergodicity so long as we don’t claim any forecasting skill under that assumption.

        From a signal processing standpoint, ergodicity asserts a lower bound on relevant frequencies, equivalently an upper bound on relevant periods.

        Looking at the last 162 years of global land-sea temperature, even if we detrend by subtracting a reliable estimate of the anthropogenic contribution to climate variability, the ocean oscillations show that any such upper bound on period cannot be less than 150 years. And looking at the last millennium it seems hard to justify anything less than half a millennium.

        At which point the relevance of the ergodic assumption to temperature in 2100 becomes moot.

        The ergodic assumption is neither sound nor necessary for reliable forecasting, so long as the distribution of variance between frequencies is understood.

      • Chief Hydrologist

        Pekka,

        You have simply restated your case which essentially relies on the absence of abrupt shifts of climate to fundamentally different states. Such things as drops of 10 degrees in places in as little as a decade and abrupt shifts in hydrology and ocean states. The example I gave was in hydrology. Decadal regimes with abrupt shifts between high rainfall and low rainfall states. Stochastic frequency distributions are done all the time. However, stratifying the regimes and then doing a probability distribution gives a much better and more socially useful answer. In this sense – as I said – we are interested in the properties of a finite volume of the phase space rather than an average over the entire bifurcated phase space.

        Tomas distinguished between the temporal chaos of models and the spatio-temporal chaos of Earth’s climate – different in the number of strange attractors and the volume of the phase space, but not different in principle. They are both governed by fluid mixing and dissipation processes described by the 3D Navier-Stokes PDEs.

        ‘Atmospheric and oceanic forcings are strongest at global equilibrium scales of 10^7 m and seasons to millennia. Fluid mixing and dissipation occur at microscales of 10^-3 m and 10^-3 s, and cloud particulate transformations happen at 10^-6 m or smaller. Observed intrinsic variability is spectrally broad band across all intermediate scales. A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’ http://www.pnas.org/content/104/21/8709.full

        I think the models might be closer to modelling the reality of the climate system than many realise. Simply as dynamically complex – deterministically chaotic – rather than linear or – seemingly more unlikely still – reliably random.

        Robert I Ellison
        Chief Hydrologist

      • Here we come back to the observation that no model is right, but some models are useful. Based on that, we must try to decide which models are likely to be useful and how they should be used to get the best out of them.

        I have also started to worry more and more about the significance of the fact that the Earth system is not in equilibrium, and that we are looking at changes in the development of a system in transition rather than impacts on equilibrium states.

        One good example is heat transfer to the oceans. Even if scientists can make accurate and extensive enough measurements to tell how the heat content of the oceans is changing, that doesn’t tell how much of the change is due to recent warming without knowledge of the earlier rates of heat flow into the oceans. I think that there are good reasons to believe that a persistent warming of the deep ocean has been going on since the last ice age and is continuing now. More recent climate variations like the LIA make their contribution to that as well.

        The standard argument that we can predict climate better than weather is certainly based on some kind of ergodicity, perhaps not for the full phase space but for a fraction of it at the minimum. The same idea is basic for the concept of ES.

        All the above are proofs that the models are indeed not right. They cast doubts on their usefulness as well, but they don’t prove that the models are not useful.

        Given that the models may be useful, what should convince us that they indeed are?

      • The fact that we invoke any aspect of diffusion means that a variation of random walk is involved. And random walk needs some ergodic assumptions, of the type that Pekka is referring to, which can be weighted and thus importance sampled.

        Check what Lubos Motl is writing about recently:
        http://motls.blogspot.com/2012/02/willis-eschenbach-and-argo-on-warmest.html
        ” Starting with hopelessly complicated situations that can’t be solved simply isn’t the right scientific attitude. The right scientific attitude is to cover the “space of possible situations” by special cases which are solvable and whose physics you largely remember and by calculating the more complex intermediate problems by various detailed numerical and perturbative methods and interpolation.”

        That is what I call ergodic pragmatism.

      • Chief Hydrologist

        ‘Sometimes, a computer or a long, unreadable calculation is needed to figure out what we need to know. But we should still be able to understand the clean situation, perhaps on the top of our heads. The oceans without clouds and with the Sun above your head is an example of such a clean and omnipresent problem which is solvable by a set of ordinary differential equations. One of the results you should be able to get in this case is that the ocean surface temperature is close to 31 °C in this setup. This result is universally important and omnipresent across the globe (well, the tropics).

        Note that near 31 °C, there’s already a lot of water vapor around (in the nearby atmosphere) and it arguably dominates the greenhouse effect. I think that in such ocean conditions, the carbon dioxide will tend to be rather irrelevant in comparison. This means that you can’t really exceed something like 31 °C in the vicinity of the ocean. And it’s still pleasant to live near/in 31 °C warm water, isn’t it? Such water is warm but it still cools your 37 °C body. That’s one of thousands of reasons why it’s totally irrational to be afraid of global warming.’ Some more from the Lubos post.

        I think I was talking about mixing and dissipation as described by Navier-Stokes.

        My first thought on the ARGO data – is so what? Of course there is a maximum temperature but there is nothing to say that that is a limit.

      • Pekka Pirilä writes “Given that the models may be useful, what should convince us that they indeed are?”

        I have no doubt at all that models are useful. What I object to is the misuse of models. So far as I am concerned, the main use of models is to help design the next experiment, or the next data to be observed. What I object to is the use of models to predict the future when these models have not been fully validated. So far as I can see, FEW of the models used in support of CAGW, and certainly not the GCMs, have ever been validated, so anything that suggests they can be a guide as to what is going to happen at ANY time in the future is voodoo science.

        I also object to the suggestion that the output of non-validated models has a similar status to observed data. Again, any suggestion that the output of non-validated models is in any way reliable is just another example of voodoo science.

      • @jim cripwell

        Richard Betts of the UK Met Office has suggested that model output should be called ‘simulations’ rather than ‘experiments’.

        Seems to me that this is a small step, but in the right direction. ‘Experiments’ they most certainly are not. ‘Simulations’ allows a little more of the huge uncertainties to be conveyed.

      • Latimer, you write “Seems to me that this is a small step, but in the right direction.”

        I heartily disagree. Until it is accepted that any model that has not been fully validated cannot tell us anything at all about the future, the misuse of models with respect to CAGW will continue. We basically have to clear out all the nonsense that has been written, in the peer-reviewed literature no less, of the form “I have a model. I have run it. Here are the results.” We have to make it crystal clear that non-validated models have absolutely no predictive power whatsoever.

      • Jim,

        Sometimes we wish to have the best possible prediction even when we have no thoroughly validated models at our disposal. As every prediction is always based on a model of some kind, we may well judge it prudent to pick one or several of the less than perfectly validated models to make one or more predictions.

        At the same time some other people may be happy without any predictions as they believe that the resulting decisions will turn out to be those they prefer for some reason.

        This is exactly what happens with the climate issue. Those worried about the risks of climate change try to use the models to get the best possible predictions, while those who oppose any action for ideological reasons say that you should not give any value to those results. (Here the ideological reasons are largely related to political philosophies and attitudes to the role of government.)

      • Pekka you write “As every prediction is always based on a model of some kind, we may well judge it prudent to pick one or several of the less than perfectly validated models to make one or more predictions”

        Sorry, Pekka, our philosophies are so vastly different that we will never agree. I rely ONLY on hard, measured data. This idea that somehow a non-validated model is better than nothing at all is, IMHO, just plain nonsense. A non-validated model is worse than useless for predicting the future. It gives the uninitiated the idea that something useful is being said, when, in fact, all there is, is a wild-arsed guess.

        I look at the alleged science that is supposed to show that CAGW exists, and it is as full of holes as Swiss cheese. Then there is the complete reliance on non-validated models, and a disregard for the observed data. This nonsense of people trying to claim that Hansen’s 1988 predictions are coming true is really pathetic.

        So, as I say, I am sorry. But when I see the hard, measured data, I will take some notice. Until then I will do all I can to convince people that CAGW is a hoax.

      • Jim,

        I understand – and I have already told you why you do that.

      • Until then I will do all I can to convince people that CAGW is a hoax.

        In other words until you have some evidence, you will continue to judge the scientists as being guilty of perpetrating a hoax.

        You must hail from one of those countries where the accused is guilty until proven innocent.

      • What I object to is the use of models to predict the future, when these models have not been fully validated.

        Jim, the phrase “non-validated model” occurs so often in your comments as to prompt me to ask, what would count as a validated model for you?

        Given that global climate is not something we can perform experiments on, is validation possible even in principle?

        If not, then you could save 14 characters each time by shortening “non-validated model” to just “model”.

      • Vaughan Pratt

        You seem to be stumbling over Jim Cripwell’s expression “non-validated model”, by questioning whether a model can, in fact be validated at all.

        Let me offer a suggestion:

        If a climate model has consistently shown that it is able to make projections that agree substantially with the actual observed data over a significant time period, one could reasonably argue that this model has been validated.

        An example would have been the model cited by James E. Hansen, when he made his now-famous projection of future warming back in 1988.

        Arguing about whether the 24-year period 1988-2011 is “statistically significant” sounds to me a lot like the debates on “How many angels can dance on the head of a pin?”

        Hansen’s input assumption (for the business-as-usual case) was that human GHG emissions would continue to grow at the rate which had been most recently observed.

        In actual fact, according to data published by the CDIAC, the emissions of the principal human GHG (CO2) rose at a rate around 12% higher than the rate assumed by Hansen.

        Yet the rate of warming was only about half the rate which Hansen’s model had forecast.

        So here is a case of a model which has been invalidated by the actual physical observations. Very likely, some input parameter assumed for Hansen’s model (the climate sensitivity?) – or possibly several parameters – was wrong.

        On the other hand, had the actual temperature trend been as Hansen’s model projected, one could have concluded that Hansen’s model (including the input assumptions made) were validated (or corroborated) by the observed data.

        Obviously a 100-year forecast cannot be fully disproved without waiting the full 100 years.

        But if the trend line of the forecast deviates greatly from the trend line actually observed, one can safely conclude that there is something basically wrong with the model assumptions and that these are, therefore, invalidated.

        Do you agree?

        Max

      • Check what Lubos Motl is writing about recently:

        On that site Lubos repeats the fallacy that because water vapor dominates CO2 in absolute quantity, increasing CO2 can’t have any effect.

        What this ignores is that the dependence of surface temperature on any greenhouse gas (like H2O or CO2) is logarithmic in its level.

        That is, if there were a thousand water molecules in the atmosphere for every CO2 molecule, and the climate sensitivities for each were the same, you would have to add a thousand water molecules to achieve the same impact on temperature as adding one CO2 molecule.

        Being the dominant greenhouse gas quantity-wise is actually a handicap in terms of competing with a minority gas. What matters is how fast the gas is increasing, expressed as a CAGR. Right now CO2 is increasing at 0.66% per year, but assuming a continuation of the past 54 years, in 10 years’ time it will be increasing at 0.83% per year.

        If water vapor increases at a similar rate it will have a similar effect. But that would double the effect of rising CO2 alone. But even if it doesn’t increase at all (the more likely situation), that won’t stop the heating effect of increasing CO2, even if there’s a thousand times as much water vapor as CO2.

        Lubos’s fallacy is related to but not exactly the same as Angstrom’s fallacy around 1900 that the atmosphere is now saturated with CO2 whence adding more CO2 can’t block any more radiation. What this ignores is the many thousands of currently unblocked lines that further CO2 would block.

        In Lubos’s case, since water vapor isn’t blocking those lines (though clouds are when present), no amount of water vapor can prevent increasing CO2 from blocking increasingly many of those lines. It’s just a minor variant of Angstrom’s fallacy.
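        The logarithmic dependence described above can be made concrete with the widely used simplified forcing fit ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998); a sketch for illustration only, not a line-by-line radiative calculation:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0, alpha=5.35):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 curve fit).

    The dependence is logarithmic: every doubling adds the same increment,
    alpha * ln(2) ~ 3.7 W/m^2, regardless of the starting concentration.
    """
    return alpha * math.log(c_ppm / c0_ppm)

# Doubling 280 -> 560 ppm and doubling 560 -> 1120 ppm add equal forcing,
# which is why absolute abundance alone settles nothing.
first_doubling = co2_forcing(560.0) - co2_forcing(280.0)
second_doubling = co2_forcing(1120.0) - co2_forcing(560.0)
```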

      • So here is a case of a model, which has been invalidated by the actual physical observations. Very likely, some input parameter assumed for Hansen’s model (the climate sensitivity?) – or possibly several parameters – was wrong .

        Hansen clearly explains this if you read his early-to-mid-1980’s research papers (such as this one). Excess heat is what is produced by GHG, but due to heat capacity differences and transient diffusion, the temperatures of ocean and atmosphere+land do not necessarily track until far down the road.

        It’s a difficult quandary, as the diffusion parameters have always had a large uncertainty, and the only way to extract a more precise estimate is to wait long enough for the transient response to be measurable, i.e. the old validation problem. Hansen said this:

        “The existence of unrealized warming complicates the CO2 and trace gas issue and limits the near-term effectiveness of reductions in greenhouse gas emissions”

        I think the best path is to apply uncertainty to the diffusion coefficient and this will generate an arguably more realistic representation of the transient response.

        “Yet the rate of warming was only ahalf the rate, which Hansen’s model had forecast.”

        Which warming are you looking at, ocean or land?
        His 1981 forecast was conservative, if compared to the current GISTEMP. See this chart I put together:
        http://img85.imageshack.us/img85/6623/hansen1981gis.gif
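        The transient-diffusion point can be illustrated with a toy one-dimensional ocean column; the grid, depth, heat-capacity and diffusivity values below are illustrative assumptions for a sketch, not Hansen’s actual model:

```python
import numpy as np

def surface_response(kappa, forcing=1.0, depth=1000.0, years=30, nz=50):
    """Toy vertical-diffusion ocean column under a constant surface heat flux.

    kappa   : effective diffusivity, m^2/s -- the uncertain parameter at issue
    forcing : surface flux, W/m^2
    Returns the top-layer temperature anomaly (deg C) after `years`.
    """
    rho_c = 4.1e6                          # volumetric heat capacity of seawater, J/(m^3 K)
    dz = depth / nz
    dt = 0.4 * dz * dz / kappa             # explicit-scheme stability limit
    T = np.zeros(nz)
    for _ in range(int(years * 3.156e7 / dt)):
        lap = np.zeros(nz)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
        lap[0] = (T[1] - T[0]) / dz**2     # insulated top (flux added separately)
        lap[-1] = (T[-2] - T[-1]) / dz**2  # insulated bottom
        T += kappa * dt * lap
        T[0] += forcing * dt / (rho_c * dz)
    return T[0]

# Faster vertical mixing carries heat down, so the surface shows less warming
# for the same forcing -- the "unrealized warming" of Hansen's quote.
slow_mixing = surface_response(kappa=1e-5)
fast_mixing = surface_response(kappa=1e-4)
```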

      • Max,

        I was asking Jim what would count as a validated model for him. However your criterion is a very well-known one and therefore worth responding to independently of how Jim judges these things.

        Your criterion is the one that hedge fund managers use when pitching their forecasting skills based on past performance.

        Now Warren Buffett’s Berkshire Hathaway fund has an enviable track record in that respect. Its difference from other funds is consistency, having been performing like that for decades.

        Some small fraction of funds have been “validated” in your sense by their past performance during some time period, say a few years. However, the fraction of those validated funds that will be validated again in the next time period will be about the same as for funds generally, while some funds that were not validated in the previous period will be validated this time around.

        Is there some difference between hedge funds and climate models which makes your proposal for how to validate the latter effective when it doesn’t seem to work for the former?

      • “Now Warren Buffett’s Berkshire Hathaway fund has an enviable track record in that respect. Its difference from other funds is consistency, having been performing like that for decades.”

        The difference between Berkshire Hathaway and other investment funds is not their superior “forecasting skills.” The difference is Warren Buffett. He does not just pick stocks; he picks stocks and then forces changes in the running of the companies where necessary to maximize their profits. (Not to mention avoiding debt like the plague.) It’s not the forecasting, it’s the management. He is just better at it than the other venture capitalists.

        When Buffett goes, so does the seeming economic invincibility of Berkshire Hathaway.

        Attributing the success of BH to “forecasting skills” is like attributing the success of the New York Yankees to their minor league scouts.

        (Is there anything in the world that can’t be spun into an argument in favor of CAGW?)

      • Here we go again.

        Point 1. Emissions rose at about 1% per year from 1990 to 2002. Then they took off in a big way, rising at over 3% per year. You can’t just take the average; the fact that it was lower than scenario A or B for that early bit means the rest is playing catch up, and the emission over the whole period is better characterized as well under what scenario A proposes.

        I repeat. You have the data; you can check this yourself. I’m not making it up.

        Much more important however…

        Point 2. The input to the climate model is the atmospheric concentration; not the emissions. The emissions were simply a guide to projecting how fast atmospheric concentrations grow; the input to the model is the atmospheric concentration. We’ve GOT the actual input to the model, for all scenarios. (Scenario Data HERE) We can compare directly with what actually took place. But to do this, you compare the measured atmospheric CO2 with the scenario atmospheric CO2.

        Lots of people have done this comparison. It’s the honest comparison to do. And the growth rate in the CO2 increment is about 1.5% per annum.

        Also — as with emissions — the growth rate up to 2000 is lower… only about 0.5%, and it sped up this century. The growth rate is much less steady than emissions, of course.

        Quite apart from the fact that it is just false to describe the emissions over the prediction period as rising steadily at 2% per annum, the scenario alone is not the prediction. What matters is the climate model; and what drives climate is CO2 in the atmosphere.

        The forcing to the model turns out to have been a bit UNDER scenario B, and a LOT under scenario A. So if you are genuinely serious about wanting to evaluate the skill of the prediction, you use scenario B. Scenario A is just wrong; and manacker’s statements about the data – even the emissions, let alone the model input – are incorrect. As anyone with a spreadsheet can check.

        This is not difficult. It’s not ambiguous. It’s not a story made up post-facto. It’s the data, stupid.

      • WHT

        This has been discussed ad nauseam on this blog and elsewhere, most notably in an exchange with Chris Ho-Stuart, but you have apparently failed to understand, so I will repeat for you.

        The Hansen 1988 forecast to which I am referring is:
        http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf

        Check Figure 3 (this is the graph, which I posted earlier, with actual temperatures added – but will post again):
        http://farm3.static.flickr.com/2537/5738998081_b3b3e55049_b.jpg

        Hansen’s 1988 study stipulated:

        Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth rate averages about 1.5% of current emissions, so that the net greenhouse forcing increases exponentially.

        Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level.

        Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

        Based on CDIAC data, the actual CO2 (principal GHG) emission growth rate increased from 1.5% in the 1970s and 1980s to 1.7% from 1988 to today, so the actual rate of increase was actually around 13% greater than that assumed by Hansen for Scenario A.
        http://cdiac.ornl.gov/ftp/ndp030/global.1751_2008.ems

        Obviously, Hansen’s Scenarios B and C are way off the mark, as far as GHG emissions are concerned.

        In other words, Hansen’s model-based Scenario A grossly overestimated the GH warming that would result, thereby invalidating his climate model’s ability to correctly project GH warming.

        Hansen’s climate model was NOT validated by the actual temperature measurements. His projection turned out to be grossly exaggerated. His model is (using Jim Cripwell’s terminology) non-validated.

        One can argue about WHY Hansen’s models failed.

        I personally think it is very likely because he used a climate sensitivity estimate that was high by a factor of 2 or more.

        Actual warming turned out to be the same as Hansen’s Scenario C, based on the complete shut down of GHG emissions in 2000 “such that the greenhouse climate forcing ceases to increase after 2000”. But this did not happen, did it?

        The point I made to Vaughan (which he has not been able to refute) is simply that a model (like that of Hansen) can either be validated (or corroborated) by a projection that turns out to be correct, or invalidated by one that turns out to be incorrect, as Hansen’s 1988 projection clearly was.

        Max

      • Chris Ho-Stuart

        I can’t believe it!

        You are again trying to defend Hansen’s lousy 1988 projection.

        Unbelievable!

        All your waffling about year-by-year GHG emission rates is simply double-talk, Chris.

        Hansen clearly stated his assumptions on the rate of GHG emissions for all three “scenarios” (read his paper and you will see). I have quoted his assumptions in my earlier post to you, and re-posted this for WHT.

        The actual emissions of the principal GHG (CO2) were slightly higher than his assumed rate.

        Yet the temperature increase was only half of what he predicted.

        IOW Hansen’s projection was exaggerated by a factor of over 2X.

        Pretty lousy forecasting, Chris, no matter how you slice it.

        Max

        But, hey – if you and Web want to keep this discussion alive, go right ahead. It gives any sane lurker a pretty good idea how the AGW “believers” react when their forecasts are proven wrong by the facts – they wiggle, squirm and waffle – but refuse to admit that their forecasts were wrong.

      • Hansen’s Scenarios

        It’s quite clear that observations are closest to Scenario B.

      • I rarely pay attention to GCM simulations. I am referring to papers by Hansen in 1981 and 1985 where he was explaining heat diffusion. If you want to chase phantoms by interpreting GCM simulation fluctuations, knock yourself out. It will keep you busy for ages.

      • Here we had again a lengthy discussion about the role of models in science, where people who obviously don’t understand much about real science make up their own requirements for what’s acceptable. Jim was here on the extreme, and Max only slightly better.

        It appears clear that a fair amount of actual experience of doing science in the same or analogous fields is needed before people can make reasonable judgments on the value of specific scientific approaches. Jim and Max are far from alone in failing on that point; there must be at least twenty active contributors to this site who repeat the same error time after time.

      • Manacker, … I HAVE NOT EVEN COMMENTED on the predictions themselves. This is not about the prediction, but about the scenario. You are making basic mathematical errors and getting your descriptions of your own data hopelessly wrong.

        ALL I have done is to correct your mathematical errors about which scenario of forcings happened to fit the inputs best. That’s it. That is NOT a defense of the predictions.

        You’ve given the data, but you seem totally incapable of actually processing it. If you ever get around to actually processing the data for the scenarios — NOT FOR THE PREDICTION — then you will find scenario B is closest to the actuals.

        Here are manacker’s claims:

        Based on CDIAC data, the actual CO2 (principal GHG) emission growth rate increased from 1.5% in the 1970s and 1980s to 1.7% from 1988 to today, so the actual rate of increase was actually around 13% greater than that assumed by Hansen for Scenario A.
        http://cdiac.ornl.gov/ftp/ndp030/global.1751_2008.ems

        Here’s what you get for the emissions growth rate. (Fit an exponential; or do a linear fit to the logarithm; either works.)

        1970-1980: 2.7%
        1980-1990: 2.0%
        1990-2000: 1.0%
        2000-2008: 3.3% (end of data)

        See that lull in the 1990s? It MATTERS. This is not a steady growth rate, and taking the average growth rate doesn’t work either.
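The “linear fit to the logarithm” mentioned above can be sketched in a few lines. (A hypothetical stand-in series is used here, since the CDIAC file itself isn’t reproduced in this thread; the method, not the data, is the point.)

```python
import numpy as np

def growth_rate(years, values):
    """Annual exponential growth rate of a series, estimated by a
    linear fit to its logarithm (equivalent to fitting an exponential)."""
    log_slope = np.polyfit(years, np.log(values), 1)[0]
    return np.expm1(log_slope)  # convert log-slope back to a fractional rate

# Hypothetical stand-in for a decade of CDIAC-style emissions data,
# constructed to grow at exactly 2% per year.
years = np.arange(1990, 2001)
emissions = 6000.0 * 1.02 ** (years - 1990)
print(round(growth_rate(years, emissions), 4))  # → 0.02
```

Running the same fit over each of the decade windows listed above is how one reproduces (or checks) the quoted per-decade growth rates.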

        Hansen’s input used scenarios for numbers from 1985 onwards. The weak increase shows up through to about 2002. That dominates the scenario period; the subsequent stronger growth simply is not strong enough to bring the actual levels up to what was expected for scenario B — let alone scenario A.

        Here’s something else you can try. Emissions in 2008 were up at 8749. Back to 1985 is 23 years, and with a 1.5% per annum increase you’d have to start out at about 6200 in 1985; a level it didn’t reach until the early 1990s.
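That back-of-the-envelope check is just compound growth run in reverse; the arithmetic can be verified in one line (8749 is the 2008 emissions figure quoted above, in the units of the CDIAC file):

```python
# Back-project the 2008 emissions figure quoted above (8749) to 1985,
# assuming 1.5% per annum growth over the intervening 23 years.
level_2008 = 8749.0
implied_1985 = level_2008 / 1.015 ** 23
print(round(implied_1985))  # → 6212, i.e. about 6200 as stated
```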

        And worse: it is CUMULATIVE emissions that really matter.

        Don’t be such a lazy SOB. Get your spreadsheet, crunch the numbers for yourself. Your descriptions of your own data are totally incorrect.

        And THEN, if you want to get serious about evaluating predictions, throw away the emissions data altogether and use the atmospheric levels. THAT’S the point of the scenario and of the input to the model. To get really serious, note that the scenarios are not only for CO2.

        But the CO2 is enough to show that you are being an idiot. Go on, get your spreadsheet out and run the numbers.

      • Chris,

        I have gone through the same argument with Max (manacker). Before that I thought that he was here to learn something, but after that argument, and with later evidence, I have concluded that it was a false hope. He doesn’t want to learn, only to argue, even after being shown to be in error.

  123. Chief Hydrologist

    Vaughan is not funny – I guess he means that we haven’t fallen off the climate precipice in at least a thousand years. This turns conventional wisdom on its head by claiming that those who can learn from history are doomed to repeat it. Although even a millennium seems a myopic perspective. It is a bit like the laws of motion – stopping at number 2 seems shortsighted for a physicist.

  124. I guess he means that we haven’t fallen off the climate precipice in at least a thousand years.

    Correct. The hotly contested question of whether MWP was equal to 1910, 1950, 1980, or 2012 in temperature becomes an irrelevant sideshow when one takes into account that by comparison to today’s disequilibrium event the MWP was an equilibrium state. (Glad to see Pekka drawing that distinction—I’ve been writing about event-state duality since the early 1990s and tend to forget how long it took me to see it.)

    This turns conventional wisdom on its head by claiming that those who can learn from history are doomed to repeat it.

    Glad it’s your logic that leads you to that conclusion and not mine.

    Although even a millennium seems a myopic perspective. It is a bit like the laws of motion – stopping at number 2 seems shortsighted for a physicist.

    For those planning for the year 4000 I could be persuaded that ignoring 0-1000 is “myopic.” Personally I don’t see much point in planning more than a human lifetime ahead, let alone the 100,000 years David Archer’s The Long Thaw talks about. Who’d have guessed in 1935 what 2012 would hold for us? Orwell and Clarke wrote their “year books” in respectively 1949 and 1968, which bore no resemblance whatsoever to either 1984 or 2001 yet which were less than half a lifetime into the future at their respective dates of writing.

    • Chief Hydrologist

      Have you got that darn pesky Newton’s 3rd law down yet?

      I was just interpreting for the great unthonged masses. There seemed to be a glimmer of meaning in there, although well camouflaged.

      Climate repeats because of ergodicity. If you know what it was – within uncertainties – you know what it is going to be. I don’t share your conviction. I have often linked to the NAS committee report on abrupt climate change – ‘Abrupt Climate Change: Inevitable Surprises’. I share their convictions. Frankly I find anyone who doesn’t love natural philosophy (and cowgirls and bourbon whisky) – and refers to signal processing in climate – to be a dullard and a bore.

      • Climate repeats because of ergodicity.

        Presumably you mean because of lack thereof. An ergodic system is least likely to repeat because all of its dynamically feasible states are equally likely to be visited in the long run.

        As a hypothetical non-ergodic climate scenario, if the iron core of the Earth were somehow oscillating in a way that pumped hot magma upwards in a regular fashion that periodically heated the thin crust under the oceans, climate would repeat itself on that account. Cyclic phenomena of that kind are at the opposite extreme from an ergodic behavior.

        As another example, sunspots repeat on a very regular basis, correlated with fluctuations in the Earth’s temperature. The extreme regularity of the solar cycle is highly non-ergodic, both in its own right and in its influence on Earth’s climate, which the odd-numbered cycles heat.

        The converse, non-repeating implies ergodic, need not hold. On a timescale longer than a year, the level of atmospheric CO2 has not repeated for at least half a century and very likely a couple of centuries. Yet CO2 is growing very predictably and therefore far from ergodically on any timescale between 10 and 60 years, and in all likelihood up to a century.

        During a glaciation CO2 gradually declines over 90,000 years or so. That decline itself is non-ergodic, but when the CO2 is suitably detrended it becomes an interesting question how ergodic its residual behavior is.

      • Have you got that darn pesky Newton’s 3rd law down yet?

        You mean the law that for every reactionary there is a counterbalancing liberal? ;)

      • Maybe a system could be developed to number people’s pet theories. We could have a tracking system to see whose pet theory was predicting what correctly.

        Vaughan seems correct that our guesses about future conditions are pretty inaccurate more than a few decades ahead.

      • Frankly I find anyone who doesn’t love natural philosophy (and cowgirls and bourbon whisky) – and refers to signal processing in climate – to be a dullard and a bore.

        This a climate site, CH. You can find your soulmate on a dating site.

      • Vaughan I think has it right. The question is really a very deep theoretical one. There is some recent theoretical work on the dimension of the attractor for the Navier-Stokes equations, and the bound goes like the Reynolds number squared, a very large number. Maybe better bounds are possible.
        I think it’s likely the Chief is right about abrupt changes, though. I’m still conflicted about simple models and their usefulness.

      • Markus Fitzhenry

        “”today’s disequilibrium””

        Why the heck would you want to even try to have a rational argument with somebody as myopic as that, Chief Hydro? You have better things to do with your time, like solving the great mysteries of the oceans. They are religious zealots; why go down to their level and try to rescue them?

        Oh, that’s right, I forgot mate, You’re Aussie, so must be humanist.

      • Vaughan seems correct that our guesses about future conditions are pretty inaccurate more than a few decades ahead.

        Thanks, Rob. As a caveat I should point out that my examples were for technology forecasts. Forecasts of human energy use and natural climate cycles might be stretchable a little further, say to 70 years rather than 40, as they both seem to obey fewer laws than technology, which is what human creativity has largely focused on in recent centuries that has any relevance to climate.

        Creativity expressed through cultural icons like dada, 12-tone row, existentialism, and heavy metal seems less climate relevant. On the other hand traditional ideologies may be having some impact on climate out of all proportion to the creativity involved, for example the unfettered rights to buy and burn oil as respective gifts to America from Islam (as producers) and the Christian Right (as consumers).

      • Just curious, predictions made 100 years ago, 1912, utilized something: a pencil-and-paper model?

        How do we know computer models of the climate cannot make accurate predictions 100 years out?

      • JCH

        You ask:

        How do we know computer models of the climate cannot make accurate predictions 100 years out?

        A knee-jerk answer that has some validity would be, “if IPCC computer models cannot even make realistic predictions 10 years out, there is no reason to believe that they will do any better 100 years out”.

        But, of course, what will actually happen 100 years from now can only be ascertained with 100% certainty by waiting 100 years and checking the actual data then.

        But one can get a good idea without waiting the full 100 years.

        An example (which has been discussed here already).

        IPCC has made temperature projections for the end of this century based on continued human GHG emissions (principally CO2) over the next several decades.

        Let’s take a “middle of the pack” IPCC SRES model-based “scenario and storyline” representing “business as usual”: very rapid economic growth, human population continuing to grow but at a slower rate, leveling off at around 10.5 billion by the end of the century, and no “climate initiatives”.

        This scenario (A1B) has CO2 increasing from today’s 390 ppmv to 706 ppmv and temperature increasing by around 2.6°C above today’s level (= 2.8°C above the 1980-1999 mean level).

        Is this reasonable?

        IPCC has given us a curve showing how this warming will develop, so we can check the actual warming against this curve.
        http://farm8.staticflickr.com/7065/6855308561_4a4c435d4e_b.jpg

        As can be seen from the graph, the IPCC models projected warming of around 0.2°C per decade for the first two decades of this century (a projection which IPCC also clearly stated in a separate paragraph of its AR4 WG1 SPM for a range of SRES emission scenarios).

        Are we on track so far?

        Obviously not (although CO2 has continued to rise exponentially and is now at record high levels).

        But actual global average temperatures for the first 11 years of the new century have declined by around 0.1°C instead of warming at twice that rate, as projected.

        Any normal forecaster would look at this trend and seriously consider revising the rest of the forecast to reflect what has actually happened to date: both by shifting the entire curve down by the current shortfall and by revising the slope downward to reflect actual observations more closely.

        Whether one would arrive at a revised end-of-century warming projection (above today’s lower actual value) of 0.89 times the old forecast = 2.3°C, adjusted downward for the first decade’s 0.3°C shortfall to 2.0°C, or whether one would also reduce the slope of the curve to arrive at a forecast of 1.0°C or less, would be a matter of judgment.

        At any rate, one would revise the forecast downward significantly to take into account what has actually occurred over the past 11 years.

        I do not believe IPCC will do this.

        I personally believe that IPCC will ignore the actual lack of warming (i.e. cooling) and stick with its exaggerated forecasts in the hopes that things will turn around again until it becomes painfully obvious that these are unrealistic and that the IPCC models have lost all credibility.

        But who knows?

        Max

      • @DY: I’m still conflicted though about simple models and their usefulness.

        Do you have such a conflict about complex models? I do, more than about simple models.

        Simple models are only useful for simple phenomena. But what is a simple phenomenon? Why, one with a simple model of course.

        If every phenomenon, whether of duration a microsecond or a million years, had a unique model, we could tell which phenomena were simple in this sense just by examining their models.

        But they don’t, they can have lots of models. If all the known models of a phenomenon are complex, that does not of itself prove that the phenomenon is intrinsically complex, since it may have an as yet undiscovered simple model. It only proves that we don’t yet understand the phenomenon very well.

        My misgivings about complex models derive from the idea that the more complex the model of a phenomenon the less well we understand it. In Machine Learning (a subject served by entire departments these days) this is called overtraining.

        Long-term climate for example. It may look complex, just as the spectral lines of a gas looked complex before the 20th century cleared up that structure. But are we absolutely and positively certain it cannot have a simple model? Has anyone looked for one with the diligence that early 20th century physicists looked for an explanation of the spectral lines of gases?

        Heisenberg found the explanation of spectral lines in 1925, in the form of modern quantum mechanics, after physicists of the caliber of Rutherford, Planck, Einstein, and Bohr had searched in vain for decades.

        The argument that climate is many orders of magnitude larger than gas molecules could equally well be applied to show that there can be no coherent explanation of the epicycles comprising planetary orbits. It took a couple of millennia before Copernicus, Galileo, Kepler, and Newton (in that order, in case you thought Galileo predated Copernicus) were able to radically simplify the universally held view that planets, like God, moved in mysterious ways.

        And it’s not like the Greeks, or the two millennia of astronomers, mathematicians, and scientists that came after them were blithering idiots. They were simply pessimists who couldn’t imagine that anything as obviously complex as epicycles could be dramatically simplified with a simple model of planetary orbits.

        Climate science today is at about the same level of pessimism as were astronomers before Copernicus, physicists before Heisenberg, and molecular biologists before Crick and Watson.

        Long-term climate is too complex to nail, they all cry.

      • @Markus: Oh, that’s right, I forgot mate, You’re Aussie, so must be humanist.

        Not a reliable basis for distinguishing Chief Hydrologist and me. Try us with bourbon vs. a reputable 18-year-old scotch. He seems to have more experience with cowgirls than me though, unless he just never advanced beyond his teenage fantasies. ;)

      • Hydro, I’d guess that you have more *luck* with cowgirls than Prat, too.

      • @vaughan

        ‘Creativity expressed through cultural icons like dada, 12-tone row, existentialism, and heavy metal seems less climate relevant’

        I will search diligently through my archives to gather the collected wisdom of Lemmy from Motorhead and Ozzy Osbourne from Black Sabbath on the subject.

        But I fear, like climate change and peace studies, I may come up with only a limited (i.e. zero) linkage.

        Can I have a grant please?

      • Vaughan,

        I agree that complex models are never without problems and that simple models are, in the best cases, vastly better.

        Comparing the case of atomic or molecular spectral lines with the Earth system is, however, grossly misleading. A large system that consists of such essential structures as continents and oceans cannot ever be described well with simple models. If some simple model ultimately turns out to apply well to some global variables, that’s not likely to be derivable from first principles but will remain a phenomenological observation.

        For this reason I don’t have much hope that any simple model can give a good overall description of the Earth system, except as a parameterization whose approximate validity is confirmed over some limited range by comparison with empirical data covering that whole range – or with a complex model that has been validated well enough by some set of empirical data. In the best case a complex model may be validated by extensive data even when the data is sparse and diverse. (Whether this best case is within sight is another matter.)

    • Vaughan Pratt

      You differentiate between the climate conditions of the MWP and today by saying that the MWP was at an “equilibrium” state (while that of today is – in your opinion – not).

      I’d say that using the word “equilibrium” when talking about our planet’s climate is totally silly to start off with.

      It’s like using the term “stabilized” when discussing “chaos”.

      The climate of today, as well as that of the MWP has changed chaotically from day-to-day and year-to-year as a result of factors, many of which we are totally unaware of today.

      To pretend otherwise is folly.

      Max

      • Max,

        You’re absolutely right: there is never equilibrium. There is only more or less disequilibrium.

        Any threshold for “disequilibrium” is purely arbitrary. So I need to be more precise in my language.

        I’m interested in long-term climate, by which I mean periodicities longer than quarter of a century. And when I use the word “disequilibrium” I’m referring to a rate of rise of at least a degree a century sustained over much longer than quarter of a century, say half a century for definiteness.

        The last half century has seen a rate of rise of 1.4 °C/century (0.014 °C/year) as shown here, well above my threshold for disequilibrium.

        If you have a different meaning for “disequilibrium,” substitute some other word like xyzzy for what I mean by “disequilibrium.”

        The climate of today, as well as that of the MWP has changed chaotically from day-to-day and year-to-year as a result of factors, many of which we are totally unaware of today.

        No argument there. But can you point to a half century at any time in the past millennium where fitting a trend line gives a slope (up or down) of anywhere near a degree per century (0.01 degrees per year or 0.1 degrees per decade)?

        If not then (regardless of what actual words we might use to describe it) there is a big difference between now and the MWP.
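The comparison being asked for here is an ordinary least-squares trend over a 50-year window, expressed in degrees per century. A minimal sketch, using an idealized input series since no actual dataset is attached to this thread:

```python
import numpy as np

def trend_deg_per_century(years, anomalies):
    """Least-squares slope of an annual temperature series,
    expressed in degrees C per century."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 100.0 * slope_per_year

# Idealized check: a series rising at exactly 0.014 C/year over
# the half century starting in 1962 comes back as 1.4 C/century.
years = np.arange(1962, 2012)
anomalies = 0.014 * (years - 1962)
print(round(trend_deg_per_century(years, anomalies), 2))  # → 1.4
```

Applied to a millennial reconstruction, one would slide this 50-year window along the series and ask whether any window’s fitted slope approaches ±1 °C/century.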

      • Vaughan Pratt


        The last half century has seen a rate of rise of 1.4 °C/century (0.014 °C/year) as shown here, well above my threshold for disequilibrium.

        You are arriving at a warming rate by arbitrarily selecting the starting year.

        If you select as your starting year 1880, the global warming rate is 0.6 deg C per century =>http://bit.ly/zzv2DL

        Is this “TRUE” warming trend above your “threshold for disequilibrium”?

        Don’t you think in order to remove the effect of oscillation in the global warming trend you need to consider the longest period possible?

      • Vaughan

        I can accept most of what you have written in response to my earlier post.

        But we now have your (somewhat arbitrary) definition:

        “disequilibrium” I’m referring to a rate of rise of at least a degree a century sustained over much longer than quarter of a century, say half a century for definiteness.

        Here you cite actual data

        The last half century has seen a rate of rise of 1.4 °C/century (0.014 °C/year) as shown here, well above my threshold for disequilibrium

        But you do not have such data on a global basis for any comparable time period during the MWP, because these data do not exist.

        Whether or not there were similar time periods of rapid increase is not known.

        So whether or not our planet’s climate was in disequilibrium according to your definition at some point during the MWP is also unknown.

        Am I right?

        Max

      • So whether or not our planet’s climate was in disequilibrium according to your definition at some point during the MWP is also unknown. Am I right?

        Max, without claiming to speak for those more familiar with the MWP than I am, I don’t have any data of my own that contradicts you. All I’m aware of are the various principal components that Mann and McIntyre have been beating each other over the head with. If either party has the correct PCA, then there was nowhere near such an increase.

        Do you have any reason why I should believe either Mann or McIntyre? If not then your point is well taken: temperature may well have changed faster than either of them give it credit for.

        At the same time, however, I would be amazed to learn that 10 gigatonnes of CO2, from any source, had been emitted into the atmosphere each year on a sustained basis during the MWP, or even 1% of that, unless Ming the Merciless on Mars had been replacing the nitrogen he was stealing from Earth with CO2.

        So while I don’t have any upper bound on the rate of change of temperature during the MWP, beyond what we can deduce from the PCAs of Mann and McIntyre and their respective supporters, I would claim an upper bound on the rate at which CO2 was increasing back then. In particular I would claim based on paleoclimatology that it could not possibly have been increasing at even 0.1% per annum over half a century in those days.

        Furthermore I’m not aware of any phenomenon of atmospheric physics other than rapidly increasing greenhouse gases that could account for a rapid increase in temperature sustained over half a century. Such a phenomenon if proven correct would be a very important, not to mention dramatic, contribution to climate science.

        Sorry that wasn’t a simple yes-or-no answer to your question. Happy to replace it with a “no” if you prefer. ;)

      • You are arriving at a warming rate by arbitrarily selecting the starting year [1962]. If you select as your starting year 1880, the global warming rate is 0.6 deg C per century =>http://bit.ly/zzv2DL

        Girma, my sincere apologies if I failed to draw sufficient attention to my definition of “long-term climate” as “more than quarter of a century.”

        Had I used that as a precise boundary, that would take us back to 1987, which shows an increase of almost exactly 1.4 degrees per century. But then people like you would scream bloody murder that I was picking an arbitrary year.

        To fend off such accusations I deliberately doubled my threshold for “long-term” so people would not make that complaint. Do the math: 2012 − 50 = 1962. (Oddly enough, doubling from quarter of a century to half a century still gives almost exactly 1.4 degrees per century. Amazing how constant that’s been!)

        But still you complain: 1962 is an “arbitrary year.”

        Ok, Girma, two can play at that game. You proposed to replace 1962 by 1880. But whereas I had offered a reason for 1962, namely half a century ago, your suggestion of 1880 came with absolutely no rationale at all. You seem to have just picked it out of a hat.

        Had you picked a propitious date such as the birth year of the principal prophet of the religion of your favorite movie star, or the Big Bang, I could have understood your reasoning. But 1880? Did you throw a dart or what?

        At least throw the dart into a region where CO2 burning was seriously under way. This was not the case for either the Big Bang or the start of any well-entrenched religion.

        Girma, you’ve been trying for years to prove that CO2 doesn’t impact climate by giving undue weight to years where CO2 wasn’t changing significantly. You embarrass the 50% of climate skeptics that are above average in intelligence in their cohort by undermining their well-intentioned cause with your illogical methods. Where did you say you got your Ph.D.?

      • Vaughan Pratt

        We have been discussing whether or not there were prolonged (i.e. 50+ year) periods of change in global average temperature during the MWP, at warming rates, which were equivalent (or greater) than those seen in the second half of the past century.

        My point was simply that we do not know, as we have no hard data based on actual physical observations.

        You do not disagree with my point, but now throw in changes in atmospheric CO2 as evidence that the current rate of warming must have been greater than that occurring during any 50+ year period of the MWP, when there was no such increase in human GHGs.

        I think you realize that this is a postulation, which is not based on actual empirical data but rather on theoretical deliberations supporting the GH hypothesis, as it has been interpreted by IPCC models.

        The truth of the matter is, Vaughan, that we do not know whether or not there were significant periods of warming during the MWP that were as rapid as those of the latter 20th century.

        But we have already gone over that and apparently agree.

        Max

      • Vaughan Pratt

        Girma brings up a very good point, which may be a factor in our inability to see the past temperature record in the same light.

        I see this record and, like Girma, am hit in the face by its cyclical nature: periodic warming and cooling cycles with an apparent half-cycle time of ~30 years and an amplitude of around +/-0.25 degC, all on a tilted axis with a slope of 0.6 to 0.7 degC per century. I read various reports regarding ocean current oscillations, solar influences and other possible natural contributors to these cycles. The most recent “lack of warming”, despite CO2 levels reaching record heights, appears to me very likely to be the start of a new multi-decadal cycle of slight cooling, as Girma has also suggested.

        You see this record and see the 1970-2000 period not as “just another indistinguishable (natural?) warming cycle”, but as evidence of GH warming. In fact, you probably do not even see a cyclical trend in the record, but rather see some sort of accelerated warming as CO2 levels have increased. But I may be wrong on that.

        So, to test my hypothesis here, my question to you would be:

        Do you think our planet’s global average temperature has gone through multi-decadal cycles of warming and cooling of about 30 years each over the past record?

        Girma has stated that this is his conclusion after analyzing the data, and I would agree – but it would be interesting to hear whether or not you would also agree.

        [This is not a trick question.]

        Max

      • The truth of the matter is, Vaughan, that we do not know whether or not there were significant periods of warming during the MWP that were as rapid as those of the latter 20th century. But we have already gone over that and apparently agree.

        Max, if some genie were to pop out of a magic lamp and claim full understanding of the greatest rate of change of temperature over any half-century period during the MWP, and were to tell me that if I guessed wrong as to whether that ever exceeded a rate of one degree a century I would be turned into a lump of coal for a future Christmas stocking, I would have no hesitation in saying no such rate had been achieved in any such half-century period.

        Many in the past hundred centuries have guessed wrong in such situations. Just picture me as one of them.

      • Do you think our planet’s global average temperature has gone through multi-decadal cycles of warming and cooling of about 30 years each over the past record?

        Yes, this is very clear from the record.

        What’s also clear is your focus on those cycles to the exclusion of the equally clear fact that there has been a rise between 1850 and now.

        A scientist would find both phenomena equally interesting. Your focus on the cycles gives you away as a climate denier.

        Just to be clear, I have only scorn for bloggers who neglect these cycles to the same degree you neglect the rise.

  125. Chief Hydrologist

    ‘The chaotic features may also be smoothened and effectively removed by stochastic influences. If stochasticity dominates over chaotic properties ergodicity is a better assumption over finite time intervals. If essential and effectively random transitions between significantly different attractors dominate, then ergodicity is not a good assumption and the problem is inherently more difficult.’ Pekka

    It is one of those slippery terms of which you are so fond and I never guess past the next exponentially diverging climate shift.

    It is all interconnected – winds, waves, currents, the twirling blue/green planet, the polar vortices spinning off storms, icebergs spiralling out of the polar regions, water plunging to the abyss and re-emerging a millennium later rich in nutrients in the eastern Pacific. If you don’t understand what climate is – how can you have a theory of it?

    • Rob,

      We all have our mental pictures. Some of them are much more complex and multifaceted than we can ever describe in blog comments. We may just hope that some of the readers can read the hints and be led to some understanding of what we are talking about.

      You list several phenomena in your last paragraph. By the dominance of stochastics I mean that they vary in a way that allows for stable averaging over reasonably short periods, while weak stochasticity might allow for the formation of persistent attractors that make averaging almost irrelevant over a wide range of time scales.

      Lorenz type chaos is a phenomenon that is true only for simple systems of few dominating variables, which follow their dynamical equations with high precision without significant stochastic disruptions.

      • Chief Hydrologist

        Pekka,

        I don’t understand stochasticism in the Earth system. At some level cause and effect is always in play – so the system is always deterministic and stochastic averaging is merely a statistical convenience – when applicable – rather than a physical reality.

        Chaos, however, is about observations of the system behaving in ways typical of the broad class of deterministically chaotic systems. ‘Slowing down’ and ‘noisy bifurcation’ especially. Technically – an increase in autocorrelation as the system approaches bifurcation and non-linear variability at bifurcation.

        Lorenz type chaos is true for the Earth system – but it is not of course a simple Lorenz attractor.

        http://arxiv.org/abs/1007.1376
        http://www.pnas.org/content/105/38/14308

      • Interesting papers

      • Rob,

        Let’s take an example on a smaller scale. Formation of eddies has characteristics of an attractor. Eddies start to grow and reach a scale that is huge compared to molecular phenomena. Almost all eddies dissipate soon, however, and looking at a larger scale they form just stochastic disturbances whose combined effect can be approximately described by some deterministic turbulence model like the k-epsilon model.

        A butterfly in Amazonas has been described as a potential initiator of major weather patterns on other continents. That’s nonsense, because effectively stochastic disturbances kill its effects very soon, perhaps within centimeters from the butterfly.

        That’s the fate of most local disturbances, but there are also collective modes that have the potential of causing long-persistent and major variability in weather patterns and even in climate. That’s possible, but it’s not guaranteed to happen. Tomas discussed spatiotemporal chaos implying that it’s certain to have major effects (or at least that is how I have interpreted him). Although the existence of spatiotemporal chaos is more or less certain, it’s not certain that it will lead to persistent major effects. The other possibility is that all those effects will be killed soon by chaotic disturbances.

        A very small stochastic term will totally kill the Lorenz attractors. They are properties of simple and fully deterministic equations. Stochasticity will easily couple various paths so strongly that no separate attractors will ever be formed.
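        [The Lorenz-63 system the commenters keep invoking is small enough to integrate in a few lines. The sketch below illustrates the property actually in dispute – deterministic sensitive dependence – by following two trajectories that start one part in 10^8 apart; the parameters are the standard σ=10, ρ=28, β=8/3, and forward Euler is used purely for brevity.]

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the classic Lorenz-63 equations.
    x, y, z = s
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])    # perturbed by one part in 10^8
early = 0.0
max_sep = 0.0
for i in range(1, 8001):              # integrate to t = 40
    a, b = lorenz_step(a), lorenz_step(b)
    sep = float(np.linalg.norm(a - b))
    max_sep = max(max_sep, sep)
    if i == 200:                      # at t = 1 the two runs are still
        early = sep                   # observationally indistinguishable
print(f"separation at t=1: {early:.2e}; max separation by t=40: {max_sep:.1f}")
```

        The initially negligible separation eventually grows to the size of the attractor itself – which is the butterfly effect in the idealized, noise-free model. Whether stochastic disturbances wash this out in the real atmosphere, as Pekka argues, is not something this toy calculation can settle.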

      • Concerning those papers I only note the analogy with critical points and phase transitions.

        Normal phase transitions are approached without any major change in correlation length, but in the neighborhood of a critical point correlation lengths grow and even diverge. Also temporal autocorrelations behave similarly. This means that the restoring forces are weak, allowing deviations from the average state to grow larger and persist longer. A tipping point may well have similar properties.

        In spite of this analogy, the argument that the recent stability of the climate system would be a reason to worry seems farfetched.

      • I had a further rapid look at the papers. It turns out that they are discussing tipping points in a dissipative system, i.e. chaotic behavior has little or no role in the considerations while stochasticity is essential.

        There are some connections to chaos theory in the approach to the tipping point, but they play very little role in the analysis that the papers present.

        Both papers have the word chaos only in the title of one reference. In both cases the title refers also to nonlinear dynamics, which may better reflect the reason for referring to those books.

      • Chief Hydrologist

        Pekka,

        ‘The Earth’s climate is what dynamicists would call a dissipative system, and for this the bifurcations that can be typically encountered under the variation of a single control parameter are classified into three types, safe, explosive and dangerous (Thompson et al., 1994; Thompson and Stewart, 2002).’ Op. Cit.

        A point on butterflies is that the metaphor comes from the visualisation of the bifurcated phase space, but it is not a wrong metaphor at any scale.

        What you describe is dissipative rather than stochastic. In terms of turbulent eddies – as from the butterfly’s wings – those in a dissipative system can dissipate quickly or be reinforced by feedbacks to become a cyclone. The chance of a butterfly initiating a cascade of feedbacks leading to a cyclone seems remote – but it is certainly possible from larger-scale eddies driven by Coriolis effects.

        The point of saddle node bifurcation in a dissipative system is that most eddies will dissipate but that occasionally things will tip over into a ‘dangerous’ bifurcation.

        Neither of the papers I cited suggested that enough is known to predict a coming tipping point. They were investigating approaches to diagnosis rather than making alarmist predictions.

        Robert I Ellison
        Chief Hydrologist

      • Dissipation is due to stochastic phenomena.

      • Chief Hydrologist

        Pekka,

        We have some crossed replies. I can’t help but think that you are playing with words now. Both papers examine tipping points ‘assuming that tipping points may well be governed by a bifurcation in an underlying dynamical system, recent work looks for a slowing down of intrinsic transient responses within the data, which is predicted to occur before most bifurcational instabilities (Held and Kleinen, 2004; Livina and Lenton, 2007). This is done, for example, by determining the propagator, which is estimated via the correlation between successive elements of the time series, in a window sliding along the time series. This propagator is a measure for the linear stability. It should increase to unity at tipping.’
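        [The ‘propagator’ in the quoted passage is just the lag-1 autocorrelation computed in a sliding window; its rise toward unity is the ‘slowing down’ signal. A minimal sketch on synthetic data – an AR(1) process whose persistence is ramped up to mimic an approach to instability, an assumption made for illustration rather than a claim about the papers’ actual data:]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
# AR(1) coefficient ramps from 0.2 toward 1, i.e. the "restoring force"
# weakens as the hypothetical system approaches its bifurcation.
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.standard_normal()

def lag1_corr(w):
    # The propagator estimate: correlation of successive elements.
    return np.corrcoef(w[:-1], w[1:])[0, 1]

window = 500
early = lag1_corr(x[:window])       # far from the tipping point
late = lag1_corr(x[-window:])       # close to it
print(f"propagator early: {early:.2f}, late: {late:.2f}")
```

        On real records the difficulty is distinguishing such a rise from ordinary nonstationarity, which is why the papers treat it as a diagnostic candidate rather than a prediction tool.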

        Bifurcation is a characteristic of dynamically complex systems – and ‘dynamically complex’ is semantically equivalent to ‘deterministically chaotic’. It is a matter of terminology only. Chaos theory applies to many systems – populations, economies – and not just simple Lorenz attractors.

        I don’t particularly understand your point – that they use ‘slowing down’ and ‘noisy bifurcation’ rather than some unspecified other property of chaotic systems?

        The approach by Tsonis uses modern indices and is conceptually very similar. It uses the language of synchronicity and extreme events for an increase in autocorrelation followed by a noisy bifurcation. It is the language of dynamical complexity – which, as I say, is semantically equivalent to deterministic chaos.

        We get the language of ‘dragon-kings’ from Sornette – which are extreme events associated with the region of a bifurcation. The simple idea of Lorenz attractors is not of much use because the attractors can’t be defined with any rigour. Instead there are properties that may be more useful in diagnosis.

        I don’t know what your point is. We are talking about bifurcation which is a classic property of strange attractors in chaos theory.

      • Tomas Milanovic says Tsonis approach is consistent with spatio-temporal chaos, which is not the same as deterministic chaos (and synchronization is different from bifurcation). Tomas previous posts on this are here:
        http://judithcurry.com/2011/03/05/chaos-ergodicity-and-attractors/
        http://judithcurry.com/2011/02/10/spatio-temporal-chaos/

      • Rob,

        The papers are discussing tipping points that occur deterministically in a dissipative system, when forcings reach certain level. Nonlinearity of the system is essential, but there’s very little common with typical chaos, which occurs in a system with very little dissipation that affects the controlling internal variables and which is closely related to “ill-posed initial value problem”.

        The chaotic system can alternate between different attractors. The tipping points discussed with most emphasis in these papers are irreversible with hysteresis. (They can be reversed only through a major opposite change in forcings.)

      • Chief Hydrologist

        Dissipation is like breaking a set in snooker. The outcome is the result of classical mechanics – it is all force and vector – rather than statistical mechanics. You are thinking like a physicist not an engineer.

      • Chief’s Navier-Stokes has a diffusion term. Diffusion is a model of random walk. Random walk is a stochastic property. Therefore a significant fraction of what Chief is rebelling against doesn’t make a lot of sense.

        Many times the phenomena observed are due to a behavior called stochastic resonance. What happens here is that certain trajectories in the state space get randomly amplified, and because of that amplification become observable above the underlying steady state behavior.

        This concept was originally put forward in the seminal papers by Benzi, where they used it to address recurrent ice ages, and also Nicolis about the same time.

        The point is that stochastic properties provide a more fundamental foundation than Chief is asserting.
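        [Stochastic resonance as Benzi and Nicolis framed it can be sketched numerically: an overdamped particle in a double-well potential driven by a periodic forcing too weak to cross the barrier on its own, where adding noise is what produces the well-hopping. The potential, forcing amplitude, and noise level below are illustrative choices, not values from the ice-age papers.]

```python
import numpy as np

def run(sigma, seed=2, T=2000.0, dt=0.01, A=0.2, omega=2 * np.pi / 100):
    # Overdamped particle in the double well V(x) = -x^2/2 + x^4/4,
    # with sub-threshold periodic forcing A*cos(omega*t) (the static
    # escape threshold for this well is ~0.385), integrated by
    # Euler-Maruyama with noise strength sigma.
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 1.0                        # start in the right-hand well
    for i in range(1, n):
        drift = x[i - 1] - x[i - 1] ** 3 + A * np.cos(omega * i * dt)
        x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

quiet = run(sigma=0.0)                # forcing alone: trapped in one well
noisy = run(sigma=0.4)                # forcing + noise: well-hopping
hops_quiet = int(np.sum(np.sign(quiet[:-1]) != np.sign(quiet[1:])))
hops_noisy = int(np.sum(np.sign(noisy[:-1]) != np.sign(noisy[1:])))
print(f"well crossings without noise: {hops_quiet}, with noise: {hops_noisy}")
```

        The “resonance” proper is that the hopping is most coherent with the forcing at an intermediate noise level – too little noise and the particle stays put, too much and the periodicity is drowned out.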

      • Chief Hydrologist

        Hi Judith,

        Tsonis and colleagues used a network model to determine the ‘distance’ between the indices of major modes of NH climate variability. The distance is minimal at certain times – synchronisation – and then diverges again. At these times we have extreme ENSO events – ENSO dragon-kings in 1976/77 and 1998/2001 – after which climate settles into a new trajectory. So we have slowing down – autocorrelation increases in the ‘distance’ parameter – and noisy bifurcation.

        It is not of much use to compare the Earth system to simple Lorenz attractors. The Earth system has a far more complex state topology and an infinite phase space. To say that it is not a simple Lorenz attractor which readily moves between two attractors, and is therefore not chaotic, is illogical. There are much more complex systems that shift from state to state – and are ergodic in the sense that they will return to an original state if given enough time. Not a terribly useful concept over time frames of anything but academic interest.

        Tomas – I believe – distinguished between temporal chaos as embodied by models and the spatio-temporal chaos of the Earth system. Intrinsically different dynamically complex systems. But they still share properties with simple Lorenz attractors within the broad class of dynamically complex systems. They are systems that are non-linearly sensitive to small changes in control variables.

        I think Tsonis deserves a Nobel Prize for quantitatively showing how this works in modern climate records.

        Robert I Ellison
        Chief Hydrologist

      • Pekka, I think you are wrong about eddies. If you take the curl of the Navier-Stokes equations you will see that vorticity is conserved. Thus, vorticity can diffuse, but it cannot disappear except at the boundaries of the atmosphere. In a system like the climate, where forcing is continuous, you of course get constant generation of vorticity by the earth’s boundary layer and the Coriolis force. The weather models are correct in displaying the butterfly effect. They include most of the important dynamics even though they probably have far too much numerical dissipation, but this should stabilize the integration, not result in spurious reactions to small changes in initial conditions. You can only argue this based on some molecular dissipation that I doubt is physical. Conservation of mass, momentum, and energy pretty much describe the whole system at least for practical purposes. The current thinking in fluid dynamics, for which there is ample theoretical and computational evidence, is that the Navier-Stokes equations, if solved accurately, are very accurate for a wide range of flows, including turbulent flows.

      • Chief Hydrologist

        ‘In physics, the Navier–Stokes equations, named after Claude-Louis Navier and George Gabriel Stokes, describe the motion of fluid substances. These equations arise from applying Newton’s second law to fluid motion, together with the assumption that the fluid stress is the sum of a diffusing viscous term (proportional to the gradient of velocity), plus a pressure term.’

        So while containing a diffusion term – the behaviour of the equations of fluid motion are chaotic in the sense of Lorenz attractors. These equations describing turbulent flow are demonstrably – to a degree to which there is no possible doubt – non-linear.

        ‘In the last 30 years the scientific community did learn many different features for complex systems, starting with the pioneering works of Lorenz, Ruelle, Mandelbrot and others. In many cases, new ideas and tools have been introduced in order to “measure” complexity in an appropriate way. These tools can be used to reveal different features of underlying physical or biological mechanisms. In some sense, stochastic resonance is also a tool because it allows us to understand whether or not non linear effects can act in a cooperative way with the complex and chaotic behavior of a given system. Note that stochastic resonance is a mechanism in the full meaning of the word because it allows to get large effect from a small amplitude perturbation.’

        Stochastic resonance seems to be a particular application of sensitive dependence – and I must read Benzi with more care. The idea is that the signal must have a specific magnitude – not too big and not too small – to induce non-linear changes in the complex system. He takes care in a 2010 paper to defend the name as something more than an historic accident.

        These are still dynamically complex systems – the essential physics and maths of which you still stubbornly fail to grasp. In the case of glacials (note – not ice ages to distinguish the quaternary cycles from other periods) signals perhaps from orbital eccentricities triggering runaway snow and ice feedbacks.

        Pekka doesn’t deny dynamical complexity – or spatio-temporal chaos – in climate. Just the extent to which it is relevant. I just don’t know what to do with you Webby.

      • Chief, I’m not denying the existence of stochastic phenomena, just saying that they are almost certainly described by the Navier-Stokes equations. And the Navier-Stokes equations imply the butterfly effect. Pekka seems to be saying that some mysterious molecular dissipation will kill disturbances. Perhaps I don’t understand his post, but that clearly is not the case.

      • If we knew everything that is knowable, and were limited only by quantum uncertainty, I think it might be possible to derive a reasonably accurate description of climate behavior based strictly on mathematics. This would include the volume of phase space occupied by chaotic attractors, the consequent amplitude of temperature fluctuations, and the timescale over which any averaging might occur.

        Because we’re more ignorant than that, it seems to me that we need constantly to compare our range of possibilities to observed climate behavior in combination with basic principles of physics, to see which possibilities are consistent with the evidence and which are not. An interesting example is a model exercise simulating the twentieth century trajectory of temperature anomalies that might have ensued exclusively from the rise in greenhouse gases – Response to GHGs. Note that the three model runs exhibited substantial up and down fluctuations and divergences from each other, while nevertheless showing an overall warming trend that was not at all dissimilar to the observed warming, although with an excess of warming attributed to the lack of an aerosol cooling forcing term. The differences between model runs reflected chaotic behavior rather than the intervention of other climate agencies, because anthropogenic aerosols, solar changes, and volcanism were all omitted, and ENSO was smoothed out. The observational record, which of course was affected by these other variables, showed a greater divergence from a smooth curve during some intervals, but the difference from the simulations was not extreme. To me, these results, in aggregate, convey an idea of the timescale over which the chaotic fluctuations probably dominated (relatively short intervals), and the longer timescale over which the forced response to the GHGs was dominant. (A second implication is that pre-1940 warming included a significant GHG component even if it was not as important quantitatively as the declining volcanism from early to mid-century in determining the total warming).

        The notion that over the longer timescales, forced responses dominated (at least for the second half of the past century) is reinforced by data on Ocean Heat Uptake since 1955. Here again, the curves are not smooth, and exhibit temporary peaks, dips, and static intervals, but the overall increase that exceeded 10^23 joules is significant. This long term rise must inevitably have been a forced response, because there is no internal compartment in the climate system that could have transferred that much heat into the oceans.

        I know of no reason why the apparent averaging out of short term fluctuations over longer timescales dominated by forcing was an inevitable result that could have been derived without reference to the actual data. Mathematically, warming during the second half of twentieth century might have simply reflected long term unforced variability. The evidence tells us that was not the case (or only a small part of it), but in other centuries, with less forcing (or less persistent forcing), the relative contributions of internal and forced variability may have been different, and may be different in the future. Nevertheless, I don’t see evidence that the unforced variability of the kind we are aware of is particularly strong on multidecadal or centennial scales, even though its effects on temperature are large enough to play some role.

      • Chief Hydrologist

        David – I agree entirely – all there is is conservation of mass and momentum. Where is there room for the random? There is a diffusion term but it is in terms of velocity and viscosity – not statistical but dynamic.

      • We have some crossed replies. I can’t but to think that you are playing with words now

        You’re projecting, CH. You and I maybe, but I’ve never seen Pekka play with words.

      • @DY: Pekka, I think you are wrong about eddies. If you take the curl of the Navier-Stokes equations you will see that vorticity is conserved.

        Agreed (with the second sentence).

        Thus, vorticity can diffuse, but it cannot disappear except at the boundaries of the atmosphere.

        If the net vorticity was zero to begin with, the Navier-Stokes equations promise it will still be zero at the end.

        Eddies are local phenomena. The vorticity in the vicinity of an eddy can be nonzero even though the net vorticity for the whole system is zero.

        What your argument neglects is that two opposite vortices can annihilate.

      • David,

        Vaughan gave the most essential part of the answer (annihilation of vortices). Another point is that Navier-Stokes equation is an idealization that’s not fully accurate. It’s a very good equation, but still only an idealization that fails at some point. Our ability to solve the equation for turbulence fails first, but the failure of the equation itself is not far behind due to its nonlinear structure that amplifies the consequences of the small errors.

        The butterfly effect may be true in an idealized model, but it’s not significant in the real world. Saying the same in other words: There occur billions of butterfly effects every second. They cancel and amplify each other stochastically leading to an outcome, where any single one of them is totally insignificant.
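        [Pekka’s cancellation argument is the usual square-root-of-N scaling of sums of independent disturbances: the aggregate of N unit-scale random kicks grows only like √N, so any single kick’s share of the total shrinks like 1/√N. A toy illustration, with the number of “butterflies” chosen arbitrarily:]

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000                       # many independent "butterflies"
kicks = rng.standard_normal(N)      # each a unit-scale disturbance
total = kicks.sum()

# The aggregate disturbance is of order sqrt(N), not N: the kicks
# mostly cancel, so no single one is individually significant.
print(f"|sum| = {abs(total):.0f}  vs  N = {N}  and  sqrt(N) = {np.sqrt(N):.0f}")
```

        This is only the statistical half of the story, of course; the chaotic half is whether some rare collective mode escapes the cancellation, which is precisely what the thread disagrees about.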

      • This link is a relatively clear exposition of the concept of chaos as applicable to weather and to climate.

        http://www.skepticalscience.com/the-chaos-of-confusing-the-concepts.html

        In spatio-temporal chaos, however, initial conditions would seem to render the simple Lorenz-type system only marginally useful.

      • Chief Hydrologist

        Vaughan…old buddy…I am fading fast…the Jack Daniel’s is dropping faster than a drovers pants in a bawdy house…my blue horse…Stochasticity out of Vorticity…has been let out to pasture…the laptop is warmed up and I am thinking of writing a maudlin to a strong cowgirl I once knew….what was that scansion scheme again?

        Never would I treat Pekka with anything but the utmost respect. It was a mild remonstrance at denying the equivalency of what can be called a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point.

        You on the other hand I feel free to treat with the utmost disdain and when I play with words you will freakin’ know it.

        You haven’t quite got that third law right yet. It goes: for every activist there is an equal and opposite reactionary in a Bentley hitting the brakes too hard and turning static friction into kinetic friction. Whoops. So while Newton didn’t explicitly mention friction in the three laws – it does come into play when deciding whether or not you need a lawyer.

        ‘Ohhh I spent all my tin
        in a shanty drinkin’ gin
        across the western plains I must roam
        ’til i find a home in my little darlin’s arms
        across the western plain I must wander

        Now this is a couplet in iambic tetrameter with a triplet in iambic pentameter. Note in the latter the soft half-rhymes characteristic of a cowboy’s lament.

      • Chief Hydrologist

        If I keep educating you Vaughan – old buddy – you might become a half decent human being one day.

    • Exchanges between Pekka and CH are always informative and interesting

      I particularly commend Pekka on his accuracy and succinctness of language, even though his grammar sometimes demonstrates that English is not his 1st language – his constant precision shames many native English posters here. His posts, and the exchanges with CH, are admirable for their clarity

      I am yet to be convinced that stochasticity always over-rides chaos, but I do hope that such exchanges keep occurring. They present a genuine learning experience

      • I agree with everything that you wrote ianl8888, especially because the view that chaotic bifurcations are generally small in size and over-ridden by stochasticity would have no basis in reality that I can see.

      • @ianl8888 I am yet to be convinced that stochasticity always over-rides chaos

        Me too, but also of the opposite. Anyone care to explain?

        @PD the view that chaotic bifurcations are generally small in size and over-ridden by stochasticity would have no basis in reality

        Can’t argue with that since I don’t live in reality myself but in the dictionary world of definitions, where chaotic bifurcations are most definitely over-ridden by stochasticity regardless of their size.

        Objections couched in English are not raised in the real world but in the dictionary world. How dare the real world object to the dictionary world? ;)

      • :) :)

      • I haven’t claimed that stochasticity always over-rides chaos, I have only stated that that may be the case for much of the Earth system.

        Where I would look for essential chaotic phenomena is in very large scale very slow processes like those related to variability in ocean circulation. The very large scale and slowness are likely to reduce dissipation essentially leaving open the possibility of essential chaotic behavior.

    • Chief Hydrologist

      Fred,

      We are of course talking about two entirely different things. You are talking about internal climate variation which is code for rearranging energy between ocean and atmosphere. I am talking about fundamental shifts in Earth’s energy dynamic involving cloud, snow and ice. These shifts involve predominantly the Southern and Northern Annular Modes and the variability involving upwelling of cold water in the eastern Pacific – both ENSO and the PDO. These are the major modes of global climate variability over decades. Over longer times are mysterious episodes involving orbital eccentricities – much like yourself – thermohaline circulation and runaway ice and snow feedbacks. It could get freakin’ cold where you live within a decade. – http://www.whoi.edu/page.do?pid=12455

      Your comments leave me scratching my head because really I am not building models in math space in some abstract exercise but talking about observed natural variability. Storms and cloud spinning off the polar vortices into lower latitudes – the changes in sea surface temperature over vast areas of the Pacific. It strikes me as incomprehensibly odd that clouds should be expected to remain constant with striking natural variability in the climate system.

      What evidence we have suggests that they don’t. It can be seen in the CERES, ISCCP-FD and ERBE records. Here is Figure 7 from Wong 2006 plotting ocean heat content and ERBS net flux. The warming is all in the short wave offsetting cooling in the infrared.
      http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Wong2006figure7.gif

      We have had this discussion before – so what do you consider to be stronger? Where anywhere in the satellite records is there any evidence that clouds don’t change? Until we get some I am leaning towards not believing a word of it.

      Robert I Ellison
      Chief Hydrologist

      • Here I’m in the same camp with Rob. The most natural type of long term variability is in my view based on slowly varying changes in ocean circulation, which doesn’t necessarily involve major transfer of heat from one place to another but influences cloudiness and other large scale weather patterns and through that the net energy flux of the Earth system. In one phase of such variations the whole Earth is cooling, in another it’s warming. (Cooling and warming can also be relative to an overall warming trend caused by CO2 and other forcings.)

        The long term variability may be quasiperiodic, it may be due to irregularly spaced transitions, or it may be a combination of both. Similarly it may have a chaotic nature or lack that almost totally. I believe we are still totally ignorant of all the details mentioned in this paragraph.

      • Indeed, Robert, and Pekka, natural variability extends beyond ocean/atmosphere energy transfer, and not all of it is necessarily chaotic (there is still debate as to the chaotic contribution to ENSO). My main point above was to refer readers to the empirical evidence that tells us that the variations can not have accounted for more than a small fraction of the long term temperature trend – at least since mid-century – and that the latter was predominantly forced. Of the forcing modalities, GHGs appear to have been the very dominant component.

        As you mention, much of this has been discussed before, including the principle that small natural variations can’t trigger a very long term upward trend in temperature and ocean heat uptake in the absence of a climate sensitivity value that is astronomically high (Swanson and Tsonis have made essentially the same point about long term trends). In addition, there is no plausible mechanism I’m aware of that could link such variations to observational data in many sampled regions and at various times showing that the warming was accompanied by an increase in downwelling longwave radiation (DLR). This has not necessarily been true during all sub-intervals, and so natural variations have undoubtedly modified the trends, but it is an attribute that tells us something about the long term record. (Note, however, that to the extent that positive cloud feedbacks on GHG-mediated forcing mediate a reduction in cloud cover, the amplification will substitute some SW effects for LW effects due to the reduced cloud greenhouse warming and increased warming from a lower albedo).

        Many possibilities are theoretically possible, and some may be realized over intervals much longer than the recent centuries (e.g., orbital forcing), but the empirical data make it hard to attribute a major contribution to these during much of the previous century relative to forced trends. Here, I am referring to global trends, because regional trends will be more susceptible to some of the weather/cloud/circulation variables mentioned above.

      • Now the chief is including induced features.

        If a turbulent flow started to slosh around and open up some door (albedo) which then changed the influx of energy, then that is an induced feature which is only coincidentally related to a chaotic causality. That is Pekka’s tipping point or a stochastic resonance. The review paper by Gammaitoni et al. (1998) demonstrates this for both stochastic and chaotic sources. The chaotic version is just included for completeness because it could happen where the deterministic behavior is not removed by stochastic disturbances.

      • Chief Hydrologist

        Fred,

        There are two papers from Tsonis and colleagues – the 2007 paper – ‘A new dynamical mechanism for major climate shifts’ and the 2009 paper – ‘Has the climate recently shifted?’

        These are decadal shifts that are non-linear responses to small changes in control variables. So at points of climate shift, climate is exquisitely sensitive. Wally Broecker has compared CO2 emissions to poking a stick at a wild and unpredictable beast for this reason.

        DLR may be a symptom of warming – but the reason for warming is the energy imbalance at TOA. The trend in net power flux at TOA determines, with due regard to TSI variability, warming or cooling. The net flux is not constant but varies with natural variability.

  126. Good News

    Global mean temperature is now right on the 130 year trend line!

    http://bit.ly/zzv2DL

  127. There have clearly been alternating periods of long term warming and cooling in the vicinity of 500 years each between: Roman warming, Dark Ages cooling, Medieval Warming Period, Little Ice Age, 21st or 22nd century warming.

    We can still detect this long term trend in recorded data for the period since 1900 for example. To do so, we need to compensate for shorter periods of the order of 30 years of warming and cooling, as well as shorter solar cycles.

    All this can be done effectively using the concepts of derivatives and second derivatives in calculus. By calculating a 30 year trend every month on a moving basis we can firstly obtain a plot of the rate of change (gradient) and then we fit a logical “axis” trend to that plot. By taking this additional step we overcome “cherry picking” accusations because it makes very little difference whether the initial periods were 20, 30 or 40 years for example. The final trend line is similar.

    What this yellow line (at the foot of my Home page http://climate-change-theory.com ) indicates is that around 1915 the rate of increase was about 0.06 deg.C per decade, but it has now declined to about 0.05 deg.C per decade. There is a reasonable probability that this yellow line will slope down more, heading for zero maybe within 200 years or so. That would indicate a long term maximum in the trend of only about 0.7 to 1.0 deg.C above the current trend, though there could be shorter-term superimposed maxima and minima maybe up to a degree above the trend. However, 500 years of cooling should follow.
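The moving-trend idea described above — fit a trend over each trailing 30-year window and watch how the slope itself evolves — can be sketched as follows. This is a minimal illustration on synthetic data (a 0.006 °C/yr linear trend plus a 60-year oscillation, both invented for the example), not the commenter’s actual calculation:

```python
import numpy as np

def rolling_trend(years, temps, window=30):
    """OLS slope (deg C per year) over each trailing `window`-year span."""
    slopes = []
    for i in range(window, len(years) + 1):
        x = years[i - window:i]
        y = temps[i - window:i]
        slopes.append(np.polyfit(x, y, 1)[0])  # degree-1 fit; [0] is the slope
    return np.array(slopes)

# Synthetic series: steady 0.006 C/yr warming plus a 60-year oscillation.
years = np.arange(1900, 2012)
temps = 0.006 * (years - 1900) + 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)
slopes = rolling_trend(years, temps)
```

Plotting `slopes` against the window end-years gives the kind of rate-of-change curve the comment describes; changing the window from 30 to 20 or 40 years shifts the curve only modestly, which is the commenter’s point about cherry-picking.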

  128. Chris Ho-Stuart (and Pekka)

    You are tossing out meaningless figures but avoiding the real issue.

    Hansen’s forecast for Scenario A was clearly stated:

    Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth rate averages about 1.5% of current emissions, so that the net greenhouse forcing increases exponentially.

    So Hansen’s baseline is “1.5% of current emissions”.

    What has been the actual trend since Hansen made this prediction in 1988?

    The CDIAC data show CO2 emissions (as GtC/year) increasing from
    5.968 GtC in 1988 (when Hansen made the prediction), to
    8.749 GtC in 2008 (last year of CDIAC data)
    8.749 / 5.968 = 1.5202
    1.5202^(1/20) = 1.02015 = 2.01% per year, compounded

    This is higher than Hansen’s figure of 1.5%.

    But let’s check Hansen’s figure for “the 1970s and 1980s”.
    4.053 GtC in 1970, to
    5.755 GtC in 1987 (last year for which Hansen had data)
    5.755 / 4.053 = 1.4199
    1.4199^(1/17) = 1.02083 = 2.08% per year, compounded

    So Hansen understated the actual exponential growth rate “typical of the 1970s and 1980s”, and this growth rate was actually almost identical to the subsequent growth rate from 1988 to today.

    [Using the preliminary number for 2010 of 9.1 GtC gives almost exactly the same rate.]
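For what it’s worth, the end-point arithmetic above can be reproduced in a couple of lines (a later reply argues that end points alone are a poor way to estimate a trend; this sketch only reproduces the arithmetic for the 1970–1987 pair):

```python
def endpoint_growth_rate(first, last, years):
    """Compound annual growth rate implied by the two end points alone."""
    return (last / first) ** (1.0 / years) - 1.0

# CDIAC fossil-fuel carbon emissions (GtC) quoted above
rate_70_87 = endpoint_growth_rate(4.053, 5.755, 17)   # ~2.08% per year
```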

    So, Chris, I have to admit that the actual growth rate remained essentially the same instead of increasing slightly as I stated earlier.

    But it does not change the fact that
    – Hansen’s Scenario A is the closest to the actual development on CO2
    – The actual warming rate was around half that of Scenario A.

    Hope we can agree now.

    Max

    • Max, to be blunt… your maths sucks rocks, and if this comes out as patronizing, then too bad. You’ve been rude and insulting, but the cold truth of the matter is that we are not on a level playing field here. I know what I am doing. You don’t.

      Before you dismiss me too quickly…

      Does it give you the slightest pause that Steve McIntyre also (correctly) identifies scenario B as the best match to what actually occurred? Here, again, is the graph he produced to try to explain this. (Thanks, JCH)

      Plot of forcings for scenarios A, B, C vs observations

      Have you looked at the actual input data to the models yet? Did you note it was atmospheric concentrations, not emissions? Did you read the paper itself which explained WHY the scenarios were given as atmospheric concentrations? The scenarios chose rates of growth of atmospheric concentrations based on simple notions of the rates of growth for emissions. The prediction of the paper is for climate based on the atmosphere; not prediction of the atmosphere based on emissions. Did you note it is more than CO2 only?

      Even so; I’m going to look at the CO2 emissions data here, because although it’s not the correct approach for finding the appropriate scenario, you are doing it incorrectly and doing it right would tend to indicate scenario B as well.

      The correct approach (as McIntyre does) looks at all the forcings, using atmospheric composition data. So don’t quote this as a justification for using emissions; I’m only doing this as a maths tutorial.

      Note that the models were not run on the day of publication. The model used scenario data from 1985 onwards, and observation data up to 1984. It won’t make a lot of difference, but it should still be noted.

      Second… you DON’T use end points to estimate trends. You can use regression, but apply the regression to the natural logarithm of the data. The slope of the regression gives the power law; the gain per year is e^x, where x is the regression slope of the natural logarithms against year.

      Alternatively, you can just plot an exponential trend line against the data, using excel, and get it to display the equation.

      For example. data from 1970… 1987 inclusive.
      4053,4208,4376,4615,4623,4596,4864,5026,5087,5369,5316,5152,5113,5095,5283,5441,5609,5755

      Take natural logs
      8.307212627
      8.344742754
      8.383890344
      8.437067147
      8.438799124
      8.432941639
      8.489616424
      8.522379718
      8.534443545
      8.588396950
      8.578476420
      8.547140268
      8.539541595
      8.536014946
      8.572249397
      8.601718146
      8.632127730
      8.657824321

      Get the regression slope. 0.017255877
      Raise to the power e: 1.017405619

      Rate of increase is thus 1.74%

      You got a value that was significantly too high because you worked from end points, and not the whole dataset. Plot an exponential trend line against that data and you’ll see that using end points greatly overstates the growth rate.
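The regression-on-logs recipe just described can be sketched in Python (assuming numpy; the data are the 1970–1987 CDIAC values listed above):

```python
import numpy as np

# CDIAC carbon emissions 1970-1987 (million metric tons C), as listed above
emissions = np.array([4053, 4208, 4376, 4615, 4623, 4596, 4864, 5026, 5087,
                      5369, 5316, 5152, 5113, 5095, 5283, 5441, 5609, 5755])
years = np.arange(1970, 1988)

# Regress the natural log of emissions against year; a slope of b implies a
# growth factor of e**b per year.
slope, intercept = np.polyfit(years, np.log(emissions), 1)
growth_rate = np.exp(slope) - 1.0   # ~1.74% per year, vs ~2.08% from end points
```

Fitting the other sub-periods mentioned in this comment works the same way, given the corresponding series.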

      If you go to 1984, which is the more correct comparison (since that is what data was actually used), you’ll find the rate of increase is a little bit higher, at 1.80%.

      You make the same error when estimating 1988-2008. Do it correctly, and you should get 1.85%

      The proper comparison starts at 1984 (include one extra year of the last observation data point, because we are looking at a rate of increase from a starting point.) Do it correctly, and you should get 1.825% as the growth rate.

      The NEXT error is that this rate was not fixed. Plot the data, and you’ll see a big kick up in 2002. The rate of increase from 1984-2002 is only 1.36%. Significantly LESS than scenario A.

      There’s a big increase following that… The rate from 2002-2008 is 3.77%! BUT the cumulative emissions (which are what contribute to the forcing) remain dominated by the earlier part of the series. If the rate is not constant, you CANNOT just take an average over the series as a good guide.

      The actual forcing comes from total loading in the atmosphere, which is related to the CUMULATIVE emissions. You can check this in your spreadsheet by summing the emissions from 1985 to 2008 inclusive (or start at 1988 if you prefer, you get the same thing.)

      The total is: 162296

      Now take the last data value…. 8749 in 2008, and extrapolate it back using 1.5% rate of increase, and sum THAT.

      You find that the observed cumulative emissions are significantly LESS than you would get presuming a 1.5% rate of increase up to 8749. In fact, even if you start at only 8000 in 2008, extrapolate back with a 1.5% increase, and sum, you find the total to 1985 is STILL greater than what you get in reality.
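That cumulative-emissions check can be sketched numerically, taking the quoted total (162296, summed 1985–2008 inclusive) as given and back-extrapolating the 2008 value at 1.5% per year:

```python
def backcast_cumulative(final_value, rate, n_years):
    """Sum of an n_years series that ends at final_value after growing at `rate`."""
    return sum(final_value / (1.0 + rate) ** k for k in range(n_years))

observed_total = 162296.0                               # quoted sum, 1985-2008 inclusive
hyp_from_8749 = backcast_cumulative(8749.0, 0.015, 24)  # exceeds observed_total
hyp_from_8000 = backcast_cumulative(8000.0, 0.015, 24)  # still (barely) exceeds it
```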

      If you still think I am just “tossing out meaningless figures”, then screw you. You’ve been a total ass: insulting and incompetent. If you want to rethink but can’t bring yourself to admit that I might actually know what I am doing and am doing it honestly, then go and look up the discussions of Hansen’s scenarios at Climate Audit. It’s not a blog I’d actually recommend overall, but for basic simple things like this, Steve McIntyre can do a good analysis.

      Goodbye and good riddance.

      • “Have you looked at the actual input data to the models yet? Did you note it was atmospheric concentrations, not emissions?”

        That’s BS. The scenarios are emissions scenarios. The atmospheric concentrations are calculated on the (incorrect) assumption that atmospheric change is entirely due to the anthropogenic assumption.

        When the cooling really gets going and the atmospheric CO2 growth decreases, are you gonna claim that it’s scenario C then, in spite of rapidly growing emissions?

        We are in the BAU scenario – actual atmospheric CO2 variation is another matter.

      • …due to the anthropogenic emissions.

      • Edim,

        You are following the standard practice of so many on this site.

        First make up your own rules, then claim that they are violated. Why should anybody care?

      • Pekka,

        What are you trying to say? What do you dispute? Do you disagree that the scenarios are emissions scenarios? That would be mind-boggling, because that’s what they’re called.

      • Hansen did not present well defined emission scenarios, he presented concentration scenarios. He did indicate imprecisely, what kind of emission scenarios would lead to the concentration scenarios, but not precisely enough to allow for real reconstruction.

        Even on this level they are not fully in agreement, and it seems clear that Hansen had some wrong ideas about the persistence of CO2 in the atmosphere. In hindsight it’s, however, totally pointless to concentrate on that vague part of his papers, when his real work started from the concentration scenarios.

      • Pekka, I just want to endorse this. All the analysis refers to atmospheric concentrations. The scenarios make some stabs at atmospheric concentration growth rates based on emission growth rates; and it is the atmospheric values that are the model input and the focus of all calculation and discussion.

        Retrospective evaluation of the prediction needs to consider the scenarios as inputs to the model — which means the atmospheric levels.

        That’s what Steve McIntyre does also. Looks for the scenario that gives an input to the model that closest matches the subsequent reality.

      • Chris
        You know that “emissions numbers” or estimates are of such a low accuracy as to be meaningless. You also know that the rate of change in temperature predicted by Hansen has not occurred (or has been at the bottom of the estimated range).

        How have the models used to forecast other future conditions been doing when compared to observations? Don’t you have to agree pretty poorly? Certainly not well enough to base a worldwide change in energy practices on.

      • Wow, it is indeed mind-boggling – the emission scenarios are not really emission scenarios – they are concentration scenarios. How deceiving. Hansen can’t be wrong, whatever happens, it’s either emissions or concentrations, who cares. It will fit somehow. How convenient.

      • Rob, yes, the emissions numbers are only approximate, of course.

        The scenarios in the 1988 paper were calculated as atmospheric concentration numbers. These are known more accurately, and serve plenty well enough to test the skill of the model used back then, and contrast it with more modern models.

        Models still have lots of room for improvement; but they have indeed come a long way since 1988, with significantly better skill.

        You don’t need perfect knowledge of the future to take actions to reduce the risks and costs we do know about. That’s a separate discussion of course; but the idea of merely ignoring the climate associated costs of energy use until they are quantified in full detail is, in my view, ludicrous.

        My main interest continues to be the basic science of the subject, rather than the policy. I do find questions like the one you posed to be quite extraordinary.

      • PS for Rob. I should have added to the above… you say:

        You know that “emissions numbers” or estimates are of such a low accuracy as to be meaningless.

        In fact, that’s not remotely the case. Their accuracy is limited, but certainly not meaningless. Good grief.

      • The skeptics don’t seem to understand that atmospheric CO2 concentrations are less than emissions and will have a different growth law.

        Concentration is the convolution of the emissions forcing function with the CO2 impulse response adjustment time.

        Since this is a transient response, it is less steep a slope than the cumulative emissions. I have the complete calculation documented if you need to understand it in more detail.

        Bottomline is that this can lead to around a factor of two on absolute error, or half a power in slope, if you mistake one for the other.
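The convolution point can be illustrated with a toy calculation. Everything here is invented for illustration — a single-exponential impulse response is a drastic simplification of real carbon-cycle adjustment, which involves multiple timescales — but it shows why the convolved concentration grows less steeply than cumulative emissions:

```python
import numpy as np

n = 50
t = np.arange(n)
emissions = np.exp(0.02 * t)      # toy emissions series growing ~2% per year
impulse = np.exp(-t / 30.0)       # toy impulse response: fraction still airborne

# Concentration anomaly = convolution of emissions with the impulse response;
# cumulative emissions would be the same sum with no decay applied at all.
concentration = np.convolve(emissions, impulse)[:n]
cumulative = np.cumsum(emissions)
```

Because every past pulse is discounted by the decaying impulse response, `concentration` sits below `cumulative` after the first year, and the gap widens with time.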

      • Web

        Your comment @ 2:11 pm is nonsense.

      • It’s not nonsense as that is standard practice of climate scientists and physicists. Physicists often refer to the convolution of an impulse response as a Green’s function method.

        Bottomline, my explanation trumps your empty assertion. I suppose you operate completely on a gambler’s hunch?

      • Chris Ho-Stuart

        You just won’t stop this silliness, will you.

        1. Hansen made a forecast

        2. It was exaggerated by 2:1

        The actual temperature trend was essentially the same as Hansen’s “Scenario C”, which assumed total phase-out of GHG emissions from 1990 to 2000

        All the rationalizations in the world aren’t going to change that.

        Max

        PS Some advice: get your emotions and manners under control.

    • Max,

      It’s time to stop your nonsense. The point has been argued by so many, not only me and Chris, but you just don’t want to listen to anybody.

    • Manacker, I don’t buy it any longer when you write out all these numbers and tables. What happens is that we go through them and we find that you make up one of the numbers and it tilts the whole analysis. I got burned the first time I did a V&V of one of your tables of numbers. My suggestion is that you work the way a real scientist would convey information; in that case create a graph overlaying your numbers with the numbers that you are complaining about. I realize it is more work for you, but it is the best way to gain back your lost credibility.

      • Pekka (and Chris)

        Do you “listen” better than I do?

        This is a silly argument, Pekka – especially coming from a scientist.

        Chris and I were discussing the failed 1988 Hansen forecast.

        Like Hansen’s co-worker, Gavin Schmidt, tried to do on RealClimate, Chris attempted to show that the failed forecast was OK, because the actual temperature record was close to Hansen’s Scenario C (no CO2 emissions after 2000) or Scenario B (dramatically reduced CO2 emissions).

        The bases for all three “scenarios” were clearly spelled out by Hansen at the time, and the one that comes closest on CO2 emissions is “Scenario A”, with the highest projected rate of warming.

        This turned out to be exaggerated by a factor of 2.

        This was all I pointed out.

        But Chris was not interested in “listening” (as you put it), but rather in trying to rationalize away a grossly exaggerated forecast that turned sour.

        Let’s not waste any more time on this topic.

        It is clear to anyone who looks at the Hansen 1988 paper and compares it with the actual record.

        Max

      • Web

        Whether you “buy it” when I cite “numbers” isn’t going to cause me a lot of sleep loss.

        Max

      • Max, I don’t mind if you give honest disagreements; what makes me angry is the persistent lies about my own input directly visible in the thread. My “emotions” by now are simply cold contempt. You’re a flat out nut; discarding what could be valid criticisms in favour of a naive failure to understand the models or forecasts you purport to critique.

        If you applied the same principles to Steve McIntyre as you do to me, you would ALSO be saying that he’s trying to protect Hansen. Which, of course, is just absurd.

        Max says:

        Like Hansen’s co-worker, Gavin Schmidt, tried to do on RealClimate, Chris attempted to show that the failed forecast was OK, because the actual temperature record was close to Hansen’s Scenario C (no CO2 emisions after 2000) or Scenario B (dramatically reduced CO2 emissions).

        I call you out on that. It’s a flat out lie. I’ve not commented on the temperature record AT ALL in this discussion of scenarios.

        All I have done is comment on the forcings, in a futile effort to explain where you are going wrong in identifying the appropriate INPUT.

        What Gavin has done as far as scenarios are concerned is the same as what Steve McIntyre has done; which is to contrast the forcings of the three scenarios, and identify the one closest to actuality.

        AFTER you get this elementary first step right, THEN you go on and look at the forecast.

        Another elementary error people are making in the thread is to consider which scenario produced the temperature closest to actuality, and effectively treat that as a prediction of the forcing. That’s a new low of stupid in the thread.

        What is so silly is that a genuine consideration of the forecast would show the 1988 model as giving an over-estimate of warming. As Gavin identified as well. If you based ongoing expectations on existing and more sophisticated models, then you expect that the 1988 model was a bit too sensitive.

        On the other hand, if you prefer to consider the hypothesis of Tsonis et al, then existing models ALSO continue to be wrong in the variations above and below the non-periodic forcing that is heating up the planet. And — ironically — their hypothesis is that the recent slowdown in warming is from improperly modeled quasi-periodic effect; not an underestimate of sensitivity. Indeed, they suggest that the sensitivity of existing models is too low, and it should be higher. Like Hansen’s 1988 model, ironically. The lack in the models, in this case, would be missing the mid-range variation; rather than inflating the non-periodic AGW trend.

      • Chris Ho-Stuart

        “Cold contempt?”

        Ouch!

        You criticized my quick analysis of Hansen’s 1988 forecast on CO2 (using first and last data points and calculating an exponential rate between the two).

        I will agree that this only gives an approximation.

        I have plotted the entire 1970-2008 CO2 emissions record based on CDIAC data (as Girma also did) and have drawn the exponential rate of increase for the 1970-1987 portion (= Hansen’s forecast basis) and for the ensuing 1988-2008 portion.

        On this basis, the early segment showed an exponential rate of 1.73%/year (Hansen had mentioned a rate of 1.5% per year), and the later period showed an exponential rate of increase of 1.84% per year, or slightly higher than Hansen’s baseline period or his forecast basis.

        If you want me to run you through this calculation, I’ll be more than happy to do so.

        I will admit that it is only for CO2, the principal human GHG and not for all GHGs, but I do not believe that this will change the overall conclusion that Scenario A represented the closest fit to actual GHG emissions, not some other Scenario.

        And that, quite simply, was my point from the start.

        But don’t get all emotional about it. It’s truly not worth it.

        Max

    • manacker,
      Believers throughout the ages have defended their prophets long after actual events showed them to be wrong.
      Chris and gang are simply keeping that ancient tradition alive.
      AGW has the same relationship to climate science as eugenics had to evolutionary science.
      Believers in each were a wee bit touchy as their respective faith structures melted away.
      The feigned outrage by Chris Ho is particularly cute, by the way.
      Keep up the good work.

    • You know this is a meaningless argument. As shown recently on RC, the actual record is below Scenario C by quite a bit. We will see if this continues, but the current evidence is that Hansen was wrong even giving him the benefit of the doubt on emissions vs. concentrations. I note that RC’s AR4 forecast vs. actual data is contradicted at the Blackboard. The rationale for the choice of zero level is quite clearly explained there. It is not explained at RC. In any case, Hansen has a record of exaggeration and looking for the worst and finding the worst. His recent redo of his sea level predictions is amusing for its obvious grasping at the straw of “tipping” points.

      And this is the problem in climate science as Muller points out. You cannot trust people to honestly report all the data clearly and honestly.

      • David Young

        You bring up a very good point.

        Here is Hansen’s description of “Scenario C” in his original 1988 paper:

        Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

        This obviously did not happen.

        So if the actual temperature was close to “Scenario C” (which it was), this shows that all the CO2 that has been emitted since 2000, plus half of what was emitted between 1990 and 2000, has had no impact on global temperature at all.

        Not a very strong argument for AGW, I’m afraid.

        Max

      • It’s temporarily below Scenario C. WhoopTdoo

      • JCH

        How “temporary” is “temporary”?

        Sounds to me like a “just wait’ll next year!” rationalization.

        Max

  129. Max –

    I think there is a lot more mileage in a) Hansen’s Congressional testimony, because he specifies a BAU and what will happen if there are no emissions curbs [subtly but profoundly different to his ’88 paper], and b) the 1990 FAR, because it made clear its prediction of 0.3C per decade with, again, a specified BAU.

    Excuses for Hansen [see SkS] are that he’s only a poor scientist so what would he know about emissions? The IPCC had ‘2500’ of the most relevant people in the world (allegedly) and so their BAU deserves to be treated as such. Even the limits of their uncertainty were put at 0.2C per decade. As we know, even with Pinatubo, they are struggling after 22 years. And it makes their 0.5C (the other end of the uncertainty limit) look truly ridiculous.

    • I don’t understand your certainty that BaU was the real world scenario we followed. It seems to fly in the face of the facts: The Montreal Protocol, the Kyoto Protocol, countless local and national voluntary and incentivised energy efficiency schemes. The 90s was the first decade in the Mauna Loa record in which average CO2 growth rate did not exceed the previous decade. Pinatubo and the collapse of the Soviet Union undoubtedly contributed but there was a flattening of emissions growth in other parts of the world too.

      If literally nothing has changed in this regard since the late 80s, what is Jeremy Clarkson whining about every week?

      • Paul

        There was a major economic downturn and technology advancement in hybrid automotive technology that impacted human emissions. What there was not is any dramatic reductions in emissions by any country that were due to a change in government policies.

        Policies were implemented in the EU that did virtually nothing to reduce emissions. The trend is lower in the US due to new technologies becoming more popular. Developing countries did not raise their emissions faster due to a lack of funds. Look at the rate of growth in China and apply that to the balance of the developing world.

      • I don’t know about Hansen but the IPCC weren’t insisting that changes can only happen due to government policy. Pursuit of large energy efficiencies was explicitly an element that differentiated Scenarios B-D from BaU.

      • Paul

        The world generally followed a business as usual approach to CO2 emissions. If you can’t point to a single country that lowered its emissions levels due to some policy having been implemented—-doesn’t that mean by defination that that BAU approach was generally followed?

      • Rob,

        It’s easy to point out active energy policies in numerous countries that have lowered energy consumption and emissions. Specific climate policies may have had less influence, but the policies are related. Reductions in oil consumption have often been the primary goal, but there have also been policies and other opposition that have influenced construction and use of coal-fired power plants.

        One must be really blind to avoid noticing these policies.

      • Pekka

        Perhaps I am blind or poorly informed. Can you point out the policies that you reference that would be different than business as usual? I can think of some policies that had very minor impacts, but nothing significant.

      • Pekka

        Are you referencing the German plan to move away from nuclear power to cleaner coal-fired power plants? Perhaps you mean the Irish fuel tax that raised revenue and didn’t lower consumption.

      • They are all around. None is very strong by itself, but the combined effect is significant.

        You can pick essentially any industrialized country and find such policies, and that includes U.S. as well.

      • Pekka

        I respectfully disagree that there has been much implemented beyond what would be considered business as usual. Things like the US federal government raising the required mpg on US cars were being done regardless of whether we were concerned about CO2.

        In spite of my disagreement with Webby on the linking of AGW and fossil fuel resource depletion, the US policies seem generally geared towards lowering fossil fuel use for economic reasons not climate related.

      • Rob;
        “US policies seem generally geared towards lowering fossil fuel use for economic reasons not climate related.”

        Not to forget the Crackpot-in-Chief’s determination to bankrupt coal producers and power plants, with the sanguine observation that of course this will “skyrocket electricity prices”. Never underestimate the depth of destructive delusion of the Greens.

      • What I had in mind concerning U.S. is not policies of the Federal Government but by some States.

        Although I maintain that the combined effect of all the policies of different countries is significant, I don’t claim or think that those would have had any dramatic influence on the energy use or emissions. Neither do I think that any effective and cost efficient actions are in sight.

        I have been following actively (professionally until retirement three years ago) energy issues since 1980. Energy research has received special funding all that time resulting in pitifully limited results.

        The world has changed in many ways very much, but energy technology is only a little more developed than 30 years ago.

      • Pekka,
        I am very familiar with the policies enacted by the state of California, and they have not lowered emissions so far at all.

        My original point to Paul was that I could not think of policies that were effective in lowering CO2 emissions over the last few years that would not have been done as part of business as usual. Can you name a country other than the US (which I am most familiar with) where CO2 was lowered due to specific policies being implemented?

        LOL—I am not as blind as you first thought, am I?

      • Rob,

        As I said, there’s no requirement that changes must come through government policy. If new technology increases energy efficiency, that’s a change from Business as Usual. If there is a trend for organisations unilaterally deciding to amend their practices to increase energy efficiency, that’s a change from Business as Usual.

        I think CO2 has ended up being probably pretty close to a late-80s idea of Business as Usual, though I don’t think the path it took to get here has been particularly BaU (collapse of Soviet Union, widespread small emissions reductions in the 1990s, massive industrial growth in the Far East, emergence of China as the world’s largest CO2 emitter).

        The main divergences from Business as Usual are actually in other greenhouse gases, rather than CO2.

      • Well, I would argue the EPA’s insistence on slower speed limits in major urban counties, and many contingent counties, is not BAU, but my wife is currently doing 75 mph (and being passed) in a 60.

        However, if it was a 75, she’d be doin’ 85 (and still being passed).

  130. Max & Chris (On Hansen et al., 1988)

    What is the observed exponential carbon emission growth rate that was forecasted to be 1.5% in Hansen et al., 1988?

    Scenario A assumes that growth rates of trace gas emissions [NOT CONCENTRATION!] typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially.

    The observed carbon emission curve is shown in the following graph.

    http://bit.ly/mBXivS

    From the above data, the approximate annual global carbon dioxide emission in Gt from 1970 to 2007 = 3.67*4.3*e^(0.0164*(year-1970))

    As a result, the annual exponential growth rate is 1.64%, a bit higher than the 1.5% assumed by Hansen et al, 1988.

    CONCLUSION:

    The observed exponential carbon emission growth rate is about 1.64%, which was forecast to be 1.5% in Hansen et al., 1988. As a result, among the three scenarios, scenario A is closest to reality.

    Here is the comparison of the three forecasted scenarios with observation (GREEN).

    http://bit.ly/iyscaK

    • What is the observed exponential carbon emission growth rate that was forecasted to be 1.5% in Hansen et al., 1988?

      Hansen did not forecast an emission growth rate. He considered three different scenarios. At the time he did indicate scenario B was the most likely; but this was not a forecast. He gave THREE forecasts, corresponding to three different scenarios.

      There was no one observed emission growth rate. The emissions observed show roughly two rates of growth; about 1.35% up to 2002, and then a big kick up to well over 3.7% thereafter. The net effect of that on the model input was closest to scenario B; the initial longer slower part of the emissions dominates the total effect. In fact, scenario B is just a bit too high. But it’s certainly the closest.

      I’ve explained the calculation above. It’s not ambiguous. It’s not special pleading. It’s the only correct answer, even though it’s perfectly clear you’ll never use it.

      • Chris

        Here is the data for carbon emission in million metric tons from CDIAC

        http://1.usa.gov/oFD7Or

        1975=>4596
        1985=>5441
        1995=>6422
        2005=>8086

        Multiplying the above values by 3.67*10^(-3) gives the CO2 emission in Gt

        1975=>16.87
        1985=>19.97
        1995=>23.57
        2005=>29.68

        The exponential fit for the emission data for the period 1970 to 2007 is as follows:

        http://bit.ly/mBXivS

        Which gives the equation:

        CO2 emission in Gt from 1970 to 2007 = 15.78*e^(0.0164*(Year-1970))

        This equation shows the CO2 emission growth has been 1.64%.

        Here is the CO2 emission estimate from the above equation:

        For 1975=>15.78*e^(0.0164*(Year-1970)) = 15.78*e^(0.0164*(1975-1970)) = 15.78*e^(0.0164*5) = 15.78*e^(0.082) = 15.78*1.0855 = 17.13

        1975=>17.13 Gt
        1985=>20.18 Gt
        1995=>23.78 Gt
        2005=>28.01 Gt

        And for 2010, the calculated value is 30.4 Gt, and this value is nearly equal to the actual emission of 30.6 Gt (http://bit.ly/m7SeXb).

        As a result, the exponential growth rate of CO2 emission since 1970 is 1.64%.

        Scenario A (1.5% emission growth) of Hansen is closest to reality.
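For anyone wanting to reproduce the fit, here is a minimal sketch in Python. It is not the commenter's spreadsheet: it uses only the four decadal CDIAC values quoted above, so the fitted rate comes out near 1.9%/yr rather than the 1.64%/yr obtained from the full 1970–2007 annual series.

```python
import math

# Log-linear least-squares fit of the four decadal CDIAC values quoted above.
# Using only these four points gives ~1.9%/yr; the 1.64%/yr in the comment
# comes from the full 1970-2007 annual series.
years = [1975, 1985, 1995, 2005]
emissions_gt = [16.87, 19.97, 23.57, 29.68]

# Fit ln(E) = ln(E0) + r * (year - 1970)
x = [y - 1970 for y in years]
ly = [math.log(e) for e in emissions_gt]
mx, my = sum(x) / len(x), sum(ly) / len(ly)
r = sum((xi - mx) * (yi - my) for xi, yi in zip(x, ly)) / sum((xi - mx) ** 2 for xi in x)
e0 = math.exp(my - r * mx)

print(f"growth rate ~ {r:.4f}/yr, E0 ~ {e0:.2f} Gt")
```

The spread between this ~1.9%/yr and the quoted 1.64%/yr shows how sensitive such a fit is to which points are included, which is part of what the disagreement above is about.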

      • Girma

        Thanks for your analysis on CO2 emissions.

        I was beginning to get a headache from the jumble of numbers that Chris was throwing out, but you’ve cleared it up very well.

        Max

      • Girma is doing the fit to emissions correctly. Now he can try doing it for the atmospheric concentrations used as inputs to the model for the three scenarios, and then (at last) you’ll have a sensible basis for evaluating the skill of the model; let the chips fall where they may.

      • It is getting to the point that even if Girma said something sensible, no one would care. Girma was the one that didn’t understand the simple concept of percentage increases a few months ago.
        I recall this thread as comedy gold:
        http://judithcurry.com/2011/10/25/best-of-the-best-critiques/#comment-127907
        some choice comments:

        “Sadly, Girma is a fully qualified engineer with a publication and an advanced degree.

        Which boggles the heck out of me.

        Either he’s pulling our legs for some Socratic purpose, or there is something very very wrong going on.”

        and

        “What are you trying to prove here, Girma? We all know how to lie with statistics. You’re just getting started, you’re nowhere near the level of Don Easterbrook or Harry Huffman. Your time would be more usefully spent studying their methods.”

        and my contribution:

        “Engineering school takes the sensible approach and they flunk out half the class, realizing that some people will never be competent at math. How you got through, I haven’t a clue.”

        BTW, to do the CO2 accounting correctly, see
        http://theoilconundrum.blogspot.com/2011/10/temperature-induced-co2-release-adds-to.html
        and my tome.

    • Girma – what you need to find is the Trace Gas Information Analysis Center.

      Lol.

    • So how on Earth do you think Steve McIntyre picked scenario B? He should get you and Max to help explain maths to him.

      • Good grief, C Ho-S, if that were to happen the world could end.

      • This is really stupid. McIntyre picked scenario B to comment on the GISS model (which uses actual concentrations, not emissions), which was VERY much too sensitive, as shown by Schmidt’s paper on the new GISS model (sensitivity 2.6K). Hansen’s 1988 model had 4.2K. However you slice it, Hansen was pretty badly wrong, no matter how much obfuscation and special pleading there is. Hansen was also wrong about the portion of emissions remaining in the atmosphere and about sea level. Stop being a defense lawyer!

      • David, I have not commented on the forecasts, except briefly to acknowledge that the model runs a bit hot. I’m not a lawyer for or against the forecast, but more of an expert witness concerning the backdrop.

        Yes, the Hansen model runs hot when you look at the forecast itself. Putting that down to sensitivity is not particularly plausible. Scaling back the sensitivity doesn’t actually make a lot of difference on the time scales being considered here.

    • Chris

      Hansen et al, 1988 explicitly states that they were referring to greenhouse “emission”, not “concentration”, in their paper, as shown here: http://bit.ly/zEUrvx

      • Chris Ho-Stuart

        It’s like you just stop reading the paper when you hit that one word. Or that you don’t understand anything in the paper, and just stop at that point.

        The heading of the section you quote is “forcings”. All the calculations and rates described in that section are expressed in ppmv, which is the measure of atmospheric composition. (Read the rest of the section you’ve highlighted.) All the graphs and figures relate to forcings, not emissions.

        The emissions were used only to get a ball park for the rate of growth of the atmospheric composition increment. And they did get in the right ball park. You’ve apparently got the paper; read it, for heaven’s sake!

        They worked out a scenario for changes in the atmosphere for a number of trace gases (not only CO2, and the others do matter, as the discussion in the paper emphasizes).

        Emissions are not used anywhere in the models or methods for generating temperature. The only way they were used was to pick out a growth rate in atmospheric composition. Even the paragraph you’ve circled in your figure doesn’t do anything except show exactly that.

        That’s why the serious evaluation of the model uses the atmospheric composition; or (as Steve McIntyre’s graph does and as the journal figures do) looks at the forcings for the three scenarios and the forcing in reality. The paper gives some basic formulae for getting the forcings from the concentrations, so to pick the right scenario it’s okay to compare atmospheric composition with subsequent observations, and slightly better to compare the forcings.

        THAT is what tells you the scenario to use with the model. Predicting atmospheric composition from emissions is not part of the model; that’s all just setting up the scenarios. Emissions were only a rough ball-park method for defining scenarios in terms of atmospheric composition, and for that they work fine. How the scenarios were generated is not what matters. The prediction in the paper, the whole reason for the paper, was the generation of temperatures from those atmospheric composition scenarios.

  131. Chris
    I greatly doubt that the emissions numbers can be said to be much better than +/- 25%, which is pretty meaningless in my view. Concentration numbers are pretty accurate, but we really do not know what percentage of the total is from human emissions. Non-human emissions and absorption rates vary with conditions that are not fully understood.

    Chris writes: “You don’t need perfect knowledge of the future to take actions to reduce the risks and costs we do know about.”
    I don’t think any reasonable person expects models to be perfect, but reasonable people do expect models to be evaluated based on their ability to accurately forecast within a stated margin of error on specific criteria. The long term “climate” models simply do not meet this test very well.

    Imo, it comes down to the specific actions recommended for implementation Chris. So could you please describe what specific actions that you think make sense to be implemented by whom and why you think they make sense?

    • No, I can’t do that; because I am not an economist or a policy maker. What I know about is the subject of scientific investigation of climate — which is also the topic of this blog post.

      • Chris

        It is refreshing that you, as a self-described specialist in the “scientific investigation of climate”, do not want to make any specific suggestions for actionable proposals which could lead to a reduction in global warming, and would rather leave that to “economists” (who presumably, however, have no notions about the “scientific investigation of climate”)…

        As a matter of actual fact, there have been no actionable proposals made by either group to date which would have a perceptible impact on our future climate.

        All we have had are some highly dubious broad-brush global impact studies (e.g. Stern, etc.) and hollow pledges by politicians to “reduce CO2 emissions to X% of the levels of year Y by year Z” or (even more ludicrous) to “hold global warming to below 2 degC”.

        As a specialist in the “scientific investigation of climate”, why do you think there have been no actionable proposals?

        Is it possibly because we are unable to change our planet’s climate no matter how much money we throw at it, as some have suggested?

        I know this is not your specialized field, but think about it.

        Max

        Is it possibly because we are unable to change our planet’s climate no matter how much money we throw at it, as some have suggested?

        No, that is not possible. It is scientific nonsense.

        We certainly can and do cause highly significant changes to our environment in many ways. Climate is just one of them. Denial of this basic fact of life is, I think, a deliberate distortion of science by people with a vested interest in avoiding rational policy decisions.

        The rational policy approach for all political or economic or social perspectives is to take into account what we have learned and are learning about the various impacts arising from human activities. Impacts on global climate, local climate, natural environment, social circumstances, health, and so on.

        My personal main interest continues to be explaining and studying the science of causes and effects in climate; which is a matter of straight physical and empirical science.

      • Chris, Those are stirring words that you wrote. It is really a learning experience, and gleaning information from the environment is all that we can depend on in the long term. I am constantly amazed by what we can dig out from the stores of knowledge and data that we have collected in recent years.

        What I find funny is that all the impediments and naysayers that comment here may not be as much of an impediment as they think. Within the past week or two, they have let slip a great piece of scientific software, Eureqa, and then info on a very comprehensive MET archive. These helped me quite a bit with my own work.

        What the agenda-driven skeptics don’t realize is that the points that they most try to deny are actually the soft-underbelly, primed and ready for some fresh analysis. They constantly give away the store. So hang in there, and you may just pick up some astonishing insight from an unlikely source.

      • Chris Ho-Stuart

        Not to belabor a point, but there is a MAJOR difference between claiming that mankind has changed or is changing our planet’s climate locally and regionally (and hence, by accumulation, globally) over historical times and claiming that we are able to change our planet’s global climate at will by some sort of concerted action.

        The first statement is very likely true while the second is far from substantiated.

        Until you (or anyone else) can show an actionable proposal, which can be demonstrated to directly result in a substantive change in future global average temperature, I will continue to be of the opinion that we are unable to change our planet’s future climate willfully, no matter how much money we throw at it.

        Max

        PS Chris, there have been a few such proposals (very few, as a matter of fact). They ALL have two things in common: they would be extremely expensive to implement and they would have no perceptible impact on our climate.

        But, hey, if you are aware of any such proposals, let’s hear them.

  132. GOOD NEWS

    Global mean temperature back to its value of the 1940s!

    http://bit.ly/zSJdua

    • In the next couple of years, the global mean temperature will go below its 1940s values, and that hopefully will bring the deserved burial of the AGW theory.

      • Girma,
        AGW is not a theory in the scientific sense of the word. It is a social movement that gloms onto climate science but is driven by the political prejudices and bigotries of its believers and the money available to those who can flog the dead horse forward.
        AGW is a manifestation of something lacking in the believers. It is not really about climate science.

    • Thanks for the truly good news, Girma.

      I can sleep much better now.

      Max

    • If anyone wanted proof that you are just here to try and sow confusion and disrupt serious discussion, we now have it with that graph and comment.

  133. Chris

    You don’t need perfect knowledge of the future to take actions to reduce the risks and costs we do know about.

    You don’t waste money on a non-existent problem and create misery by raising the cost of energy while there are billions who live in poverty right now. A climate problem only exists when there is change in the climate (global mean temperature) pattern. As I have posted a million times (http://bit.ly/Aei4Nd), there is no change in the global mean temperature pattern, and the global warming rate is only 0.06 deg C per decade, not 0.16 deg C as claimed by the IPCC. If people had assumed a continued global warming rate of about 0.15 deg C per decade in the 1940s (as the AGW promoters do now), they would have been found to be wrong by now, as shown in the following graph.

    http://bit.ly/w5P3c9

  134. Rob Starkey says:

    Web

    “Your comment @ 2:11 pm is nonsense.”

    I agree with Rob here.

    Web says:

    “The skeptics don’t seem to understand that atmospheric co2 concentrations are less than emissions and will have a different growth law.”

    Nice projection. That’s the point which warmists ignore. Concentrations have a different growth law and are less than emissions. I would add that how much less is tightly correlated with global temperatures (on decadal and annual timescales). Temperatures lead, concentration growth lags. That’s very remarkable. On the other hand, there’s no evidence of correlation between emissions and concentrations or their growth on annual and decadal timescales.

    “Concentration is the convolution of the emissions forcing function with the CO2 impulse response adjustment time.”

    Incomplete. The concentrations growth is dependent on the global temperature and any concentration is the sum of all the growths plus the initial value. Where’s the global temperature in your convolution? Your equations wouldn’t catch any temperature-dependence. What are they good for, when there’s good evidence for the temperature-dependence of the concentration growth?
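The "convolution of the emissions with an impulse response" idea quoted above can be sketched numerically. This is an illustration only: the exponential response and its 50-year adjustment time are my assumptions for the sketch, not values from either commenter.

```python
import math

# Illustrative sketch: the airborne amount as the convolution of an
# emissions series with an impulse response.  The exponential response
# and the 50-year adjustment time are assumptions, not fitted values.
TAU = 50.0                                        # assumed adjustment time, years

years = range(100)
emissions = [10.0 * 1.0164 ** t for t in years]   # ~1.64%/yr growth (arbitrary units)

def response(age):
    """Assumed fraction of an emission pulse still airborne after `age` years."""
    return math.exp(-age / TAU)

# Discrete convolution: each year sums the surviving part of all past pulses
airborne = [sum(emissions[k] * response(t - k) for k in range(t + 1)) for t in years]
```

One point from the exchange falls out directly: the airborne amount stays below cumulative emissions and follows a different growth law than the emissions themselves, whatever one thinks of the temperature dependence.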

  135. “Incomplete. The concentrations growth is dependent on the global temperature and any concentration is the sum of all the growths plus the initial value. Where’s the global temperature in your convolution? Your equations wouldn’t catch any temperature-dependence. What are they good for, when there’s good evidence for the temperature-dependence of the concentration growth?”

    Edim: The obvious issue is that the temperature of the ocean alone could not raise the concentration of atmospheric CO2 from 290 to 390 PPM. If one uses an activation energy of around 0.3 eV, then according to the standard rate law, the temperatures would have to rise by closer to 10C to raise the partial pressure by a factor of 1.4! Try it yourself using the Arrhenius rate law: ~exp(-E/kT)

    But we obviously don’t have an increase of 10C, otherwise you skeptics would be frying in your boots.

    That’s not to say that there isn’t a slight dependence. I calculate it as more like 10PPM for a 1C temperature change, and I used a conservative 20PPM when I did the full convolution on the blog post last year:
    http://theoilconundrum.blogspot.com/2011/10/temperature-induced-co2-release-adds-to.html
    The rest is due to anthropogenic CO2.

    Here is where you might want to eat your words, Edim. Say that the temperature does start to increase by more than a few degrees in a few decades. Then we will really get that nasty CO2 outgassing feedback that you mistakenly think is happening now.
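The rate-law numbers in this comment can be checked with a few lines of arithmetic. A sketch, taking the 0.3 eV activation energy from the comment and assuming a mean surface temperature of 288 K (my assumption): it reproduces the ~10 ppm per 1C figure, the roughly 8–10 C needed for a factor of 1.4, and the 16C-per-doubling rule cited in a later reply.

```python
import math

# Back-of-envelope check of the rate-law numbers in the comment.
# T = 288 K is an assumed mean surface temperature; the 0.3 eV
# activation energy is the value used in the comment.
K_B = 8.617e-5        # Boltzmann constant, eV/K
E = 0.3               # activation energy from the comment, eV
T = 288.0             # assumed mean surface temperature, K

def pressure_ratio(dT):
    """Ratio of Arrhenius factors exp(-E/kT) after a warming of dT kelvin."""
    return math.exp(E / K_B * (1.0 / T - 1.0 / (T + dT)))

per_degree = pressure_ratio(1.0)            # ~1.04, i.e. about 4% per degree C
ppm_per_degree = 290.0 * (per_degree - 1)   # ~12 ppm on a 290 ppm base

# Small-dT inversion: warming needed for a given pressure ratio
dT_for_1p4 = math.log(1.4) * K_B * T * T / E   # ~8 C for a factor of 1.4
dT_double = math.log(2.0) * K_B * T * T / E    # ~16 C per doubling
```

With these inputs the arithmetic is internally consistent; the argument between the commenters is over whether the 0.3 eV figure is the right one for seawater, not over the algebra.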

    • Then we will really get that nasty CO2 outgassing feedback that you mistakenly think is happening now.

      No way. “Outgassing” would require equilibrium between ocean and atmosphere to be the reverse of reality.

      Carbon models all indicate that the ocean will continue to be a net sink for CO2 for a long long time into the future on the scale of centuries or even millennia.

    • (Sorry for double post, but my original got misplaced. Hope this is right now)

      @webbie

      Long time since I spent a few years studying reaction kinetics. But in the immortal words of Jim Steinman, ‘it’s all coming back to me now’.

      And unless my memory is very faulty, Arrhenius’s rate equation refers to the *rate* of a reaction varying with temperature. It doesn’t say anything about what the reaction does or the starting and end points.

      So I am hard pushed to understand how you have worked from the reaction *rate* a la Arrhenius to the *concentrations* of gases before and after. Similarly you appear to have pulled the term ‘activation energy = 0.3 eV’ out of the air. What relevance is it and where does your value come from?

      Would be delighted to see your worked example to fill this lacuna in my knowledge.

      But (as ever) I’m not holding my breath.

      • A couple of further reflections.

        1. To clarify about what Arrhenius tells us. It tells us a lot about how fast our reaction is occurring, but nothing whatsoever about our start and end points. It is a general purpose equation for the rate of any chemical reaction that has an activation barrier. But you seem to be able to extend it to tell us about end points. How?

        2. Even if dissolution of carbon dioxide in water were to be adequately described by Arrhenius, where do you get an activation energy of anything other than zero from? Usually activation energy is needed to (for example) create some higher-energy reaction intermediate that then decays to the end products. Analogous to pushing a ball up a small slope so that it can eventually fall down the deep cliff on the other side? The reaction speed is determined by how often you can push the ball up the slope, not by the depth of the cliff.

        But I’m struggling to understand where such an energy would be needed here. Surface tension?

        Please clarify.

      • Don’t act so clueless Latimer. Everyone understands the partial pressures with temperature. Your chemistry is so rusty it has completely flaked off. I spent years in semiconductor processing using ultra-high vacuum equipment and the concept of outgassing is crucial to every aspect of growth. Maybe you are deliberately being clueless?

      • Looking forward to seeing your worked example so that I can learn from your superior wisdom.

        Because ‘everybody understands’, followed by an appeal to your own authority isn’t really very helpful for those of us without your insight.

        You brought up Arrhenius’s equation. Please show how it applies in this case.

      • Look up partial pressure, Henry’s law, etc. on Wikipedia. Do your own homework.

        The easy one to remember is a doubling of CO2 partial pressure for every 16C increase in temperature of seawater. Look up Takahashi 2002 for verification of this number against measurements. That is an activation energy equivalent. This research also shows the biological effects.

      • @webbie
        Thank you for your reply. Given, as ever, in the typical WHT style.

        I’m tolerably familiar with partial pressures, solubility of gases and Henry’s Law. Indeed I even vaguely remember answering a question about them in my Chemistry Honours Physical Chemistry paper. Since I passed with a good grade, I’ll assume that I got them about right.

        But my question is not about those. It is about your application of Arrhenius’s law (which refers to the rates of reactions) to a question about the quantities concerned. They are, to my memory, different things.

        I tried to find the paper you referred to, but many guys called Takahashi published in 2002 and it is not obvious which one has the nugget of information you refer me to. Can you give a more detailed description? A title maybe, or even the name of the journal?

        And even if he does say what you claim about activation energy, how do I use that information to say anything new about CO2 concentrations via Arrhenius? Or is the introduction of Arrhenius to this situation just a red herring like in Agatha Christie?

        Thanks again.

      • Latimer Adler,

        Arrhenius’ law applies to reaction rates, but through that also to the equilibrium concentrations, which are determined by the relative rates of the reaction and its inverse. At low temperatures the equilibrium is strongly on the side of the lower energy state, but this effect is not true to the same extent at high temperatures, because the ratio of the exponentials in the Arrhenius law moves closer to unity with increasing temperature.

      • What Pekka said. You will find that non-chemists like myself and my colleagues will use the term Arrhenius to refer to any interesting temperature effect involving gases. We could also use Clausius-Clapeyron, Antoine, van’t Hoff, etc., but we just settle on Arrhenius, because it’s just a stupid exponential, wouldn’t you know.

    • Web said, “That’s not to say that there isn’t a slight dependence. I calculate it as more like 10PPM for a 1C temperature change, and I used a conservative 20PPM when I did the full convolution on the blog post last year:”

      With the Antarctic concentration lower than the equatorial, doesn’t that give you some idea of the rate of ocean uptake? If the tropics temperature reduced by 1C and the Antarctic remained the same or reduced somewhat less, ocean uptake of CO2 would increase by ~10 PPM initially for some unit of time until it reached some equilibrium.

      • I think that is what Takahashi did.

        What is also needed perhaps is a dispersional form of the activation energy. So we smear the effective temperature over a range, Arctic to equator to Antarctica, and then look at the effective partial pressure delta rates of change. This falls under the “stretched” exponential category and probably won’t be that significant, but I will look into it.

  136. Chris Ho-Stuart

    This is probably a point where you and I agree: namely that human CO2 emissions have a correlation with atmospheric CO2 levels on a multi-annual basis, even though the relation is not clear on a year-to-year basis, where between 15% and 90% of the emitted CO2 “remains” in the atmosphere, with this averaging out to around 50% over the longer term.

    I do believe the data show that there is a temperature component, however. The percentage “remaining” in the atmosphere seems to correlate well with the annual change in global average temperature compared to the previous year, with years of relative warming showing higher %-age of the emitted CO2 “remaining” in the atmosphere.

    “Carbon models” may “indicate that the ocean will be a net sink for CO2” (as you write), but, inasmuch as the natural carbon cycle is so much greater than the human emissions, we are talking about a small difference between large numbers. For example, a 10% increase in plant photosynthesis alone would cancel out the total human emissions. Measurements of ocean alkalinity are so rudimentary and dicey that we cannot say with any certainty where the “missing” CO2 is going quantitatively.

    Colder water is constantly absorbing CO2 while warming water is again releasing it in a natural cycle, but there is no doubt that some of the “missing CO2” is most likely being absorbed by the ocean on a net basis, as you write.

    Max

    • Max,

      I believe it has been pretty well demonstrated that the year-to-year variability of the CO2 concentrations is mainly due to variability in the amount of carbon stored in the land vegetation and perhaps surface soil. The most important factor in that seems to be the tropical rain forests. Variations in the global weather patterns are the driving factor for these effects. As the tropical rain forests don’t permanently store any significant quantities of carbon, these effects average out over a period of a few years.

      I don’t remember references for the above, but the main arguments to support it include isotope ratios and geographic variability of the CO2 concentration.

  137. Web, Chris,

    Sometimes I am not sure if you (and other warmists) are deliberately misleading and deceiving or just incapable of understanding this simple matter. It could be a little of both. It DOESN’T take any net CO2 outgassing from the oceans for the atmospheric CO2 growth to be caused to a significant degree by warming climatic factors – there’s MORE than enough human input to achieve the equilibrium between ocean and atmosphere. Only a fraction of the human input ends up in the atmosphere – the rest goes into reservoirs (the oceans and so on). Again, the fraction ending up in the atmosphere (the annual CO2 increment) is CLEARLY driven by global temperature. In 2011 the concentration growth was 1.79 ppm/year (391.57 – 389.78); the average for the past decade was 2.07 ppm/year (MLO). If the temperature decreases in this decade, the concentration growth will decrease further, despite rapidly growing human emissions.

    Apart from this, there IS a net outgassing from oceans (and other reservoirs) every year from Sept/Oct to May, when the atmospheric CO2 concentrations increment is about 7 – 8 ppm. From May to Sept/Oct, it reverses and the concentration decrement is about 5 – 6 ppm, in only 4 – 5 months! The decrement curve is steeper but shorter than the increment one.

    • My above comment applies also to your message.

      • My message applies to your comment too. The total CO2 growth over a time frame is the sum of the year-to-year variabilities. There’s no other component. Year-to-year variabilities correlate with global temperatures.

      • The variability correlates with weather variability, but that occurs for carbon storages that have no growth trend.

        It’s true that the CO2 content of the oceans also grows when the atmospheric CO2 increases and warms the climate. Thus we have a correlation, but the reason is in the CO2 releases that lead to the growth of all these factors. The annual variability is, however, essentially independent, as it has another mechanism without a trend.

      • EDIM said:

        My message applies to your comment too. The total CO2 growth over a time frame is the sum of the year-to-year variabilities. There’s no other component. Year-to-year variabilities correlate with global temperatures.

        Yes, they correlate with global temperatures because of seasons and the fact that the southern hemisphere has more water surface area than the northern hemisphere. So when summer hits in the southern hemisphere, one sees more CO2 in the atmosphere than when it is summer in the northern hemisphere. As Pekka says, you can add land effects in the same way.

        The end result is that the Mauna Loa measurements develop a yearly ripple on the CO2 profile. This is beyond well understood, and only Edim, Latimer, and many of their fellow skeptics can’t figure out that the ripple does not extend to the overall upward trend, apart from a second-order effect due to the gradual global warming signal.

        I wrote a few blog posts going over this last year, so comment over there if you want to get a total beat-down.
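The "ripple on an upward trend" picture is easy to sketch. The coefficients below are illustrative only, loosely matching the ~2 ppm/yr growth and the seasonal swing of a few ppm discussed above; they are not fitted to the Mauna Loa record.

```python
import math

# Toy decomposition (illustrative coefficients, not fitted to Mauna Loa):
# a slow upward trend plus a seasonal ripple.
def co2_ppm(t_years):
    trend = 390.0 + 2.0 * t_years                    # ~2 ppm/yr growth
    ripple = 3.0 * math.cos(2 * math.pi * t_years)   # ~6 ppm peak-to-trough cycle
    return trend + ripple

# Averaging monthly samples over one year cancels the ripple, leaving the trend
samples = [co2_ppm(m / 12) for m in range(12)]
annual_mean = sum(samples) / 12
```

This is why annual-mean series show a smooth rise even though the monthly record swings several ppm within each year: the seasonal term integrates to zero over a full cycle.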

  138. Tomas Milanovic

    It is annoying to see that many people use the word ergodicity without knowing what it means and how it should be interpreted.
    It is annoying not because it is mostly wrong, but because those readers who are not intimately familiar with these concepts will get confused and ultimately end up farther from understanding rather than nearer.
    I have been playing with the idea of writing a pedagogical post dedicated to ergodicity, but I will take the opportunity to do it in this thread, because there are (too) many references to ergodicity.

    So what is and what is not ergodicity, and why does it matter in physics?

    The ergodic property is a very general mathematical property of measure spaces.
    A measure space can be defined as (X, µ), where X is some set (e.g. the standard Cartesian space R^n) and µ is a measure.
    For purists, I left out the sigma algebra, it is not necessary to understand the rest :)
    Now let’s take some transformation T : X → X (T is a map, a function)
    and require that T preserve the measure µ, i.e. µ(T^-1(A)) = µ(A)

    OK, that’s everything we need in the way of definitions – you need a space X, a measure µ, and a transformation T, and you want this T to preserve the measure.
    The ergodic property of this triplet is simply the statement:
    if for any subset A of X we have T^-1(A) = A, then µ(A) = 1 or 0
    What does that mean in words?
    If there are some subsets A (imagine a set of points) that are left invariant by the transformation T, then their measure (size) is either 0 or the measure of the whole set X (because µ(X) = 1).
    Even more crudely, an ergodic transformation doesn’t get “stuck” in some particular part of the space.

    Now an ergodic transformation has a property which is expressed by the ergodic theorem which is why we are actually interested in ergodicity at all.
    Here we must first define an iteration average and a space average on X for some function F.
    The iteration average of F is Fia = lim (n→∞) (1/n) Sum over k=0..n-1 of F(T^k(x)).
    This rather heavy formula just says that you take some point x in X and make it move by applying T to x, then T to T(x), etc.
    If you take F of each of these points and compute the arithmetic average, you obtain the iteration average.

    The space average is easier :
    Fsa = [1/µ(X)] Integral over the whole space X of F dµ

    Ok and the ergodic theorem says simply that :
    If T is ergodic then for any F, Fia = Fsa almost everywhere.
    Why “almost”? Well, remember that subsets left invariant by an ergodic transformation T have measure 0 or 1, so the exceptional points where equality fails form a set of measure 0. This means that there may be isolated invariant points, but they don’t matter because they are only here and there.
    So the ergodic theorem is saying that if you take the average of F along a trajectory constructed by applying T again and again, starting at some point x0, then this average is equal, in the infinite limit of iterations, to the simple average of F over all points of X.

    And that’s all we need to know as far as the maths go. There are many interesting additional results and consequences but the real maths are not easy and most readers would probably stop following. The point of this short but rigorous introduction was to demonstrate that ergodicity is not some fog that could be interpreted by anybody as it suits his particular view.
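    A minimal numerical sketch of the ergodic theorem above (my illustration, not part of the comment): the logistic map T(x) = 4x(1-x) is a standard example of a map that is ergodic with respect to the arcsine density ρ(x) = 1/(π√(x(1-x))) on [0,1], whose mean is exactly 1/2, so the iteration average of F(x) = x along a single orbit should converge to the space average 1/2. The initial condition and iteration count are arbitrary choices.

```python
# Compare the iteration (time) average of F(x) = x along one orbit of the
# logistic map T(x) = 4x(1-x) with the space average under the invariant
# arcsine density, which is exactly 1/2 by symmetry.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x = 0.123          # almost every initial condition works equally well
n = 1_000_000
total = 0.0
for _ in range(n):
    x = logistic(x)
    total += x

time_average = total / n
space_average = 0.5    # integral of x / (pi*sqrt(x*(1-x))) dx over [0,1]
print(time_average)    # should be close to 0.5
```

    The agreement is only approximate for finite n, which is exactly why the infinite-iteration limit in the theorem matters.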

    Now why does that matter in physics in general and in the atmospheric system in particular?
    Well as we have a rigorous and very general mathematical theory, we can now take some particular cases with physical meaning.
    So here we go :
    X is a cartesian finite dimensional space and its points are states of a dynamical system (each state is defined by N coordinates). X is also called then a phase space.
    µ is a probability (yes a probability is (also) a measure)
    T is the dynamical law that our system obeys
    T(x) are then trajectories (orbits) of the dynamical states in X. Of course “iterating T” means following the system’s states in time.
    Suppose T ergodic (please note and I stress that I wrote suppose).
    It is not a surprise that all this looks like Hamiltonian mechanics because it IS Hamiltonian mechanics.
    It follows that all those who are not familiar with statistical mechanics, and for whom KAM is some Chinese abbreviation, should abstain from talking about ergodicity.
    Well and now we can apply everything we already know from above on this particular case.
    First, there are no subsets of the phase space where the system stays “trapped”. It roams almost everywhere. Besides, one can easily demonstrate the non-intersection theorem, which shows that the system never passes twice through the same state.

    We also know with the ergodic theorem that if we follow the states of the system for an infinite time (remember? infinite iterations of T?) and take the time average of some parameter of the system, then this average will be equal to the (probability weighted) average of this same parameter over the whole phase space.
    This is very interesting because we are mostly interested in the latter while we can experimentally measure only the former.

    What is even more interesting is that it also follows that there exists an invariant PDF (the density of µ) which doesn’t depend on initial conditions. Of course it depends on T (i.e. on the form of the dynamics) but not on the initial conditions.
    And THAT is very rich!
    It means that by taking some (any) initial condition and following the trajectory of the system for a (very) long time, you will obtain empirically ONE PDF, but thanks to ergodicity you know that you don’t need to redo it for the rest of the infinity of initial conditions, because your PDF is the unique one for the whole system.
    Caveat : when I say infinite time, I mean it seriously. You have to observe really for a long time and it would take a whole new post to discuss how long one has to observe to apply the ergodic theorem in practice.

    So now I hope everybody understands how confused a discussion can become when people mix up stochasticity, randomness and ergodicity.

    Last let us make a HUGE qualitative leap and cross from finite dimensional X’s (aka Hamiltonian mechanics) to infinite dimensional X’s (aka field theories).
    From the mathematical side not much changes – the formulation of the measurable set theory doesn’t prescribe any particular X’s, µ’s and T’s.
    But physically everything changes – our points become functions, the measures are based on square-integrable fields, and trajectories can no longer be geometrically visualised.
    Navier-Stokes, and by extension weather and climate, belong to this category.
    Of course one can also talk about ergodicity but it takes considerably more skill and training and no easy analogies to statistical mechanics or thermodynamics work anymore.

    And what has chaos to do with all this?
    Well, chaos theory is interested in a particular category of T’s (remember? T defines the dynamical laws), namely T’s with sensitivity to initial conditions, which represent a very large subset of possible T’s in physics.
    These T’s have an important property – they don’t visit the whole phase space (exercise: show why this is not in contradiction with the ergodic property).
    The reason is intuitively easy – as the orbits diverge exponentially, they would end up at infinity, which is physically impossible.
    That’s why there is (and must be) a physical mechanism which restricts the system to a finite part of the phase space.
    This mechanism is energy dissipation, and the restricted volume where the system must live forever (or at least until its dynamical laws change) is called the attractor.
    So when one studies a chaotic system, one can and must restrict the phase space to the allowable finite subspace which is the attractor.
    Once this is done, and provided the dynamics has the ergodic property (a chaotic system may be, but need not be, ergodic), one can apply the ergodic theorem and study the probabilities of the different states.
    Don’t forget that “states” here happen in the infinite dimensional framework of field theories.
    Of course there are no mysterious “stochastic” perturbations which make the fundamental features of the behaviour of chaotic systems go away – Navier-Stokes is deterministic and chaotic all the way down to the quantum scales.
    I would like to describe now how ergodicity can be used for these particular cases but I am afraid that the post is already too long anyway.

    Unrelated P.S. for David Young
    I agree with 99% of what you write, but it is not true that vorticity (= curl(V)) is conserved for Navier-Stokes.
    It is conserved only for inviscid 2D flows. But as the real world consists of 3D viscous flows, the many amusing toy models of 2D inviscid flows are irrelevant for what we are interested in.

    • Tomas, thank you very much. I will elevate this to its own post

    • Tomas, You may understand this stuff in your own head, but you can’t explain yourself out of a cardboard box. If it takes you that long to explain something that is truly intuitive to anyone that has studied any statistical physics, you won’t gain any admirers.

      Here is ergodicity in a nutshell: If a behavior showed a probability dependence that followed some declining envelope, then if you wanted to see the 10^-8 tail cumulative of that probability, you would want to gather at least 10^8 samples. If you think that is the ergodic behavior but it doesn’t happen, then ergodicity has been violated for some reason.

      Here is another case that is beyond intuitive for anybody who has used a scientific calculator. There is a random() function key. If the algorithm that underlies this function does not eventually give you a good uniform sampling of all the random numbers between 0 and 1, to the number of significant digits, then it is not ergodic.

      Now, if you don’t like the way I phrased this and you have problems with the way I stated it, you had better give me a different name for this kind of property, because it is needed. A practical definition is needed to explain to a software engineer how to evaluate a random number generator, to an experimental physicist how many experimental measurements to take, or to a statistician how many Monte Carlo runs to execute. As far as I can tell, there is no other word that fills this role other than ergodic.

      The ultimate problem is that the definition of an ergodic process doesn’t help much when we need to determine whether or not a particular process is ergodic. You need examples, and that is just the way it is. Another good one is that a stationary random process is ergodic if any average we want to find can be calculated from any sample function of the process by a long enough average over time.
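      The random-number-generator check described above can be made concrete (my sketch; the sample size and bin count are arbitrary choices):

```python
# Practical check that Python's random() "visits" all of [0, 1) roughly
# uniformly: bin 100,000 draws into 10 equal bins and verify that no bin
# deviates badly from the expected count of n/10.  A generator that left
# some region of [0, 1) starved would fail this check.
import random

random.seed(42)            # fixed seed so the check is repeatable
n, bins = 100_000, 10
counts = [0] * bins
for _ in range(n):
    counts[int(random.random() * bins)] += 1

expected = n / bins
max_rel_err = max(abs(c - expected) / expected for c in counts)
print(max_rel_err)         # small for a good generator
```

      The same pattern answers the experimentalist’s and the statistician’s questions: keep increasing n until the empirical distribution stops changing at the resolution you care about.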

      The practical consequences of this are whether we will ever see that huge natural-variation temperature “burp” that will completely swamp out the anthropogenic one we are experiencing right now. You can argue about this in statistical terms, but you can also think in terms of energy barriers and the likelihood of surpassing them via internal spontaneous means. All of these chaotic systems you are describing do not change the internal energy of the system and will only increase entropy. The only possibility left is if the variability kicks in a tipping point and changes the albedo (or something similar) to allow more energy into the system, as Pekka is patiently explaining.

      I have plenty of other examples but before I present them, I want to know if this is anything more than an academic exercise that Tomas is working out.

  139. Vaughan Pratt & WebHubTelescope

    I don’t respond to your personal attacks.

    However, I offer you below my interpretation of the global mean temperature data so that you can attack it instead (play the ball, not the man).

    Here is the global mean temperature (GMT) data => http://bit.ly/Aei4Nd

    The most important observation in the above data is that the upper GMT boundary line passes through most of the GMT peaks, the lower GMT boundary line passes through most of the GMT valleys, and these lines are parallel. Also, the line that bisects the vertical space between the two GMT boundary lines is nearly identical to the long-term global warming trend line of 0.06 deg C per decade for the data from 1880 to 2010. This result indicates that, since the GMT record began, the GMT has behaved like a stable pendulum, with the two GMT boundary lines (0.5 deg C apart) as the end points of the pendulum’s swings and the long-term global warming trend line of 0.06 deg C per decade as the pendulum’s neutral position.

    From the above graph, the GMT has a single pattern that consists of a warming rate of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years.
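    The claimed pattern can be written down as a toy model (my sketch; the sinusoidal form, baseline and phase are illustrative assumptions, not the commenter’s fit — a 0.5 deg C swing every 30 years is read here as a 60-year full cycle):

```python
# Toy version of the claimed pattern: a linear trend of 0.06 C/decade plus
# a ~60-year oscillation with a 0.5 C peak-to-trough swing (amplitude
# 0.25 C).  Baseline and phase are made-up values for illustration only.
import math

def gmt_anomaly(year, baseline=-0.4, phase_year=1880):
    trend = 0.006 * (year - 1880)                                # 0.06 C/decade
    cycle = 0.25 * math.sin(2 * math.pi * (year - phase_year) / 60.0)
    return baseline + trend + cycle

for y in (1880, 1895, 1940, 2000):
    print(y, round(gmt_anomaly(y), 3))
```

    Over any whole number of 60-year cycles the oscillation cancels and only the 0.06 deg C/decade trend remains, which is the “pendulum’s neutral position” picture in the comment above.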

    In the above graph, a shift in climate to an accelerated global warming would have been indicated if the upper GMT boundary had been a curve with increasing positive slope with increasing years. As this has not been the case, there is no evidence of human emission of CO2 affecting the GMT.

    As a result, there is no evidence of Anthropogenic Global Warming (AGW) so far.

    Future evidence of AGW would be if the GMT lies in the red region in the following graph.

    http://bit.ly/oembwR

    • Instead of the above interpretation, the IPCC and WebHubTelescope (http://bit.ly/wFTjQL) ignored the 1880s & 1940s GMT peaks, fit their “hyperbola”, and were shown to be wrong by the current start of global cooling ( http://bit.ly/nz6PFx ). Here is what an IPCC insider wrote about the validity of their interpretation:


      “Indeed, in the verification period, the biggest “miss” was an apparently very warm year in the late 19th century that we did not get right at all.”
      http://bit.ly/r3npAd

      AGW is probably the greatest scientific blunder (I am being extremely fair here) and scare mongering in the history of science!

      What the climate science community must do is predict the GMT trend for the next two decades, demonstrate to the world that that prediction matches observation and then only advocate policy. Before application of a scientific theory, it must first be validated.

      • Advice from Richard Feynman

        We’ve learned from experience that the truth will come out. Other
        experimenters will repeat your experiment and find out whether you
        were wrong or right. Nature’s phenomena will agree or they’ll
        disagree with your theory. And, although you may gain some
        temporary fame and excitement, you will not gain a good reputation
        as a scientist if you haven’t tried to be very careful in this kind
        of work. And it’s this type of integrity, this kind of care not to
        fool yourself, that is missing to a large extent in much of the
        research in cargo cult science.

        http://bit.ly/CHGmZ

  140. Max & Chris

    Another piece of evidence that Hansen et al. 1988 were referring to CO2 emissions rather than concentration is that the article specifies an emission growth rate of 1.5% for the 1970s.

    http://bit.ly/mye5j0

    In contrast, the current exponential growth rate of CO2 concentration is only about 0.5% per year, and for the 1970s this value was even lower.

    Clearly, the 1.5% must refer to the CO2 emission, not concentration.
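    The ~0.5% concentration figure quoted above can be reproduced from rough numbers (my ballpark inputs, not the commenter’s: an annual rise of about 2 ppm on a background of roughly 390 ppm around the time of this discussion):

```python
# Rough check of the quoted ~0.5 %/yr growth rate of CO2 concentration,
# using ballpark values: an annual rise of about 2 ppm on a background
# concentration of roughly 390 ppm.  Both inputs are approximate.
annual_increase_ppm = 2.0
concentration_ppm = 390.0
growth_rate_percent = 100.0 * annual_increase_ppm / concentration_ppm
print(growth_rate_percent)    # roughly 0.5 % per year
```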

    • Chris Ho-Stuart

      Emissions were used as a basis for estimating the atmospheric composition changes that were used in the scenarios; and they did pretty well in that role.

      As it turns out, the scenario B defined on that basis ended up being pretty close to reality, though a bit too high. The scenario, if you read the paper, was given as atmospheric compositions. Growth rates for atmospheric composition were estimated using emission growth rates, since there is a strong correlation. But that’s not the prediction of the paper.

      The prediction of the paper was for temperatures, calculated from three different atmospheric composition scenarios.

  141. Chief Hydrologist

    ‘Now the chief is including induced features.

    If a turbulent flow started to slosh around and open up some door (albedo) which then changed the influx of energy, then that is an induced feature which is only coincidentally related to a chaotic causality. That is Pekka’s tipping point or a stochastic resonance. The review paper by Gammaitoni 1998 demonstrates this for both stochastic and chaotic sources. The chaotic version is just included for completeness because it could happen where the deterministic behavior is not removed by stochastic disturbances.’ Webby

    From Sornette 2009 – Dragon-Kings, Black Swans and the Prediction of Crises

    ‘We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point.’

    I ordinarily talk about dynamical complexity, but the terminology is unimportant. The reality is obfuscated by these terms – stochastic, ergodic, deterministic, etc. – which sometimes seem to be randomly applied and discussed in the abstract.

    The non-linearity in the Earth system occurs as a result of interactions between sun, ocean, atmosphere, biology, ice and cloud. It is not just about turbulent flow. It is a complex system – it has been described as tremendous energies cascading through powerful mechanisms. The non-linearity emerges in abruptly changing rainfall regimes, in glacials and interglacials, in the sudden changes in temperature trajectories in the instrumental record. None of these things are random – they all have cause and effect – they are all deterministic. So when someone says that determinism is removed by stochasticism – I throw my hands up in horror.

    I don’t think Pekka is quibbling about the physical reality of the planet – just whether this can be usefully defined as chaos in the Lorenzian sense. Perhaps the most useful approach is not to mention chaos at all – it seems just to confuse the children. Do as Tsonis did – look at real-world data and call it a new dynamical mechanism for major climate shifts.

    Robert I Ellison
    Chief Hydrologist

  142. Is there any validity to the thought that the flat periods are times when heat is being mixed into deeper ocean waters, and hence doesn’t show up at the ocean surface or in the atmosphere, while during periods of rising temperature deep mixing is less and hence we see more surface temperature rise?

    • As I understand the matter, this is the most credible notion, in the light of basic physics and limited available empirical data. The main issues are with measuring the way heat and water flows through the ocean — especially the deeper ocean.

      • That’s convenient for you but is it true?

      • I have no stake in this at all; convenience doesn’t come into it. The only sensible tools we can use for approaching a truer understanding of the natural world are the tools of science.

        A couple of recent papers are relevant.

        (1) Using physics: Palmer et al. (2011) Importance of the deep ocean for estimating decadal changes in Earth’s radiation balance, in GRL Vol 38, L13707, doi:10.1029/2011GL047835
        (2) Using observations: von Schuckmann et al. (2011) How well can we derive Global Ocean Indicators from Argo data?, in Ocean Science Discussions, Vol 8, pp 999-1024, doi:10.5194/osd-8-999-2011

        The latter is from a new journal which is, I hope, an indication of where scientific journals are heading: open access, published under a Creative Commons license, with interactive reviewing. Very cool. Here’s a link to the paper I’ve cited: von Schuckmann et al. (2011); from there you can easily read all kinds of other interesting papers on ocean science.

        The amount we don’t know about the ocean is enormous, so there’s lots of interest in this. Be that as it may, the thought William mentions does seem to be one of the best-supported hypotheses around so far, both empirically and theoretically. But stay tuned.

  143. Climate turning points => http://bit.ly/GJwbfR

    Do you agree?

  144. I am very interested in your last paragraph. “IMO, the standard 1D energy balance model of the Earth’s climate system will provide little in the way of further insights; rather we need to bring additional physics and theory (e.g. entropy and the 2nd law) into the simple models, and explore the complexity of coupled nonlinear climate system characterized by spatiotemporal chaos.”
    The energy balance model tells us whether the atmosphere, surface and oceans are gaining or losing internal energy. There is little doubt that CO2 has added roughly 1.6 watts/m2 to the climate system since 1950, or about 1500×10^21 joules (Murphy et al. 2009). This number is independent of the water vapor feedback and requires knowledge only of how much CO2 has been added, where it ended up, and the temperature profile of the atmosphere. This added energy raises temperatures, melts ice and evaporates water. It seems that it also might alter circulation. Of course, the IPCC acknowledges that it is uncertain what the climate system will do with this energy. Their climate sensitivity varies from values small enough to reduce concerns about a doubling of CO2 to values that are quite scary. However, it seems clear that the basic radiation physics says you need to find a persistent negative feedback to balance the 1.6 watts/m2 to avoid the accumulation of large amounts of energy into the climate system. (Dr. Lindzen has tried to find that feedback, but his paper was strongly criticized and his submitted rewrite was rejected. Based upon what was available on the web, I guess that the reviewers felt his revised draft did not actually address the objections raised to the original paper.)
    I would like to know what new physics you want to see included in the discussion and what is the role of the second law? Isn’t this primarily a 1st Law problem? Do you expect the ARGO floats to find the energy necessary to balance the books? Or do we need to go deeper still? Is your complaint that we do not have sufficient physics to know how climate will use the energy?
    Regardless, the magnitude of the climate-externality will not be known with sufficient accuracy to permit classical cost-benefit analysis. Yet we know that about half of emitted CO2 stays in the air for hundreds of years and that its tendency is to warm. Are we not most of the way to agreement that it is time to start “no regrets” policies? The false dichotomy in your slide show in GB was to oppose successful avoiding of the worst aspects of warming to the collapse of the economy in trying to avoid the worst aspects. There are many way stations between these alternatives.
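    The comment’s conversion of 1.6 W/m2 since 1950 into roughly 1500×10^21 J checks out as back-of-envelope arithmetic (my sketch; treating the forcing as constant at full strength over the whole ~60-year period is a deliberate simplification, since in reality the forcing ramped up):

```python
# Back-of-envelope check: a forcing of 1.6 W/m^2 over the Earth's whole
# surface (~5.1e14 m^2) for the ~60 years from 1950 to 2010.  Constant
# forcing over the period is an upper-bound simplification.
earth_area_m2 = 5.1e14           # total surface area of the Earth
forcing_w_per_m2 = 1.6
seconds = 60 * 3.156e7           # 60 years, ~3.156e7 seconds per year

energy_joules = forcing_w_per_m2 * earth_area_m2 * seconds
print(energy_joules / 1e21)      # in units of 10^21 J; roughly 1500
```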

  145. People seem to assume that the only way to falsify or “improve” the certainty of the models is to wait for them to be validated. We are told that’s a 20- or 30-year period. We always find ways in science to circumvent such limitations. Waiting another 15 years with bated breath seems like a ridiculous option. There must be many more provable or disprovable elements of these theories that we can use.

    Energy balance is one. It seems to me that the oceans haven’t warmed as much as would be needed to store the energy, because they have hardly warmed at all. That’s consistent with the atmosphere, and from what I remember there is a rough calculation that ocean 0-700m temperatures should rise by about half the rise in air temperatures. Well, 0/2 = 0, so that’s roughly in alignment with the charts I have seen. So it seems to me that this is a serious issue with the theory. Could the energy be tied up in the potential energy of state changes in the ocean or atmosphere? I saw an article suggesting that the lower humidity observed in the upper atmosphere is “contracting” the thermosphere, and perhaps other high-altitude clouds, reducing the thermal blanket. I saw a published paper that said this would allow more energy to escape to space. Could that be happening?

    Each of the models, and the theory, makes assumptions about the relationships between various key parameters of the environment: temperature, relative humidity at various altitudes and latitudes, cloud types and cover, ice coverage, solar radiation flux of various types. Can’t experiments be run in a closed environment to simulate these behaviors and see if they react that way in the real world?

    Lastly, I am confused by the longer-term trends and unknowns still in the system. The IPCC and the books I’ve read suggest many aspects of our understanding are still poor. How would eruptions, cracks or leakage from lower magma into the ocean affect ocean temperatures and circulation? Could a giant undersea eruption put enough heat into the ocean that, over a hundred years, temperatures became warmer in the ocean and subsequently in the air and on land? Do we really know the magnitude and impact of undersea events, temperatures and flows well enough to understand how this all works? How do we know the solar influence is so minimal? The correlations I’ve seen between cosmic rays and historic temperature seem amazing to me. There seems to be no theoretical understanding of how there could be such a relationship at the level we’ve seen. It implies there is something missing in our physical understanding of the relationship between the sun’s radiation and our planet’s ecosystem. As for the 1,000-year cycle that, from what I’ve read, is now established over the last 3,000-8,000 years: I have seen no explanation for that cycle. It could be of ocean or solar origin. Either way it has a huge impact on climate over long periods.

    Without understanding the above, how can we be certain of the impact of CO2 versus these other things? The analysis that “ascribes” this heating to CO2 depends intimately on our ability to say these other things are NOT significant; but since we have such a low understanding of those things, and so many unexplained phenomena, it becomes impossible to say the “science is settled” or that there is 95% certainty of this or that.
