Climate model simulations of the AMO

by Judith Curry

What are the implications of climate model deficiencies in simulating multi-decadal natural internal variability for IPCC’s climate change detection and attribution arguments?

Two recent papers raise some significant concerns in this regard:

The Atlantic Multidecadal Oscillation in twentieth century climate simulations: uneven progress from CMIP3 to CMIP5

Alfredo Ruiz-Barradas, Sumant Nigam, Argyro Kavvada

Abstract (excerpts) Decadal variability in the climate system from the Atlantic Multidecadal Oscillation (AMO) is one of the major sources of variability at this temporal scale that climate models must properly incorporate because of its climate impact. The current analysis of historical simulations of the twentieth century climate from models participating in the CMIP3 and CMIP5 projects assesses how these models portray the observed spatiotemporal features of the sea surface temperature (SST) and precipitation anomalies associated with the AMO. The structure and evolution of the SST anomalies of the AMO have not progressed consistently from the CMIP3 to the CMIP5 models. While the characteristic period of the AMO (smoothed with a binomial filter applied fifty times) is underestimated by three of the models, the e-folding time of the autocorrelations shows that all models underestimate the 44-year value from observations by almost 50 %. Variability of the AMO in the 10–20/70–80 year ranges is overestimated/underestimated in the models and the variability in the 10–20 year range increases in three of the models from the CMIP3 to the CMIP5 versions. Spatial variability and correlation of the AMO regressed precipitation and SST anomalies in summer and fall indicate that models are not up to the task of simulating the AMO impact on the hydroclimate over the neighboring continents.
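(As an aside for readers who want a feel for the diagnostics being described: the smoothing of an AMO index with a repeatedly applied binomial filter and the e-folding time of its autocorrelation can be sketched in a few lines of Python. This is only an illustrative sketch with a placeholder series and my own choice of filter passes, not the authors’ code.)

    import numpy as np

    def binomial_smooth(x, passes=50):
        """Apply the 1-2-1 binomial filter repeatedly, reflecting at the ends."""
        y = np.asarray(x, dtype=float)
        for _ in range(passes):
            padded = np.concatenate(([y[0]], y, [y[-1]]))
            y = 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
        return y

    def efolding_time(x):
        """First lag (in years) at which the autocorrelation drops below 1/e."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        var = np.sum(x * x)
        for lag in range(1, len(x)):
            r = np.sum(x[:-lag] * x[lag:]) / var
            if r < 1.0 / np.e:
                return lag
        return len(x)

    # Placeholder: an annual-mean North Atlantic SST anomaly series would go here
    amo_index = np.random.randn(140)
    smoothed = binomial_smooth(amo_index, passes=50)
    print("e-folding time (years):", efolding_time(smoothed))

(Ruiz-Barradas et al. report an observed e-folding time of about 44 years, which the models they examined underestimate by almost 50%.)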

Published in Climate Dynamics [link].

Excerpts from the Introduction provide some background:

Decadal climate prediction has taken a prominent role for the first time in the experiments of the Coupled Model Intercomparison Project Phase 5 (CMIP5). The need for useful decadal predictions has been made not only from scientific papers, but also from the impact of climate-related events like the current melting of the Greenland glaciers, the ongoing drought in northern Mexico and central US, as well as past decade-long droughts over the same region in the recent twentieth century and over western Africa. Therefore, if one aspires to have reliable decadal predictions, climate models have to properly incorporate the processes that give rise to decadal variability in specific components of the climate system, in addition to the mechanisms through which these processes impact the surface climate affecting human societies.

Phenomena with defined decadal variability that climate models must properly include are the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Decadal control of hydroclimate from the AMO over North America and Africa is one of the main reasons to worry about having this phenomenon properly incorporated in climate models. Multi-year, summer and fall droughts over North America and Africa have been observationally linked to decadal SST variability in the Atlantic.

Excerpts from Concluding Remarks:

Decadal variability in the climate system from the AMO is one of the major sources of variability at this temporal scale that climate models must aim to properly incorporate because of its surface climate impact on the neighboring continents. If climate models do not incorporate the mechanisms associated with the generation of the AMO (or any other source of decadal variability like the PDO) and in turn incorporate or enhance variability at other frequencies, then the models’ ability to simulate and predict at decadal time scales will be compromised and so will the way they transmit this variability to the surface climate affecting human societies.

The key point from the paper is this:

Variability of the AMO in the 10–20/70–80 year ranges is overestimated/underestimated in the models and the variability in the 10–20 year range increases in three of the models from the CMIP3 to the CMIP5 versions.

This is more completely stated in the text (note:  Fig 3 is the key figure IMO, which I couldn’t figure out how to extract):

It is clear that both sets of CMIP3 and CMIP5 models underestimate low-frequency variability in the 70–80 and 30–40 year ranges while overestimating variability in the 10–20 year range. Variability in the higher 10–20 year range increases from CMIP3 to CMIP5 in three of the models, surpassing the variability in this range from observations.
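(To make the band comparison concrete: “variability in the 10–20 and 70–80 year ranges” amounts to integrating a power spectrum of the AMO index over those period bands. The sketch below does this with a crude periodogram and a placeholder series; the spectral estimation in the paper is more careful, so treat this as a schematic only.)

    import numpy as np

    def band_variance(x, dt, period_band):
        """Variance contributed by periods (in years) inside period_band = (pmin, pmax)."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        freqs = np.fft.rfftfreq(len(x), d=dt)              # cycles per year
        power = np.abs(np.fft.rfft(x)) ** 2 / len(x) ** 2  # crude one-sided periodogram
        with np.errstate(divide="ignore"):
            periods = np.where(freqs > 0, 1.0 / freqs, np.inf)
        pmin, pmax = period_band
        mask = (periods >= pmin) & (periods <= pmax)
        return 2.0 * power[mask].sum()

    amo_index = np.random.randn(140)   # placeholder for an observed or modelled AMO index
    print("10-20 yr variance:", band_variance(amo_index, 1.0, (10, 20)))
    print("70-80 yr variance:", band_variance(amo_index, 1.0, (70, 80)))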

Implications for detection and attribution of climate change

While the Ruiz-Barradas et al. paper focuses on the implications related to regional hydroclimates, I think perhaps a more significant implication of their analysis concerns the fitness-for-purpose of climate models for detection and attribution of climate change over the last century. IPCC AR4 defines detection and attribution in the following way:

‘Detection’ is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change. [T]he methods used to identify change in observations are based on the expected responses to external forcing, either from physical understanding or as simulated by climate models. An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.

[W]hen fingerprints from Atmosphere-Ocean General Circulation Models (AOGCMs) are used, averaging over an ensemble of coupled model simulations helps separate the model’s response to forcing from its simulated internal variability.

Detection does not imply attribution of the detected change to the assumed cause. ‘Attribution’ of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence. As noted in the SAR and the TAR, unequivocal attribution would require controlled experimentation with the climate system. Since that is not possible, in practice attribution of anthropogenic climate change is understood to mean demonstration that a detected change is ‘consistent with the estimated responses to the given combination of anthropogenic and natural forcing’ and ‘not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings’.

Both detection and attribution require knowledge of the internal climate variability on the time scales considered, usually decades or longer. The residual variability that remains in instrumental observations after the estimated effects of external forcing have been removed is sometimes used to estimate internal variability. However, these estimates are uncertain because the instrumental record is too short to give a well-constrained estimate of internal variability, and because of uncertainties in the forcings and the estimated responses. Thus, internal climate variability is usually estimated from long control simulations from coupled climate models. Subsequently, an assessment is usually made of the consistency between the residual variability referred to above and the model-based estimates of internal variability; analyses that yield implausibly large residuals are not considered credible (for example, this might happen if an important forcing is missing, or if the internal variability from the model is too small). Confidence is further increased by systematic intercomparison of the ability of models to simulate the various modes of observed variability, by comparisons between variability in observations and climate model data and by comparisons between proxy reconstructions and climate simulations of the last millennium.

Model and forcing uncertainties are important considerations in attribution research. Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.

The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.
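(As an aside on the mechanics of the argument quoted above: the step of comparing the ‘residual variability’ of the observations with model-based estimates of internal variability can be put in a few lines. The sketch below uses made-up placeholder series and is only a schematic of the reasoning, not a description of any particular detection and attribution study.)

    import numpy as np

    # Illustrative placeholders: annual global-mean temperature anomalies
    obs = np.linspace(0, 0.8, 150) + 0.2 * np.random.randn(150)   # observations
    forced = np.linspace(0, 0.8, 150)      # estimated forced response (e.g. ensemble mean)
    control = 0.15 * np.random.randn(500)  # long unforced control simulation

    residual = obs - forced                # what is left after removing the forced signal
    ratio = np.var(residual, ddof=1) / np.var(control, ddof=1)
    print("residual-to-control variance ratio:", round(ratio, 2))
    # A ratio well above 1 would suggest the model's internal variability is too small
    # (or that an important forcing is missing), which is the caveat flagged in the quoted text.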

In summary, the IPCC’s detection and attribution arguments do not seem to account for the substantial underestimation by climate models of natural internal variability on the time scales of the main multi-decadal variability of the AMO and PDO; this general result was known at the time of AR4 (see Figure 9.7). I have discussed this issue at length in previous posts: Overconfidence in IPCC’s detection and attribution: Parts I, II, III, IV.

Further, the flat blade of the hockey stick, which arguably under-represents variability on multi-decadal time scales, is used to lend credence to the climate model simulations.

Von Storch on climate models and natural internal variability

Hans von Storch et al. have a recent paper, Can climate models explain the recent stagnation in global warming? From the abstract:

In recent years, the increase in near-surface global annual mean temperatures has emerged as considerably smaller than many had expected. We investigate whether this can be explained by contemporary climate change scenarios. In contrast to earlier analyses for a ten-year period that indicated consistency between models and observations at the 5% confidence level, we find that the continued warming stagnation over fifteen years, from 1998–2012, is no longer consistent with model projections even at the 2% confidence level. Of the possible causes of the inconsistency, the underestimation of internal natural climate variability on decadal time scales is a plausible candidate, but the influence of unaccounted external forcing factors or an overestimation of the model sensitivity to elevated greenhouse gas concentrations cannot be ruled out. The first cause would have little impact on the expectations of longer term anthropogenic climate change, but the second and particularly the third would.
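(A minimal sketch of the kind of consistency test the abstract describes: compare the observed 1998–2012 trend with the distribution of 15-year trends in an ensemble of model runs and ask what fraction of the ensemble lies at or below the observation. The series below are placeholders; von Storch et al. of course use the actual observational records and CMIP scenario runs.)

    import numpy as np

    def trend(series, years):
        """Least-squares linear trend, in units per year."""
        return np.polyfit(years, series, 1)[0]

    years = np.arange(1998, 2013)                                          # 15-year window
    obs = 0.005 * (years - 1998) + 0.1 * np.random.randn(len(years))       # placeholder obs
    ensemble = [0.02 * (years - 1998) + 0.1 * np.random.randn(len(years))  # placeholder runs
                for _ in range(100)]

    obs_trend = trend(obs, years)
    model_trends = np.array([trend(run, years) for run in ensemble])
    frac_at_or_below = np.mean(model_trends <= obs_trend)
    print("fraction of model trends at or below observed:", frac_at_or_below)
    # If this fraction falls below 0.02, the observations are inconsistent with the
    # ensemble at the 2% level in the sense used in the paper.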

From the main text:

What do these inconsistencies imply for the utility of climate projections of anthropogenic climate change? Three possible explanations of the inconsistencies can be suggested: 1) the models underestimate the internal natural climate variability; 2) the climate models fail to include important external forcing processes in addition to anthropogenic forcing; or 3) the climate model sensitivities to external anthropogenic forcing are too high.

The first explanation is simple and plausible. Natural climate variability is an inevitable consequence of a slow system (climate) interacting with a fast system (weather) (10). The forcing of the slow system by the (white noise) low-frequency components of the fast system produces a “Brownian motion” of the slow system, represented by a red variance spectrum – in qualitative agreement with observations. However, the details of the response depend strongly on the internal dynamics of the slow system in the time scale range of interest – in the present case, on decadal time scales. It is long known, from successive reports of the Intergovernmental Panel on Climate Change(4), that contemporary global climate models have only limited success in simulating many such processes, ranging from the variability of the ocean circulation, ENSO events, various coupled ocean-atmosphere oscillation regimes, to changes in sea ice, land surface, atmospheric chemistry and the biosphere. The inability to simulate the statistical internal climate variability may have been artificially compensated in the past by tuning the models to prescribed external forcings, such as volcanic eruptions and tropospheric aerosols.

This would explain why simulations with historical forcing by different GCMs tend to be very similar and follow closely the observed record. This artificial “inflation” of forced variability at the expense of unpredictable natural variability works, however, only in the period of tuning, and no longer in the post-tuning phase since about 2000. The net effect of such a procedure is an underestimation of natural variability and an overestimation of the response to forced variability.
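(The ‘slow system driven by fast weather noise’ argument in the first excerpt is essentially Hasselmann’s stochastic climate model: a damped slow variable forced by white noise has a red spectrum. A toy version, with an arbitrary damping time scale, makes the point:)

    import numpy as np

    def ar1_red_noise(n_years, damping_years=10.0, noise_std=1.0, seed=0):
        """Integrate dT/dt = -T/tau + white noise: a toy slow-ocean/fast-weather model."""
        rng = np.random.default_rng(seed)
        phi = np.exp(-1.0 / damping_years)    # year-to-year persistence
        T = np.zeros(n_years)
        for t in range(1, n_years):
            T[t] = phi * T[t - 1] + noise_std * rng.standard_normal()
        return T

    T = ar1_red_noise(2000)
    spectrum = np.abs(np.fft.rfft(T - T.mean())) ** 2
    # Low frequencies carry far more variance than high ones: a 'red' spectrum,
    # qualitatively like observed decadal-to-multidecadal variability.
    print("variance ratio, lowest vs highest ten resolved frequencies:",
          round(spectrum[1:11].sum() / spectrum[-10:].sum(), 1))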

Fitness for purpose ?

While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales. In view of the climate model underestimation of natural internal variability on multi-decadal time scales and failure to simulate the recent 15+ years ‘pause’, the issue of fitness for purpose of climate models for detection and attribution on these time scales should be seriously questioned. And these deficiencies should be included in the ‘expert judgment’ on the confidence levels associated with the IPCC’s statements on attribution.

343 responses to “Climate model simulations of the AMO”

  1. Are we there yet, Judy?
    ============

    • Where is there?
      The models are off or CO2 has no effect?

      We will never get the initial conditions for any model to exactly match the analog model, so a model matching the current temperature trend is neither a necessary nor a sufficient condition for falsifying any model.

    • David Springer

      So the models aren’t fit for the purpose of prediction on decadal time scales. What’s that leave them fit for and how is that fitness, if any, supposed to be validated?

      The pause killed the credibility of the models and the credibility of the model makers and model cheerleaders along with it. I love being right. Kudos to me.

    • Waal, shore! Dincha read that ” key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.” What more could you want?

      Unless your appetite for unalloyed hooey is impaired.

  2. This seems to validate the idea in an earlier post that the dog has a long leash and CO2 forcing is a much smaller effect than currently touted.

  3. While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretion of these results

    Kudos.

    • Kudos? What happened to “evidence”? Don’t you need no “evidence” this time? What about tribalism? Is that it-just “Kudos”? What happened to the smarminess? Now post that picture, again.

      • Joshua

        Strangely enough, we seem to agree.

        I also liked this post (Barrados et al., Von Storch, etc.), and, in particular, Judith’s summary:

        While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales. In view of the climate model underestimation of natural internal variability on multi-decadal time scales and failure to simulate the recent 15+ years ‘pause’, the issue of fitness for purpose of climate models for detection and attribution on these time scales should be seriously questioned. And these deficiencies should be included in the ‘expert judgment’ on the confidence levels associated with the IPCC’s statements on attribution.

        Sounds like the “uncertainty monster” in action here. Let’s see how IPCC decides to “frame” this.

        Max

      • I agree with Joshua (I usually do) but I prefer the whole quote that Max has provided. It places everything in its correct context: The models are simply not adequate for prediction purposes nor for any form of advocacy of policies.

        +1

      • David Springer

        The comment with “that picture” was deleted.

      • Joshie has Peter Davies fooled. How do you agree with joshie’s kudos cherry-picked out of context quote, and prefer the quote in context, at the same time? Don’t you see that joshie is only interested in defending the models? Not that he has a clue what a model is. It just suits his agenda.

    • “kudos”
      So that’s it Joshua? You find the one sentence you like and forget the rest? Your tunnel vision, motivated thought processes, and deeply ingrained biases do not serve you well, to say the very least. Maybe someday you’ll see this, but as things stand you’re like a hometown sports fan, blindly rooting for laundry.

      • She had been a slave down South and she remembered “having seen the troops marching—-a mess o’ blue, den a mess o’ gray.” “What,” my mother asked her once, “were they fighting about?” “Dat,” said Mrs. Robertson, “Ah don’t know.”

        Words of wisdom from the Thurber family laundry lady.
        ===================

      • Well, that was bait, but I’ve run out of beer and have to go home. Climate warriors mess here and mess there, responding to edicts from above and from heartfelt beliefs; hoi polloi, about whom it is all about, gazes on obliviously. I prefer the analogy with the Russian adventure in Afghanistan with stats seeking missiles dropping helicopter articles like can’t flies. Note, I particularly like ‘helicopter articles’.
        =====================

      • Oops, found a half empty in an attempt to regain the natural, but very difficult, terrain.
        ================

      • You find the one sentence you like and forget the rest?

        Actually, I “liked” the entire post. Seemed quite reasonable to me.

        But that part in particular I thought noteworthy because it is more focused on the debate about the debate, and that is my primary focus here (I’m interested in the science itself also, but it isn’t something I’m as interested in commenting on). It is good to see Judith point out the fallacious reasoning often seen in the “skept-o-sphere.” I wish she’d do it more often. I suspect it would enhance her bridge-building skillz. Hence the kudos.

      • David Springer

        “Joshua” is a “joke”. “Give” that some “kudos” my “brothers”.

        “Rolling” on the “floor” “laughing”.

    • If the issue is “the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales,” then NOT being fit for the purpose sure looks, smells, tastes, and quacks like failure.

      • In ‘Climate Science’ the term ‘falsifiable’ means something quite different from all other scientific fields.
        Climate scientists can generate a model, find that it does not match reality, but as the people who constructed it feel good about their achievement in building the model in the first place, not matching reality isn’t important and isn’t really what the whole thing is about anyway.

      • Fitness-for-purpose begs the question “what is the purpose?”.
        If all models are wrong but some are useful, then continued funding is clear evidence of a model being useful, if only to the people who created it.

    • Well and good, but then you have to accept the flip side of that same 45 – that all the hooting and hollering in the ’90s about the world being on fire is equally unsupported.

    • Amazing – what just one word can inspire.

      Kind of like a face that launches a thousand ships.

      Powerful obsession is powerful.

      • You are confusing obsession with rejection/revulsion. We don’t like you. What do you call your need to bombard comment threads on everyfreaking thing? You are hogging up space here. Check yourself. And don’t pretend that it is not noteworthy, shocking, worthy of comment, that you have actually uttered one word of approval towards Judith. You are incessantly barking at her heels. For your “evidence”, see last few years of Climate Etc. comments.

    • Steven Mosher

      +1

      see that didnt hurt.

    • It seem Joshua is willing to kiss his sister with this article. Oh, how the mighty have fallen. Joshua is thrilled that this doesn’t disprove AGW, it only disproves the ridiculous confidence levels of the “expert” judgement.

    • I agree, Joshua. The GCM’s are not false per se, but it is becoming quite obvious they are not fit for detection/attribution, especially not prediction.

      I appreciate seeing a positive comment from you as well.

  4. Fig 3 is the key figure IMO, which I couldn’t figure out how to extract

    Here:

    http://plazamoyua.files.wordpress.com/2013/03/fig-3-ruiz-barradas.png

  5. So what do we know? Can we pick out an anthropogenic signal from the mix of observed and modeled data? What is the confidence level that we have included all forcings and natural cycles? How well do we understand AMO, PDO, and ENSO? Can we actually have any confidence in our ability to forecast future climate? My perception is that we know that a doubling of CO2 can raise the global temperature by 1d C, ceteris paribus, and little else with much confidence.

    • Eric, you write “My perception is that we know that a doubling of CO2 can raise the global temperature by 1d C, ceteris paribus, ”

      No we don’t. One cannot do controlled experiments on the earth’s atmosphere, so we cannot measure climate sensitivity. Any estimate like your 1d C for a doubling of CO2 is little more than a guess. Such little empirical data as we have gives a strong indication that the climate sensitivity of CO2, however defined, is indistinguishable from zero.

      • Jim,
        I understand what you are saying. But doesn’t a doubling of CO2 result in an increased radiative forcing of 3.7 watts per square meter, which, with everything else held equal, would create approximately 1d C of warming? I understand that this is theoretical as we cannot observe this under a controlled experiment, but is based on accepted and proven physical properties.
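        (For concreteness, the back-of-envelope version of that 3.7 W/m2 to roughly 1 C step is the no-feedback Planck response: warm the effective emitting temperature until outgoing radiation rises by 3.7 W/m2. A hedged sketch, assuming the usual effective emission temperature of about 255 K:)

            SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
            T_EMIT = 255.0          # assumed effective emission temperature of Earth, K
            DELTA_F = 3.7           # canonical forcing for doubled CO2, W m^-2

            planck_feedback = 4.0 * SIGMA * T_EMIT ** 3   # about 3.8 W m^-2 K^-1
            delta_T = DELTA_F / planck_feedback           # no-feedback warming, ~1 K
            print(round(planck_feedback, 2), "W/m^2/K ->", round(delta_T, 2), "K")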

      • Eric, you write “I understand that this is theoretical as we cannot observe this under a controlled experiment,”

        I suggest you go into the details of precisely how the warmists go from 3.7 Wm-2 to 1 C. There are several unsubstantiated assumptions that go into this estimation; the main one being that the change of 3.7 Wm-2 is compensated for by radiation effects alone. I suspect that if ALL the ways the atmosphere has of compensating for the change in radiative forcing, conduction, convection and the latent heat of water, are taken into account, the 1 C just about completely disappears.

      • Jim,
        My statement included the phrase “ceteris paribus”, which in this case would mean a 1d C warming would occur in the absence of any feedbacks (or at least that was my intention). If we agree on the 3.7 watts of forcing per doubling of CO2, then I don’t think we are in disagreement, just a different perspective on what I meant to say.

        Cheers, Eric

      • Eric, you write “which in this case would mean a 1d C warming would occur in the absence of any feedbacks”

        I am not talking feedbacks. I am talking about the way the 1 d C was estimated. I suspect you have not, as I have, delved into the precise physics of how this estimation has been done. I cannot prove that it is wrong, as no-one, and I mean no-one, understands the physics of how the atmosphere responds to a change in forcing. But I seriously doubt that the warmist estimate is right. And since it cannot be measured, it must be little more than a guess.

      • Ships passing in the night both headed for the crux of the argument; what sign has the net of feedbacks over theoretical ‘estimations’?
        =============

      • Jim,
        You are correct, I haven’t gone that far into it. I still have much to learn.

      • The amount of heat (in joules) needed to warm 1 kg of water from 4 C to 5 C is
        4.204 kilojoules [4204 joules]. One square meter of water which is 1 meter deep
        [a cubic meter of water] has 1000 kg of water, and requires 4.2 million joules to raise its temperature by 1 C.
        If you had a source of radiant energy that warmed a cubic meter of water
        by 100 joules a second [100 watts per square meter, with all the radiant energy converted into heat], it requires .042 million seconds to raise the temperature by 1 C [.042 million seconds equals 11.6 hours] for a cubic meter of water. And likewise a square meter column which is 1000 meters deep requires 42 million seconds [11666.6 hours- 486.1 days].
        And Earth’s oceans have an average depth of 4.3 km [4,267 meters]-
        http://oceanservice.noaa.gov/facts/oceandepth.html
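        (The arithmetic above is easy to check in a couple of lines; this sketch just redoes the numbers for a 1 square meter column of water of a given depth heated at a given rate, using the 4.2 kJ/kg/K and 1000 kg per cubic meter figures quoted above.)

            def days_to_warm(depth_m, watts_per_m2, delta_T=1.0,
                             specific_heat=4204.0, density=1000.0):
                """Days to warm a 1 m^2 water column by delta_T at the given heating rate."""
                joules = specific_heat * density * depth_m * delta_T   # energy required
                return joules / watts_per_m2 / 86400.0                 # seconds -> days

            print(days_to_warm(1, 100))      # about 0.49 days, i.e. roughly 11.6 hours
            print(days_to_warm(1000, 100))   # about 487 days, close to the figure above
            print(days_to_warm(1000, 3.7))   # the same column at 3.7 W/m^2: ~13,000 days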

        A 100 watt incandescent light bulb uses 100 watts of electricity. It’s said to be inefficient because it converts only somewhere around 5% of the 100 watts of electricity into visible light. Most of the radiant energy it makes is infrared light. Most of this radiant energy in the infrared spectrum is in the near infrared- infrared light close to visible light in the electromagnetic spectrum of all radiant energy.
        Our Sun is about 40% efficient in terms of making visible light. The Sun is more efficient than a 100 watt incandescent light bulb because it’s hotter than the filament of an incandescent light bulb. The sun’s temperature at its photosphere is about 6000 K. Or the sun’s surface does not normally exceed 6000 C. And light bulbs normally don’t exceed
        3200 C.
        Wiki:
        “The total amount of energy received at ground level from the sun at the zenith is 1004 watts per square meter, which is composed of 527 watts of infrared radiation, 445 watts of visible light, and 32 watts of ultraviolet radiation.”
        So one could say sunlight reaching surface is 44.5% efficient in terms of visible light. And most of the “527 watts of infrared radiation” of sunlight is in Near Infrared radiation. Any incandescent lightbulb [100, 1000, or 10,000 watt lightbulb] is similar to sunlight but dimmer. If you use enough lightbulbs to get 445 watts of visible light per square meter you will get far more than the 527 watts of infrared radiation that sunlight makes per square meter- just a guess around 3000 to 4000 watts of infrared radiation.
        Btw, roughly, since studio lights for TV programs are trying to light the people in front of a camera with light levels somewhere close to the brightness of the sun, news reporters in a studio are experiencing around the heat of the sun one could experience at Venus distance.

        Anyways, take a rectangular box, make the surface highly reflective, and put tons of insulation around it. Have the dimensions of the box be 1 meter by 1 meter and 1001 meters deep. Fill it with 1000 meters of water, and put a 100 watt lightbulb in the 1 cubic meter of air space- hang the lightbulb 1/2 meter above the water and 1/2 meter below the reflective ceiling.
        Have the water be 4 C, and beyond the thick insulation have it surrounded by rock or water which is always kept at 4 C. We know from above that it
        will take more than 486.1 days to warm the water up by 1 C. We know the lightbulb makes about 5% of the 100 watts into visible light, we know it produces far more infrared energy than 5% of the 100 watts.
        And we know that this 100 watt light bulb would heat the water faster than any effect even vaguely imaginable from CO2 at any concentration, or from any and all greenhouse gases at any concentration.
        In practical terms one probably could never insulate this well enough
        for the 100 watt light bulb to heat the water by 1 C within a decade.

        Though merely heating 1 cubic meter of water would make it a lot easier to insulate the heat. So do a 1 meter by 1 meter by 2 meter reflective rectangular box with lots of insulation and surrounding material at 4 C. So adding 100 joules of heat per second takes 11.6 hours. And one could probably heat it by 1 C with a 100 watt lightbulb within one day. But could one heat it with a 10 watt lightbulb within 10 days? Here one has the same problem in terms of trying to insulate it enough- the longer time
        to heat it allows more heat to seep out. But with a 10 watt lightbulb and the 1 by 1 by 2 box, one has a great advantage over the 1 by 1 by 1001 meter box in that there is less total surface area for heat to seep out.
        And if doubling of CO2 adds 3.7 watts per square meter the 10 watt lightbulb in the box would be doing more heating of the water than adding 3.7 joules of heat per second.

        Now suppose one had the task of warming Earth. How do you do it?
        Heating up the atmosphere doesn’t require much energy, so one could make the world feel warmer to human beings by warming up the air.
        But it’s not a good way to keep Earth warm- you will lose a lot of heat quickly. So long term the best way to heat the Earth is to heat the ocean. It will require a vast amount of energy to increase the ocean temperature from 3 C to 4 C, but it also requires a very long time for the ocean to cool from 4 C to 3 C. If you heat the air by 1 C, you lose it in less than a day, whereas it takes a thousand years to lose 1 C of temperature in the ocean. If you were to heat up the atmosphere by 1 C, practically none of this heat would heat the ocean. Though if you heat up the atmosphere by 1 C and this reduces clouds in the tropics or elsewhere [it could just as easily increase them- depending on details] this would allow more sunlight to be stored in the oceans [the sunlight would heat the oceans].
        If you heat the entire ocean by 5 C, you will increase the air average temperature by 5 C. Of course increasing the entire ocean by 5 C
        will quickly get rid of all polar sea ice. It does not get rid of all glaciers on the land, but year round polar sea ice and an ocean 5 C warmer can’t exist at the same time. The existence of polar sea ice proves that Earth
        is in an ice box climate- if you were in need of this additional evidence.

        So, if you had a massive amount of energy available- say you were to drill massively large hole(s) into the Earth’s mantle [using all available nuclear bombs might work- if done right]. And if you were using this heat to heat the oceans, before you made the oceans 5 C warmer, the polar sea ice would have melted. Melting polar sea ice isn’t going to help you warm earth [your heat loss will increase], it’s just an unavoidable consequence of warming the ocean. So heating the ocean by 5 C is a sure way of causing earth to get warmer. But you are probably going to get a lot of rain- temperate zones will be similar to the tropics [a lot of rain and warm].

        So when Earth’s oceans were 10 C warmer than they are today, it had to be a warmer world than ours, and one isn’t going to get a warmer world without the ocean warming up significantly.

      • GBaikie,

        Good for you!

        Now, given that the oceans were 10C hotter than now, follow it through to when the oceans were 0.01 C hotter than now.

        This would logically be closer to the present, given that the Earth has cooled for the last several billion years, on average.

        The Earth, therefore, managed to cool that 0.01C all by itself, to our present temperature, and then continued to cool another 33C. This is the supposed effect of CO2, as scientists say that the Earth would be 33C cooler without the CO2.

        It is interesting to speculate how this cooling came about, if true, and what evidence there is that the world was ever in this condition.

        Add to this, the fact that to stop the Earth cooling, (ie losing energy), you would have to prevent the core from falling in temperature (if it cools, so does everything surrounding it.)

        So the atmosphere has now turned into a perfect insulator. This of course will prevent the Earth from cooling, but being a perfect insulator, it also prevents any external energy from reaching the Earth.

        Does anybody seriously believe this to be the case? If I have committed a logical misstep, please correct me. But please, please, please no stupid analogies or Holocaust denier comparisons.

        Live well and prosper,

        Mike Flynn.

      • Eric H., you were right about the 1 C no-feedback response. Jim Cripwell was just blowing smoke. Of course a radiative imbalance can’t be opposed by convection. We’re talking about earth having to lose extra energy to space. Convection doesn’t cut it as an explanation. Cripwell believes some of the weirdest things.

      • David Springer

        Everything else isn’t held equal, Eric. That’s the whole problem. Alarmists believe that one of the things that isn’t held equal is water vapor and that water vapor trebles the ostensible warming from CO2 alone. Others believe that clouds have a negative feedback that negates CO2 warming. All we know for sure is that the models incorporating the consensus climate science have failed to predict the past 15 years of no significant increase in global average temperature. The models failed and with them the underlying bandwagon science. Write that down.

      • “Now, given that the oceans were 10C hotter than now, follow it through to when the oceans were 0.01 C hotter than now.”

        Don’t know. It could be within 10,000 years. It probably was the case in the last interglacial period, the Eemian:
        http://en.wikipedia.org/wiki/Eemian
        “Sea level at peak was probably 4 to 6m (13 to 20 feet) higher than today (references in Overpeck et al., 2006), with much of this extra water coming from Greenland but some likely to have come from Antarctica. Global mean sea surface temperatures are thought to have been higher than in the Holocene, but not by enough to explain the rise in sea level through thermal expansion alone, and so melting of polar ice caps must also have occurred. ”
        I think it’s possible that in the next century or two the oceans could warm by as much as 0.01 C.

        “This would logically be closer to the present, given that the Earth has cooled for the last several billion years, on average.”

        Millions- tens of millions. In terms of climate, generally about 1/2 billion years is about the limit for saying anything very accurate about it.
        Due to plate tectonics, we had quite a different world 250 million years ago.
        From 650 million:
        http://www.youtube.com/watch?v=bubUYPc0KRQ

        Starting about 50 million years ago, with India crashing into Asia and Antarctica moving to the south pole, the global tectonic plate movement has brought us to our “modern” climate.
        50 million years ago is around the beginning point of our current ice box climate.
        See NOVA:
        http://www.pbs.org/wgbh/nova/earth/cause-ice-age.html

        There are thought to have been other ice box climates over the billions of years; some theorize Earth was nearly completely frozen- Snowball Earth- I don’t buy this idea, but I do accept that within the last billion years or so Earth has been as cold and perhaps colder than it is presently.

        “The Earth, therefore, managed to cool that 0.01C all by itself, to our present temperature, and then continued to cool another 33C. This is the supposed effect of CO2, as scientists say that the Earth would be 33C cooler without the CO2.”

        I agree such changes in global temperatures do cause serious problems for the idea of +33 C due to the greenhouse effect- only due to greenhouse gases- and the whacky idea of runaway warming.

        “It is interesting to speculate how this cooling came about, if true, and what evidence there is that the world was ever in this condition.”

        NOVA article above does good job of explaining it.

      • gbaikie,

        So, Australia joins up with China. That will shorten the shopping distances for our coal in 250 million years from now. :)

      • gblaike,

        There is thought to be other ice box climates in the billions of years, some theorize Earth was nearly completely frozen- Snowball Earth- I don’t buy this idea

        I accept Snowball Earth. There are thick beds of tillite in the Flinders Ranges. They were deposited about 620 million years ago when the location was at latitude 7 degrees north. See Figure 4 here: http://www.stratigraphy.org/bak/ediacaran/Knoll_et_al_2004a.pdf
        The foreground in the picture is the tillite.

        The following is O/T, but interesting in my opinion: The link makes interesting reading, not only about the history of the planet and the beginning of life. But also because of the politics involved. It is an example of how politics is in science. The representative geological bodies from countries around the world took decades to decide which exposure should be made the type section for the Ediacaran Period. The Ediacaran is the period when multicell life emerged and burst forth (as a result of snowball Earth). Eventually ‘all’ the representative bodies settled on the section in the Brachina Gorge in the Flinders Ranges. Well not quite all. Russia and China didn’t agree and presented dissenting cases. Russia argued the ‘scientific’ reasons why the location in Russia was better and China presented the ‘scientific’ reasons why a location in China was better.

        Other points of interest: This is the first type section to be designated in the Precambrian, the first in the southern hemisphere (all others are in Europe (mostly UK) and Northern America), and the first new one to be designated for about 100 years (roughly from memory, I might have that a bit wrong – bonus points for who gives the correct figure!) :)

      • gbaikie, the You Tube video which predicts plate tectonics 250 million years into the future — that’s quite some model, do you believe that prediction?

      • “Diag | August 17, 2013 at 3:53 pm |

        gbaikie, the You Tube video which predicts plate tectonics 250 million years into the future — that’s quite some model, do you believe that prediction?”

        No.
        There is a lot to know about Earth.
        I think predicting the plate movement millions of years in the future is a bit like predicting climate in 100 years. Or like predicting where a Hurricane will go in 2 weeks.

        Scientific acceptance of the theory of plate tectonics is quite recent. It’s similar to scientific acceptance that Earth presently is constantly hit with space rocks.
        And I think impactors are perhaps related to plate movements. And within 250 million years Earth is scheduled for another dinosaur level impactor, and thousands of smaller impactors. Though humans probably will stop such impactors- we will mine them and go to the stars using them.

    • I am going off topic if anyone minds,

      One cannot do controlled experiments on the universe, so we cannot measure Hubble’s constant. Any estimate like 67.80 +/- 0.77 (km/sec)/megaparsec is little more than a guess. Such little empirical data as we have gives a strong indication that Hubble’s constant, however defined, is indistinguishable from zero. Thus the universe is not expanding and the big bang theory is nothing but a pal reviewed conspiracy to garner and misuse grant fund moneys.

      • What has two arms, two legs, and is made out of straw?

      • This is actually a variation on the Dreaded Doctor Analogy.

        If some scientist/doctor has been perceived as correct about something at some point in time, then AGW is true.

        Andrew

      • Bob,
        Thanks for replying. So are you saying that the present day forecasts of catastrophic climate change attributable to man kind are on the same scientific footing as Hubble’s constant?

      • Bob, you write “Such little empirical data as we have gives a strong indication that Hubble’s constant, however defined, is indistinguishable from zero.”

        What empirical data indicates that the Hubble constant is indistinguishable from zero? I have quoted Beenstock et al, and other information to suggest the empirical data relating to CS

      • Phat boy,
        No straw man argument; it was a response to the argument that since we cannot do controlled experiments on the atmosphere we cannot measure climate sensitivity.

        Bad Andrew,
        You missed my point, it wasn’t whether or not something is true or not, it is whether or not one can test a scientific hypothesis without a control.

        Eric H,
        I would say that if we double CO2 levels the Greenland Ice Sheet and a good portion of the Antarctic ice sheets are history. I’ll let you judge if that is catastrophic or not.

        Jim Cripwell,
        Beenstock et al?
        Have you digested this?
        http://www.earth-syst-dynam-discuss.net/4/219/2013/esdd-4-219-2013.pdf

      • Bob, you write “Have you digested this?”

        Of course someone has published a counter to Beenstock et al. The question is, which paper has the right science? I cannot answer that question. But the fact remains that no-one has measured a CO2 signal in any modern temperature/time graph.

      • . Thus the universe is not expanding and the big bang theory is nothing

        The missing mass of the standard model is an open problem. The existing paradigm often needs to be refreshed to prevent ossification; that is the evolutionary role of diversity.

        http://arxiv.org/abs/1303.6878/

      • Jim Cripwell

        Don’t let Bob Droege fool you with his reference to the Hendry & Prentis rebuttal to Beenstock et al..

        Hendry & Prentis DO NOT demonstrate a robust statistical correlation between the long-term temperature and CO2 record (there is none, as Beenstock states).

        Instead they simply state that there are flaws in the statistical methodology and bases used by Beenstock et al.

        They then state that such a correlation exists in a “simple bivariate plot” “of the second period” (1978-2011)

        Duh!

        Sure it exists for the cherry-picked “second period”. But it does NOT exist for the long-term period studied by Beenstock et al.

        And, without a robust long-term statistical correlation between temperature and CO2, there is very little argument for causation.

        It’s just that simple.

        (And I think Bob knows it, if he’s honest with himself.)

        Max

      • David Springer

        Unless someone calls on policy makers to make sweeping changes in the way our industrial civilization operates based upon the Hubble Constant, who gives a phuck if we can measure it or not? Duh.

      • “Eric H,
        I would say that if we double CO2 levels the Greenland Ice Sheet and a good portion of the Antarctic ice sheets are history. I’ll let you judge if that is catastrophic or not.”

        Double what 260 or 400?
        520 ppm or 800 ppm?
        It seems it will take quite a while before we get to 520 ppm [assuming we do].
        I don’t think Greenland or Antarctic ice caps are going anywhere within next 2 centuries. I think we might get something similar to what we had in previous interglacial period:
        Eemian:
        “Sea level at peak was probably 4 to 6m (13 to 20 feet) higher than today (references in Overpeck et al., 2006), with much of this extra water coming from Greenland but some likely to have come from Antarctica.
        http://en.wikipedia.org/wiki/Eemian

        So by 2100, at most a 1 meter rise [higher than the IPCC estimate] and I think most likely closer to 12 inches.
        Then maybe another 2 meters by 2200, but more likely 1/2 a meter or less.
        So we won’t get Eemian levels within two centuries, and predicting beyond 50 years regarding what humans will be doing and how is mostly a fool’s game.
        How many times within the last 50 years have people who are supposed to be scientists and “experts” predicted we would all be dead before 2013? Makes good news copy- and that about explains it.

      • Max, you write “Don’t let Bob Droege fool you with his reference to the Hendry & Prentis rebuttal to Beenstock et al..”

        I didn’t. It was inevitable that “The Team” would ensure that a rebuttal to Beenstock et al would be pal reviewed and rushed into print. Then when people like ourselves quote Beenstock, there is a rebuttal for people like Bob to quote in reply.

      • Helicopter article downed by a stat seeking missile. The remains will be flown home in a flag covered coffin.
        ===============

      • The Beenstock paper is utter BS.

        Proving that CO2 isn’t a greenhouse gas using statistics.

        Ridiculous. Skydragons have more credibility in THEIR arguments than that.

      • lolwot, “Proving that CO2 isn’t a greenhouse gas using statistics.”

        Beenstock didn’t do that. All they did was to show that “global” surface temperature anomaly doesn’t show a statistically significant CO2 response. There is another study using similar methods that shows that surface stations well removed from ocean and lake heat sources are mainly noise. Both tend to agree with Pielke Sr. that oceans and specific heat capacities are important considerations and that “noisy” surface temperature data can lead to false conclusions.

      • Just out of curiosity – is there mutual agreement among the “skeptics” here that the following statement is true?:

        But the fact remains that no-one has measured a CO2 signal in any modern temperature/time graph.

      • Joshua, “But the fact remains that no-one has measured a CO2 signal in any modern temperature/time graph.”

        That is fairly accurate. Of the original 3C mid range “sensitivity” up to 2/3 was supposed to be water vapor feedback. The purely CO2 portion with 400ppmv would only be about 0.4C of the ~0.8 C of “Global” warming. The accuracy of the pre-industrial temperature measurements is on the order of +/- 0.25C, so the margin of error is pretty large for identifying a definitive CO2 signature. Since atmospheric forcing responds to natural and “other” causes of surface warming, the best SWAG is about 1/3, 1/3 and 1/3 WMGHGs, “other”, and natural variability. One “other” that appears to have been underestimated is albedo changing factors like, Black Carbon, Erosion (sand storms) and agricultural snow removal/crop albedo.

      • “All they did was to show that “global” surface temperature anomaly doesn’t show a statistically significant CO2 response”

        Who claimed it did? How can it? That’s a claim of correlation = causation.

      • lolwot, “Who claimed it did? How can it? That’s a claim of correlation = causation.”
        So you are a skeptic then. Since the IPCC says that “most” of the warming is due to “CO2” then not being able to verify the “most” of the warming is due to “CO2” makes the skeptical point. Up to 50% of the warming is due to CO2 would be a more rational statement with ~1/3 of the warming due to CO2 a best estimate. It was your team that put “most” of the eggs in the CO2 basket.

      • “So you are a skeptic then. Since the IPCC says that “most” of the warming is due to “CO2″”

        Based on physics, not statistics.

        You can’t attribute using statistics. That’s why Beenstock is a bunch of BS from the start.

        There’s no way you can take surface temperature and CO2 records and determine that CO2 didn’t cause the warming. It’s just ridiculous to think that can be done.

      • lolwot, “You can’t attribute using statistics. That’s why Beenstock is a bunch of BS from the start.”

        And why the model projections have been BS from the start. The models projected warming using the “average” surface temperature anomaly as the metric. “Average” surface temperature is an “average” of Tmax and Tmin readings about 2 meters above the surface for 29% of the “Globe” and a temperature measured anywhere from 10 meters below the sea surface to the sea surface. The land portion of the “average” “Global” mean temperature anomaly is too noisy and covers too small a region to produce statistically significant correlations to CO2 forcing.

        Which came first? The IPCC models predicting “average” “global” “surface” temperature anomaly or Beenstock calling it BS?

      • Physical models are not statistics.

        Global average surface temperature is an output of the models, not an input.

        The IPCC didn’t take GSAT and CO2 records and run some half-baked statistical gibberish on them to conclude one caused the other.

      • lolwot, “The IPCC didn’t take GSAT and CO2 records and run some half-baked statistical gibberish on them to conclude one caused the other.”

        Now you are completely ducking the issue. The IPCC compiled data that used “novel” statistics to produce GAT estimates of past climate to infer CO2 impacts on climate. Climate itself is a statistical construct.

      • Max, the argument for causation is not statistical at all. Even though there is a long term statistical correlation between CO2 and temperature.

        It has to do with the solutions to Hamiltonian equations; that’s the theoretical part.

        Then there is the infrared spectroscopy; that is the empirical part, which confirms the theoretical part.

        True CO2 is not the only thing that affects global temperature, but if you deny that both CO2 and temperature have increased since the industrial revolution, there is a seat in the clown car for you.

      • lolwot

        It is quite apparent from your comment that you

        a) have not read the Beenstock et al. paper or
        b) that you did not understand it, if you did read it

        It does NOT make the claim that CO2 “is not a greenhouse gas”.

        It simply shows that there is no robust long-term statistical correlation between atmospheric CO2 and global temperature.

        From this one can conclude that where there is no robust long-term statistical correlation, the argument for long-term causation is weak.

        It does show a short term correlation, however (which the Hendry & Prentis rebuttal also show).

        Some advice, lolwot: read the papers you cite more closely before you shoot off silly comments about them.

        Max

      • Bob Droege

        You state that the argument FOR a long-term causation between atmospheric CO2 and global temperature is not based on a “statistical” analysis of the observations, but based on theoretical physics.

        And Beenstock states that the argument AGAINST a statistically robust long-term correlation between the two is based on a statistical analysis of the long-term observations of the two; ergo the case for long-term causation is weak.

        So we have a case where the theory and the actual physical observations are not in agreement.

        Max

      • Bob Droege

        if you deny that both CO2 and temperature have increased since the industrial revolution, there is a seat in the clown car for you.

        No one denies this, Bob.

        Beenstock et al. have simply shown that there is no robust long-term statistical correlation between atmospheric CO2 and global temperature.

        And, where there is no robust long-term statistical correlation, the case for long-term causation is weak.

        The rebuttal simply shows that there is a short-term correlation, which Beenstock et al. also concede in their paper.

        Max

        PS A lot of things beside CO2 and temperature “have increased since the industrial revolution”. For example, the average human life expectancy at birth. Should we conclude from this that the slightly warmer temperature has been the cause of the increased life expectancy? Or possibly the slightly higher atmospheric CO2 level?

      • Let’s, for the sake of removing natural variability, take a 60 year running global surface temperature from a couple of sources.
        http://www.woodfortrees.org/plot/gistemp/mean:720/plot/hadcrut4gl/mean:720
        Does it look like anything? I would suggest this.
        http://www.woodfortrees.org/plot/esrl-co2/mean:12
        Any questions?

      • Further, there is a functional relationship predicted by AGW, which is that T-Tref=CS*ln(CO2/CO2ref)/ln(2) [CS is climate sensitivity], and in fact Vaughan Pratt showed this works very well as a connection between the curves (his millikelvin fit). Too much of a coincidence, I would suggest.
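        (That functional form is straightforward to fit: regress temperature anomalies on ln(CO2), and the slope times ln(2) is an implied sensitivity. The series below are placeholders for the annual records one would actually use, so the printed number is illustrative only; and, as the thread goes on to argue, such a fit is a correlation, not by itself an attribution.)

            import numpy as np

            # Placeholder annual series; substitute observed GMST anomalies and CO2 (ppm)
            co2 = np.linspace(315.0, 395.0, 55)                       # roughly 1958-2012
            temp = 0.8 * np.log2(co2 / co2[0]) + 0.1 * np.random.randn(co2.size)

            slope, intercept = np.polyfit(np.log(co2), temp, 1)
            implied_CS = slope * np.log(2.0)    # warming per doubling, same units as temp
            print("implied sensitivity (C per doubling):", round(implied_CS, 2))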

      • Jim D

        “Too much of a coincidence, I would suggest”

        A weak argument, Jim.

        And don’t look at only 60 years out of a 160+ year record.

        Beenstock et al. have simply shown that there is no robust statistical correlation between the long term temperature and CO2 observations, that’s all.

        From this they have concluded that where there is no robust long-term statistical correlation, the case for long-term causation is weak.

        The rebuttal showed a short-term correlation, which Beenstock also concluded.


      • manacker, the only contender would be if someone could suggest another forcing that grows like CO2 did in the past century. None have been suggested yet. Then you have to explain why CO2 didn’t affect it, even though by pure coincidence it looks the same, which is even harder to do.

      • Max,
        Just a second,

        you said

        “no one denies this”

        But Beenstock just did exactly that. Or did they?

        and I quote

        “panel c clearly shows that there is a positive relationship between temperature and the change in the anthropogenic anomaly once the warming effect of solar irradiance is taken into consideration.”

        but then again, right before that quote is this

        “panels a and b both show that there is no relationship between temperature and the anthropogenic anomaly once the warming effect of solar irradiance is taken into consideration.”

        As for the statistical relationship, all you have to do is plot temperature vs CO2 and perform a normal regression analysis, and the results speak for themselves.

      • Bob Droege

        You wrote:

        if you deny that both CO2 and temperature have increased since the industrial revolution, there is a seat in the clown car for you.

        I responded

        No one denies this, Bob.

        And then pointed out:

        Beenstock et al. have simply shown that there is no robust long-term statistical correlation between atmospheric CO2 and global temperature.

        And, where there is no robust long-term statistical correlation, the case for long-term causation is weak.

        The rebuttal simply shows that there is a short-term correlation, which Beenstock et al. also concede in their paper.

        So let’s summarize, since you appear to be confused:

        1. Beenstock et al. do NOT deny that both CO2 and temperature have increased since the industrial revolution (as you claim)

        2. Beenstock simply show that there is no robust long-term statistical correlation between atmospheric CO2 and global temperature.

        3. Beenstock add that where there is no robust long-term statistical correlation, the case for long-term causation is weak.

        4. Beenstock do show a short-term correlation, as did the rebuttal.

        Got it now?

        Max

    • “I accept Snowball Earth. There are thick beds of tillite in the Flinders Ranges. They were deposited about 620 million years ago when the location was at latitude 7 degrees north. See Figure 4 here: http://www.stratigraphy.org/bak/ediacaran/Knoll_et_al_2004a.pdf
      The foreground in the picture is the tillite.”

      Can you give me the Cliffs Notes on this pdf, so let’s say I read it.

      So, first, we have glaciers in the tropics at the moment, and we are 10,000+ years into a warmer interglacial period.
      Is it not possible that the tropics, instead of having an average temperature of around 25 C, could instead be around 10 C?
      If our tropics were 15 C cooler, the existing glaciers could exist at lower elevation, easily more than 1000 meters lower.
      I believe that glaciers have a strong effect upon local temperatures: a larger glacier cools a larger region. So lower the glacier elevation and you get a larger glacier, which gives more local cooling.

      Or, without a glacier nearby, the tropics could have an average temperature of 10 C at sea level, with the average temperature 2000 meters up in elevation below freezing. Add mountains and a big glacier, and instead of being below freezing at 2000 meters, it now becomes below freezing at 1000 meters elevation. In such a situation it seems one could have a glacier flowing into the ocean.
      So something like the Glacier Bay Basin.
      “The bay has a cool wet, coastal temperate rainforest climate. Three climatic zones have been discerned in the Glacier Bay; the outer coast along the Gulf of Alaska, which records mild temperatures and higher precipitation. However, it experiences less snowfall, the upper Glacier Bay where it is much colder and heavy snowfall occurs, and the lower Glacier Bay, which experiences heavy rainfall throughout the year.”
      And:
      “Annual precipitation is in the form of rainfall and snowfall. On an average, 228 days are recorded as precipitation days per year, and precipitation is in the range of 70–80 inches (180–200 cm) including 14 feet (4.3 m) of annual snowfall; the highest recorded snowfall is of the order of 100 feet (30 m)”
      The difference from the Glacier Bay Basin is that the tropics don’t have the seasonal difference. Plus, in the tropics there could be more rainfall [and snowfall]. So instead of 100 feet of snowfall being rare, it could be a yearly average.

      Next, we have some large mountain ranges on Earth. It seems reasonable that in the past we could have had even larger mountain ranges, and they could have been near or at the equator. Or, as in Antarctica and Greenland, the mountain range could be made even higher with miles of snow packed on it, combining high elevation [the Antarctic *average* elevation is about 2500 meters, and the South Pole is 2835 m].

      So say you have something the size of Texas with an average elevation of 3000 meters. Give it a wet lapse rate of 3°F/1,000 ft (5.5°C/km).
      That makes it 16.5 C cooler due to the elevation difference between sea level and the average elevation. The tropics have some seasonal temperature variation, so with the right conditions it seems one could have an average tropical temperature of, say, 20 C, with 5 C of cooling when the rainy season comes, higher peaks building glaciers, and glaciers lowering local temperatures. And say you get an average of 100 feet of snowfall a year for a century. What do you get?
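
      To make the arithmetic explicit, here is a minimal sketch of the elevation-cooling calculation; the lapse rate and elevation come from the paragraph above, while the 25 C sea-level figure is only an assumed typical tropical average.

```python
# Cooling implied by elevation alone, at the moist lapse rate quoted above.
lapse_rate_c_per_km = 5.5   # ~3 F per 1,000 ft
mean_elevation_km = 3.0     # the hypothetical Texas-sized plateau
sea_level_temp_c = 25.0     # assumed typical tropical sea-level average

cooling_c = lapse_rate_c_per_km * mean_elevation_km   # 16.5 C
plateau_temp_c = sea_level_temp_c - cooling_c         # ~8.5 C
print(cooling_c, plateau_temp_c)
```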

      It seems it could be largely a matter of topography and weather patterns.
      But it has nothing to do with the oceans freezing in the tropics.

      • I can make this simpler. I believe that if one had oceans of water on Mars, a planet which receives 60% less solar radiation than Earth, then with its ocean of H2O there would be locations with a liquid ocean of water.
        And many scientists who study Mars assume this was the case in the past, maybe within the last couple of billion years. Oh, they say 3.8 billion:

        “The Mars ocean hypothesis states that nearly a third of the surface of Mars was covered by an ocean of liquid water early in the planet’s geologic history. This primordial ocean, dubbed Oceanus Borealis, would have filled the Vastitas Borealis basin in the northern hemisphere, a region which lies 4–5 km (2.5–3 miles) below the mean planetary elevation, at a time period of approximately 3.8 billion years ago. Evidence for this ocean includes geographic features resembling ancient shorelines, and the chemical properties of the Martian soil and atmosphere. Early Mars would have required a denser atmosphere and warmer climate to allow liquid water to remain at the surface.”
        http://en.wikipedia.org/wiki/Mars_ocean_hypothesis

        Now, I will grant that the Mars crowd tend to be infected with CAGW garbage, but there is some evidence to support the idea that if there is enough water, one could have liquid oceans and/or lakes on Mars.

        And I believe a planet even further from the Sun than Mars could even have liquid water ocean.

        If it’s remotely possible that Mars could have a liquid ocean of water, how can one imagine that Earth’s oceans were completely frozen, or even that half the oceans were frozen?

        I think that if you put Earth at Mars’ distance, Earth’s tropical oceans would not freeze, nor even if you put Earth a bit beyond Mars’ orbit.
        And I think that if you put Earth at Venus’ distance, the ocean doesn’t boil away, even after a billion years. At Mars’ distance Earth could be cold enough that people would perhaps not be panicked about the place getting a bit warmer [though I fear to precisely predict the depths of human stupidity]. And at Venus’ distance, Earth would be warm; snow skiing would be a rare possibility in very select locations and times.

        On Earth we have an average temperature of 15 C. If Earth had an average temperature of 0 C, it doesn’t mean the Earth would be a frozen snowball.
        If the average temperature were 0 C, it would seem reasonable that the tropics would be less able to warm the temperate zones. Hurricanes, instead of being a regular pattern, might be something that requires more unusual weather conditions [but they could be bigger].

        So a blackbody at Earth’s distance predicts that Earth’s UNIFORM temperature would be about 5 C. This is not the average temperature.
        Nor does it tell you the temperature at the tropics.
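
        For reference, that ~5 C figure is what the standard equilibrium-temperature formula gives for a zero-albedo blackbody at Earth’s distance; a minimal sketch, with the solar constant and albedo values as assumptions:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # approximate solar constant at Earth's distance, W m^-2

def equilibrium_temp_k(solar_constant, albedo):
    """Uniform blackbody equilibrium temperature: T = [S * (1 - A) / (4 * sigma)] ** 0.25."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(equilibrium_temp_k(S0, 0.0) - 273.15)   # ~5 C, the zero-albedo figure quoted above
print(equilibrium_temp_k(S0, 0.3) - 273.15)   # ~-18 C with an Earth-like albedo
```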

        In addition, I believe the presence of H2O creates warmer conditions than *seem* to be indicated by blackbody models. H2O also creates cooler conditions than a blackbody model would seem to indicate, but most people are already well aware of and accept this effect. Even Hansen assumes it, as he assumed Venus started with a liquid water ocean like the one that exists on Earth [a bad assumption, but nevertheless he thought it was reasonable; Hansen began his life before plate tectonics, or before we knew impactors were not confined to the great bombardment period {or the theory that the proto-Earth may have collided with a Mars-sized object}].

  6. The first study raises questions about longer term fitness for purpose as well, which are only tangentially addressed by the pause and the second study. All the GCMs in both archives were “tuned” to past observed climate variables. I believe the instructions for CMIP5 were to stop at 2000. So if the AMO is poorly represented, then other tuned factors have to be more “out of whack”. And that means the further forward the models are run, the greater their “whackiness”.

  7. This post at Lucia’s
    http://rankexploits.com/musings/2013/von-storch-et-al-stagation-in-warming/
    and several others linked there in the comments (http://rankexploits.com/musings/2013/von-storch-et-al-stagation-in-warming/#comment-118443) discuss another set of work by Lucia, like von Storch’s, with similar results, and they’re a little more general (not just a fifteen-year window, for instance).
    I thought Michael Tobis’ presence in the comments there was interesting.

    • I think all of von Storch’s points are in action, and the third is practically demonstrated. I don’t mean ‘practically’ in the sense of ‘almost’, rather in the sense of ‘fit for policy action’.
      ================

    • miker613

      Von Storch concludes that the reason why the models failed to project the observed pause was either:

      -the underestimation of internal natural climate variability on decadal time scales…

      – the influence of unaccounted external forcing factors or

      – an overestimation of the model sensitivity to elevated greenhouse gas concentrations

      Barrados et al. present data, which points to reasons 1 and 2, which means (of course) that reason 3 cannot be ruled out.

      Barrados et al. also conclude that the models are not able to make projections at decadal or multi-decadal time scales, because of these shortcomings.

      IOW we can ignore the past long-term IPCC model-based projections for AGW and its effects (i.e. the CAGW premise as outlined by IPCC in its AR4 report).

      If the models can resolve these shortcomings in the future, we might again give their multi-decadal projections some credence.

      That’s how I see this.

      Max

  8. The climate models are not falsified. They are just in tatters. What will it take to falsify the models? If everybody knows the models don’t model, is that not pretty close to falsification, or at least a plausible perception among non-scientists of falsification?

    • John Vetterling

      Don,

      Technically it is the forecasts that are “in tatters.” To disprove a model you would have to first verify that all the inputs were correct, then demonstrate that even with correct inputs the model could not replicate the observations.

      That is a ways off.

      • John, the non-scientists among us (including most of the policy makers and the general public) don’t know from technically. Since the forecasts are in tatters, the models ain’t useful to us. Unless we are among those whose fame and fortune depend on models that support their dogma.

    • Hi Don,

      There’s no need to falsify the models. They are false because they’ve never been validated in the first place.
      That people can actually buy into the converse simply represents the God hypothesis in religion, transposed to supposed “science” (but really climate “Sceance”), and repeated “appeals to authority” of the supposed whitecoats/scientists as a new “Priesthood” (and I do mean that literally!!)
      True believers in the supernatural argue that if one cannot positively, definitely “prove” that God doesn’t exist, then they are free to retain their beliefs, or at least will not relinquish them.
      True believers in Carbon Apocalypse similarly argue that if one cannot positively definitely rule out Carbogeddon, then they are free to retain their deeply emotionally held beliefs in catastrophe.
      The models have a limited utility in inching toward a better understanding of a very complex system about which the best state of our knowledge is probably characterized as profound ignorance.
      We’ve been through all this before, e.g. didn’t Trenberth formally propose “reversing the null”? That has tacitly been the strategy of the carbophobics all along.
      On the other hand there have been (probably inadvertent) admissions such as at the following two links:

      Comment on the Nature Weblog By Kevin Trenberth Entitled Predictions of climate

      This is remarkable since the following statements are made

      1. IN FACT THERE ARE NO PREDICTIONS BY IPCC AT ALL. AND THERE NEVER HAVE BEEN.
      2. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate.
      3. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.

      http://tinyurl.com/ycunacr

      Real Climate’s Agreement That The IPCC Multi-Decadal Projections Are Actually Sensitivity Model Runs
      http://tinyurl.com/yzpjg3y

      The Consistent-With Game: On Climate Models and the Scientific Method
      http://tinyurl.com/22rhrg

      Global Cooling Consistent With Global Warming
      http://tinyurl.com/4thuxx

      “Don’t Rely On Computer Models To Judge Global Warming” – A Christian Science Monitor Op-Ed By Roger A. Pielke Sr. In 1994
      http://tinyurl.com/lv266fn

      David Hagen is spot on:
      http://judithcurry.com/2013/08/14/so-what-is-the-best-available-scientific-evidence-anyways/#comment-365022

      It should be pretty definitive. And yet in the face of this we have the once respected Royal Society as but one example of professional societies saying “Science is under attack”
      http://www.bbc.co.uk/programmes/b00y4yql

      The RS should more properly now be called the Royal Post Normal Society or even the Royal Post Modern Society.
      They are a disgrace and Lindzen is spot on in comment below where he recognises that there is an overt effort to mutate “science” from a hugely valuable method of inquiry, to a “source of authority”

      Richard Lindzen: The Perversion Of Science
      Foreword to Andrew Montford’s Nullius in Verba: The Royal Society and Climate Change

      Andrew Montford provides a straightforward and unembellished chronology of the perversion not only of The Royal Society but of science itself, wherein the legitimate role of science as a powerful mode of inquiry is replaced by the pretence of science to a position of political authority.

      http://www.thegwpf.org/richard-lindzen-the-perversion-of-science/

      all the best
      brent

    • Don, the models are only falsified if they fail at both forecasting and hindcasting.

  9. David L. Hagen

    Climate Persistence – Hurst Kolmogorov Dynamics
    von Storch et al. observe:

    1) the models underestimate the internal natural climate variability; . . .
    Natural climate variability is an inevitable consequence of a slow system (climate) interacting with a fast system (weather) (10). The forcing of the slow system by the (white noise) low-frequency components of the fast system produces a “Brownian motion” of the slow system, represented by a red variance spectrum – in qualitative agreement with observations.

    Why? (Other than that it is easy for graduate-student labor.)
    On what evidence?

    Koutsoyiannis et al. quantify in numerous articles how actual climate persistence (Hurst–Kolmogorov dynamics) is strongly different from random processes. E.g., random processes have a climacogram slope of 0.5 while real climate has a slope of 0.08; see Fig. 9 of:
    Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.
    Is assuming random “white noise” rather than climate persistence a foundational error, limitation, or bias in current models?
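
    To illustrate what a climacogram slope measures, here is a minimal sketch: the log-log slope of the standard deviation of block-averaged values against the averaging scale, computed for white noise and for a crudely persistent series. The synthetic series and the moving-average trick are illustrative assumptions, not Koutsoyiannis’ method or data.

```python
import numpy as np

def climacogram_slope(x, scales):
    """Log-log slope of the standard deviation of block-averaged values vs. block size."""
    sds = []
    for k in scales:
        n = (len(x) // k) * k
        block_means = x[:n].reshape(-1, k).mean(axis=1)
        sds.append(block_means.std(ddof=1))
    return np.polyfit(np.log(scales), np.log(sds), 1)[0]

rng = np.random.default_rng(0)
scales = [1, 2, 4, 8, 16, 32]

white = rng.normal(size=4096)
print(climacogram_slope(white, scales))        # close to -0.5 for white noise

# Crude stand-in for a persistent series: a moving average of white noise,
# which is positively autocorrelated at short lags and so averages out more slowly.
persistent = np.convolve(white, np.ones(64) / 64, mode="valid")
print(climacogram_slope(persistent, scales))   # much shallower slope (more persistence)
```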

  10. A fan of *MORE* discourse

    Judith Curry argues  “While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretion of these results.”

    Yes, because of the simple, plain, inarguable scientific fact that no measure of global energy imbalance is presently showing a pause or stagnation.

    That is why James Hansen and colleagues are scientifically wise to focus upon measures of global energy imbalance as providing (what Judith Curry calls) “the best available scientific evidence” for assessing climate-change.

    That is why climate-change measures that are local and/or decadal (like the AMO) are *NOT* “the best available scientific evidence” … isn’t that plain common sense (as Hansen and colleagues take care to remind us)?

    So because the AMO is local and decadal, it follows that discussions of “climate model simulations of the AMO” can never be central to the global AGW debate, eh?

    This brings us to the simple easy-to-understand proposition:

    The Best Available Measure of AGW  We’ll know that global warming is over when the seas stop rising.

    That is why the inexorable acceleration of sea-level rise-rate over the last two centuries — an acceleration that is driven mainly by accelerating thermal expansion and *NOT* by ice-melting (as we know from ARGO/GRACE/TOPEX/JASON observations) — constitutes strong observational evidence of a sustained global energy imbalance.

    Conversely, climate-change skeptics can establish credibility by providing quantitative answers to the simple natural question “When will the seas stop rising?”

    Well skeptics? When will the seas stop rising?

    Isn’t that the key question of the AGW debate?


  11. Dr. Curry creeps slowly, almost imperceptibly, but inexorably closer to returning to her tribe.

    “The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales.”

    No, the issue is the fitness-for-purpose of the climate models as justification for decarbonizing the global economy.

    “Fitness-for-purpose” is the new “usefulness,” is the new “we don’t need to verify the models.”

    There is nothing new in the climate debate.

    • “returning to her tribe”

      She never left. Even if the AGW hoax dies, the tribe will remain… just chanting a new verse.

      Andrew

      • Andrew “She never left.” +100

      • Oh, ye men of little faith; Judy follows Nature.
        ============

      • Agree with kim on that one.

        “Just the facts, ma’am…” seems to be our hostess’ motto.

        She has openly shown her skepticism of the IPCC claim on AGW since 1950 and pointed out that the greenhouse effect, per se, is known to climate scientists, but the magnitude of greenhouse warming in our climate is “highly uncertain”.

        She has also stated under oath that greenhouse warming is not likely to become an “existential threat on the time scale of the 21st century even in its most alarming incarnation”

        She has told members of Congress under oath that:

        It seems more important that robust policy responses be formulated rather than to respond urgently with policies that may fail to address the problem and whose unintended consequences have not been adequately explored.

        So she is not in favor of rushing into mitigation actions.

        And she has gone out of her way to open a clear, impartial dialogue on the issue of AGW, including setting up Climate Etc. The posts here (incl. guest posts) are sometimes controversial, sometimes biased in favor of the “mainstream consensus” and sometimes simply psycho-sociological babble; their intent is to elicit an open debate (and it seems to be working).

        But being a learned, respected, full-fledged member of the climate science flock, she cannot (and should not IMO) “foul the nest”.

        Her way of not doing this is to stick to the facts in her own opinions and comments.

        You don’t have to agree with every word she says or writes to accept that she is playing an important role in trying to cut through the CAGW BS that’s out there.

        Max

      • Precocious, that child.
        =========

      • kim and max,

        Here is what prompted my comment (in this post at least).

        “While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretion of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales.”

        Dr. Curry is just too eager to grasp the latest reframing of the same issues that have been central to the climate debate since its beginning.

        Now the issue is not “falsification,” but “usefulness.” Sorry, “fitness-for-purpose.” (Anybody wanna take a shot at explaining the difference?)

        max, your comment to Mosher elsewhere makes the same point. The issue is the “usefulness” or “fitness-for-purpose” of the models for the purpose for which they were advanced – decarbonization of the global energy economy.

        And it is NOT just the modelers who are overstating the certainty/accuracy/validation/falsifiability of the models. It is the activists, like the IPCC. They have all drunk the kool-aid on models. No matter how often they deny it, those models are the very core of their political movement.

        I have never questioned Dr. Curry’s honesty, her good intent, or her continuing valuable contribution to the climate debate. But the slow but steady movement I see is in her eagerness to usually accept the latest reframing, to see it as evidence that the consensus is somehow “changing.” When it has not changed a whit yet.

      • Patience, my Dear.
        =========

      • Steven Mosher

        “Now the issue is not “falsification,” but “usefulness.” Sorry, “fitness-for-purpose.” (Anybody wanna take a shot at explaining the difference?)”

        Usefulness is less descriptive. “Fitness for purpose” tells you that models are tools: tools built for a user who has a purpose, a reason to use the model.
        In science that purpose might be “knowing the truth”; in policy that purpose might be “supporting decisions” with our best knowledge, however incomplete that may be.

        An example:

        build me a model of how far my car can travel on the gas remaining in the tank. I want to use this model to prevent running out of gas.

        Note my model says nothing about matching reality. And trust me, your model of how far my car can travel will always be falsified. It will never match reality, but it will probably fit the purpose of keeping me from running out of gas.
        It will be good enough for that purpose. Matching reality: is that the purpose?

        So imagine a policy maker who asks you to “predict the damage from an asteroid impact on New York City” and you show him your model. Do you think he will ask you if you tested it by having an asteroid impact New York? If your model predicted total destruction and in reality only 75% of the city was destroyed, would people care that your model was wrong by 25%? Nope. Nobody would wave a Popper bat at you and question gravity. They would say: now you have more data, please make your model better. Or they may say: wow, for that complex problem with almost no data to build from you came really close. Good job.

      • The models are being used to support decarbonising policies and this seems to be the main problem with orthodox climate science today. If only climate scientists restrict themselves to the science and to the quest for greater understanding of climate change and leave the policy issues to the governments of the day!

      • Peter Davies

        +1

      • David Springer

        I think some in her tribe might disagree about her never leaving it.

    • Steven Mosher

      “Fitness-for-purpose” is the new “usefulness,” is the new “we don’t need to verify the models.”

      1. Verifying a model consists of checking whether you built the thing you promised to build.
      2. Validating a model consists of testing whether you built the right thing.

      steve does nice work on this

      http://www.easterbrook.ca/steve/2010/11/the-difference-between-verification-and-validation/

      The question is what do you verify ( validate is the correct word) the models against?

      If you answer ‘reality’ then you dont understand validation.

      • Steven,

        It really doesn’t matter one whit what the models are capable of, designed for, limited to or should be validated against, what matters is what they were sold as being useful for and those 95% confidence levels on the pretty, glossy, policy advocating handouts that provide a default criterion for evaluation.

      • Steve, ever thought of Rap instead of Korean?
        You see, they have this whole East Coast/West Coast thing going. If one had a large North American collection of historical temperature readings, south of the Canadian border, one could look at the first 100 miles of coastal regions and see if one side follows the Atlantic Oscillation and the other the Pacific.

      • captdallas 0.8 or less:
        “…what matters is what they were sold as being useful for…”

        It seems the GCMs were marketed in a certain way. Are their customers more happy or less happy? I may have mentioned before, Sell your customers what they need, not what you have to sell. Do that and most likely you will have a long term customer. Don’t think short term, don’t think about this month’s sales. Think about sales 10 years from now.

        Nice point Captdallas.

      • Steven Mosher

        ‘what matters is what they were sold as being useful for and those 95% confidence levels on the pretty, glossy, policy advocating handouts that provide a default criterion for evaluation.”

        there are always some modelers who drink the koolaid and mistake models for reality. dont paint everybody with the same broad brush

      • Steve, “dont paint everybody with the same broad brush.”

        I don’t, but a spade is a spade. If any modeler is nuts enough to over-estimate their model certainty, give them a cheap pocket watch and move on.

      • Steven Mosher Or Judith,
        Interesting addendum on PDO indices is Clim Dyn (2011) 37:1457–1468, DOI 10.1007/s00382-010-0920-1, also on open access at Springerlink.com. This is PDO fingerprints vs AMO, but both are relevant.
        Scott

      • Steven Mosher

        The final test of any model’s validity and effectiveness is its ability to make projections which actually come true, based on the input parameters that have been fed in.

        This post shows that the climate models are unable today to make meaningful multi-decadal projections of our climate (Barrados et al. as well as von Storch).

        As a result, we can ignore the multi-decadal projections made in the past by these models (in the IPCC AR4 report). The model projection of 0.2C per decade for two decades has been invalidated by the pause. And there is no good reason to believe that longer-term projections will be any better until the identified shortcomings can be corrected.

        Von Storch simply tells us that there could be three reasons for the models’ failure to project the pause. Barrados et al. show data that point toward the first two reasons cited by von Storch, without precluding that the third reason could also be a factor.

        So the ball is back in the modelers’ court. Fix the problem. Make the models useful for multi-decadal forecasts (or stop making multi-decadal forecasts based on their outputs).

        Seems simple to me as an outsider. (But then things usually do look simpler to outsiders, since the devil’s in the details.)

        Max

      • “The question is what do you verify ( validate is the correct word) the models against?
        If you answer ‘reality’ then you dont understand validation.”

        Nonsense. It’s easy to validate models against reality. You run the model for ten year predictions. Wait ten years, and see the result. Do it for 15 years. 20 years.

        A modeler brings you his model for constructing bridges in locations with extremely high winds. Several bridges have been built using his model, and all have failed.

        But he has validated his model against other models, all of which have also produced bridges that have failed due to wind stress.

        Do you:

        A. Accept his definition of the term “validation” and use his model to build your bridge.
        B. Take a poll of modelers about which of the failed models is best for building your bridge and use it.
        C. Laugh your ass off as you throw him out the door and build the bridge somewhere else.

      • von Storch’s third reason is the most likely and the most devastating to catastrophe.
        ==========

      • kim

        von Storch’s third reason is the most likely and the most devastating to catastrophe.

        True.

        Neither the first nor second reason preclude the third.

        And if they are both valid, then the third is also valid, by definition.

        [There have also been several recent independent studies, which all point to von Storch’s third reason.]

        But the second conclusion of the post is also devastating for the CAGW premise of IPCC: the GCMs are not able to make meaningful multi-decadal projections of our climate until they correct their shortcomings.

        So we can ignore the CAGW projections made by IPCC in AR4 using the flawed models and it’s “back to the drawing board” for the modelers.

        Look to me like de cat is outa de cat-a-strophe bag.

        Max

      • Steven Mosher

        Gary

        “Nonsense. It’s easy to validate models against reality. You run the model for ten year predictions. Wait ten years, and see the result. Do it for 15 years. 20 years.”

        wrong.

        Suppose your job is to model whether a building will withstand a hurricane force wind. You do not build the model, construct the building, and test the model by waiting for a hurricane.

        Now, clearly there are cases where we can test models against reality.
        But validation happens when you compare the model to the spec.

        Models are validated AGAINST their spec.

        Like so:

        Let’s say I am building a model of sea level rise.

        Requirements:
        1. the model shall use a statistical method.
        2. The model shall use historical data from 1850 to 1900
        3. The model shall predict sea level rise for every decade
        from 1910 to 2010.
        4. The model shall have an error of prediction no greater than 1 foot per decade

        Then the spec
        1. the model will use a statistical method.
        2. The model will use historical data from 1850 to 1900
        3. The model will predict sea level rise for every decade
        from 1910 to 2010.
        4. The model will have an error of prediction no greater than 1 foot per decade

        Now how do we verify the model? We verify that the model does what it promises to do; we check that we built it right:
        that it uses a statistical method, that it uses the right data, and that it makes predictions.

        Now how do we validate the model?

        We check whether the error meets the specified requirement of 1 foot per decade; we check (using the test plan) that it meets its specified performance criteria.

        What you want to argue is that matching reality should be in the spec.
        Once you realize that this is what you are arguing we will be on the same page. You are saying “matching reality perfectly” should be the spec.
        Who decides the requirements and spec? The user.
        But you are not a user of the model. You don’t get to decide how much error is acceptable. All models will have error. None will match reality. Some will be useful – useful for certain users who have certain questions. Those users, not you, get to decide how much error is acceptable.

        It may bother you that policy makers decide to use error prone models to make decisions. But they actually do get to decide how much wrongness is acceptable in their eyes. If you think users should demand more accuracy, then elect more rigorous users.
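
        A minimal sketch of that verify-versus-validate distinction, using the hypothetical sea-level spec above; the data, thresholds and function names are illustrative placeholders, not any real model’s output.

```python
SPEC_MAX_ERROR_FT_PER_DECADE = 1.0   # the error bound written into the hypothetical spec

def verify(predictions, decades):
    """Verification: did we build what we promised? Here, one prediction per required decade."""
    return sorted(predictions) == sorted(decades)

def validate(predictions, observations):
    """Validation: does performance meet the spec'd error bound (not 'match reality perfectly')?"""
    worst = max(abs(predictions[d] - observations[d]) for d in observations)
    return worst <= SPEC_MAX_ERROR_FT_PER_DECADE

decades = list(range(1910, 2011, 10))
predictions = {d: 0.05 * i for i, d in enumerate(decades)}    # placeholder model output (ft)
observations = {d: 0.06 * i for i, d in enumerate(decades)}   # placeholder "reality" (ft)

print(verify(predictions, decades))         # built the thing promised?
print(validate(predictions, observations))  # error within the 1 ft/decade spec?
```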

      • In response to Steven Mosher, who wrote:

        “It may bother you that policy makers decide to use error prone models to make decisions. But they actually do get to decide how much wrongness is acceptable in their eyes. If you think users should demand more accuracy, then elect more rigorous users.”

        Who told the policy makers that the modelers could, with their models, tell them how warm it was going to be in 2100?

      • Steven Mosher

        manacker

        ” The model projection of 0.2C per decade for two decades has been invalidated by the pause. And there is no good reason to believe that longer-term projections will be any better until the identified shortcomings can be corrected.”

        Still not correct. I may decide as a user that I want a guard band.
        For example:

        Please predict the temperature for the next 100 years, but do not underestimate the temperature on a decadal basis. Over-prediction is fine, but in no case should the predicted decadal rise ever be more than 0.3C greater than the actual.

        In this case a projection of a 0.2C rise would be fine even if reality came back with a 0.1C rise. That wrong prediction would be just fine for some users.

        Again, models are validated against their spec. You, like others, think that matching reality perfectly on a decadal basis SHOULD be the spec. Obviously the people paying for the models are happy with less accuracy. What you mean to say is this:
        “If I wrote the spec, perfection would be required” – but you don’t write the spec.
        Life is not Burger King. You cannot have things your way.

      • Steven,
        The concern is that the models’ supposed usefulness was justified by their apparent ability to hindcast with good decadal accuracy. If the models then fail to forecast with any decadal accuracy, surely it is correct to question their usefulness?

      • Steven Mosher

        Steve, every thought of Rap instead of Korean?

        well my parents were musicians so I was kinda trained to enjoy all types of music. I like rap, but when it gets translated and de thugified through the filter of korean culture I like it even more. watching the east assimilate the west and the west digest the east has been a fascination of mine for a long time.

        but some things defy translation.

        http://www.youtube.com/watch?v=j5-yKhDd64s

        of course the choice of rap or korea is a false choice

        http://www.youtube.com/watch?v=hroOaLG2Qz8

        Also, struggling with the east coast/west coast issues… hmm, I’ll discuss later if you like, in the post I have staged for Judith.

      • Mosher says

        Suppose your job is to model whether a building will withstand a hurricane force wind. You do not build the model, construct the building, and test the model by waiting for a hurricane.

        The argument is about the AMO and its statistical and physical attributes, and whether the statistical artifacts have real properties which may be tested by both statistical and physical experiments (the physicality of the latter validating the former), and whether they relate to the physics.

      • David Springer

        If you don’t validate against reality then you validate against fantasy.

        In practice that’s invention or manipulation of data sets. You’re an expert at it. I didn’t expect such a candid admission from you though.

        What is trying to be ascertained is the physical mechanism behind the statistics, i.e. by removing the ghost in the machines, the so-called statistical artifacts.

        http://www.nonlin-processes-geophys.net/18/469/2011/npg-18-469-2011.html

        Here there is a demarcation between dynamics (the internal) and thermodynamics (the external), and where the lab experiment is better than numerical simulations, i.e. a physical case.

    • A fan of *MORE* discourse

      GaryM posts (poisonously)   “Dr. Curry creeps slowly, almost imperceptibly, but inexorably closer to returning to her tribe.”

      “Tribe?”

      Thank you GaryM for so plainly demonstrating to Climate Etc. readers the ugly and toxic rhetorical methods that denialists routinely deploy to poison public discourse!

      Gosh, “tribe” is a favorite term of WUWT/Watts too! Along with “ilk”!

      Whereas James Hansen and his colleagues *never* find occasion (as far as I know) to use words like “tribe” or “ilk”.

      Isn’t that interesting, GaryM? Why do you think that is?


      • Because they prefer “industry funded science denier” or “fake skeptic”. The insults go both ways.

      • Somebody didn’t read the Climategate emails…

      • “Whereas James Hansen and his colleagues *never* find occasion (as far as I know) to use words like ‘tribe’ or ‘ilk.'”

        Would that be James “Godwin” Hansen you are referring to?

        “During his testimony before the Iowa Utilities Board in 2007, Hansen likened coal trains to ‘death trains’ and asserted that these would be ‘no less gruesome than if they were boxcars headed to crematoria, loaded with uncountable irreplaceable species.'”

      • A fan of *MORE* discourse

        GaryM asserts  “Hansen likened coal trains to ‘death trains’”.

        You are correct GaryM! And Climate Etc regulars are encouraged to read carefully the (signed, verbatim) interview that gives the scientific reasons why Hansen says so:

        Coal-fired power stations are death factories. Close them
        by James Hansen

        The climate is nearing tipping points. Changes are beginning to appear and there is a potential for explosive changes, effects that would be irreversible, if we do not rapidly slow fossil-fuel emissions over the next few decades.

        The greatest danger hanging over our children and grandchildren is initiation of changes that will be irreversible on any time scale that humans can imagine. If coastal ice shelves buttressing the west Antarctic ice sheet continue to disintegrate, the sheet could disgorge into the ocean, raising sea levels by several metres in a century.

        Such rates of sea level change have occurred many times in Earth’s history in response to global warming rates no higher than those of the past 30 years. Almost half of the world’s great cities are located on coastlines.

        Clearly, if we burn all fossil fuels, we will destroy the planet we know. Carbon dioxide would increase to 500 ppm or more. We would set the planet on a course to the ice-free state, with sea level 75 metres higher. Climatic disasters would occur continually.

        The trains carrying coal to power plants are death trains. Coal-fired power plants are factories of death.

        Young people are beginning to understand the situation. They want to know: will you join their side? Remember that history, and your children, will judge you.

        GaryM, can you respond to Hansen’s cogent, quantitative, scientific/moral analysis (which refers to “death” in its most literal and morally relevant sense) by pointing to comparably cogent analyses, by climate-change skeptics or deniers, that deploy the moronically demagogic language of “tribe” or “ilk”?

        The world wonders, GaryM!


      • Matthew R Marler

        a fan of *MORE* discourse: Clearly, if we burn all fossil fuels, we will destroy the planet we know. Carbon dioxide would increase to 500 ppm or more. We would set the planet on a course to the ice-free state, with sea level 75 metres higher. Climatic disasters would occur continually.
        On the evidence to date, an increase of CO2 to 500 ppm will produce a slight increase in net primary productivity, possibly a global mean temp rise of 0.2 C compared to now, and not much else.

        Whereas James Hansen and his colleagues *never* find occasion (as far as I know) to use words like “tribe” or “ilk”.

        They prefer inflammatory words such as “denier” and the analogy of the train to Auschwitz, rather than judgment-free notions of group membership.

      • A fan of *MORE* discourse

        Matthew R Marler risibly quibbles  “Words such as ‘[ilk]’ reflect judgment-free notions of group membership”.

        Hmmm … it will be mighty interesting to see how Michael Mann and his ‘ilk’ fare in their lawsuit for malicious slander against the CEI, eh Matthew R Marler?

        After all, isn’t ‘ilk’ pathognomonic of the Ingroup/Outgroup Thinking that is a hallmark of denialist demagoguery?


      • Matthew R Marler

        A fan of *MORE* discourse: After all, isn’t ‘ilk’ pathognomonic of the Ingroup/Outgroup Thinking that is a hallmark of denialist demagoguery?

        Nice word, pathognomonic, but the answer to the question is “No”. Also “ilk” is totally irrelevant to Mike Mann’s lawsuit.

  12. Here’s a more useful model I made which correctly incorporates the AMO, the cloud amplified solar variability, the SOI as a proxy for ENSO and some co2 to keep the warmies happy. (Sceptics can call the black line ‘thermometer adjustments’ if they wish).

    My simple model replicates monthly HADsstV3 from 1974 to an R^2 value of 0.9

    http://tallbloke.files.wordpress.com/2012/10/sst-model1.png

    • Steven Mosher

      Nice. What does it say about sea level rise?

      If a policy maker in florida is interested in what sea levels will be like in 30 years, what answer will your model give him?

      At a minimum a climate model will want to answer questions about

      A. Hydrology
      B. Temperature
      C. Extreme events

      The goal of modelling is not to replicate reality. The goal of modelling is to provide decision makers with the information they need to make the decisions that they deem are important. replicating reality may be an implementation strategy, but its not the measure of a useful model.

      • Replicating reality is a way of validating models. A model that can’t hindcast is an invalid model. Policy based on invalid models is necessarily dumb.

      • Hi Mosh. Sea level will stabilise
        Rainfall will decrease a bit
        Temperature will fall a bit, as you can see from my forecast to 2050
        Extreme weather events will continue to happen, as they always have, and always will, with slightly varying frequency. Not that it matters much, since extreme weather events are localised, and unlikely to hit the same place with any regularity.

        Typo in my comment above: model/data runs from 1874, not 1974.

      • Steven Mosher

        ‘Hi Mosh. Sea level will stabilise’

        1. to count as a scientific statement the proposition must be falsifiable in principle. That means you must be able to specify in advance the conditions under which your proposed statement would be false.
        In short, please assign:
        A) a time span using dates
        B) a definition of stability using numbers.

        #############################
        Rainfall will decrease a bit

        A) Describe what you mean by rainfall: total amount?
        B) Define “a bit”.
        #################################
        Temperature will fall a bit, as you can see from my forecast to 2050

        Yes, see how when you have a measurable time and a quantifiable amount you have said something for which you can be held to account.
        If you are wrong about the temperature you can change your model.
        If you are wrong about rainfall, NOTHING in your model can be changed to fix that. That makes your predictions about rainfall unfalsifiable and unverifiable.

        “Extreme weather events will continue to happen, as they always have, and always will, with slightly varying frequency. Not that it matters much, since extreme weather events are localised, and unlikely to hit the same place with any regularity.

        Typo in my comment above: model/data runs from 1874, not 1974.”

        ###########

        Now the funny thing is that I looked at your model and I drew the opposite conclusions.

        According to my interpretation your model

        predicts: more extreme weather, floods, rapidly increasing sea levels and bad breath.

        Now clearly, if I think your model predicts A and you think your model predicts NOT A, then you have some explaining to do so that I can get the correct picture. Your merely telling me so won’t do, as I would like to replicate your science. Right now, your model is giving me the opposite answer on rainfall from the one you describe.

      • I would want a model that predicts, that is, answers questions about, future climate accurately enough to use for planning. I would say that in order to spend taxpayer money, the model should, with 90% accuracy out to 50 years, give us answers to those sorts of questions – the ones you have outlined. We don’t have any models that do that.

      • Mosh, sorry but I think BEST is WORST among temperature datasets. It’s longer, and wronger than all the others.

      • Steven Mosher

        really Rog,

        Longer and wronger. I see it breaks your model.

        Let’s see, how do we test “wronger”? We take each method and compare it to the truth.

        Oops, we don’t have access to the truth; we only have a sample of the truth.

        I know, let’s see how a blind test does. I will make up a synthetic earth
        and populate it with temperatures that behave like real temperatures.

        Then I’ll calculate the average. I’ll call this the true average of the synthetic data.

        Then I will take all three methods and apply them to the synthetic data,
        but they only get to sample a portion of the whole.

        This will be cool since I know what the true average is of this synthetic world.

        Now, if a method performs WORSE on synthetic data, do we have any reason to believe it will perform better on real data?

        http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf

        PS. what does your model say about arctic amplification from 1900 to today?
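
        A toy version of that blind test, as a minimal sketch: build a synthetic world with a known true average, let each “method” see only a sample, and score each against the known truth. The latitude field and the two estimators are illustrative stand-ins, not the actual procedure in the memo linked above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "earth": temperatures on a regular latitude grid, warm tropics, cold poles.
lats = np.linspace(-89.5, 89.5, 180)
field = 30.0 * np.cos(np.deg2rad(lats)) - 5.0 + rng.normal(0, 0.5, lats.size)
weights = np.cos(np.deg2rad(lats))                  # relative area of each latitude band
true_average = np.average(field, weights=weights)   # the known truth of this synthetic world

# Each "method" only gets to sample a portion of the whole.
idx = rng.choice(lats.size, 40, replace=False)
naive = field[idx].mean()                                     # no area weighting
area_weighted = np.average(field[idx], weights=weights[idx])  # crude area weighting

for name, est in [("naive mean", naive), ("area-weighted mean", area_weighted)]:
    print(f"{name}: error vs known true average = {est - true_average:+.2f}")
```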

    • It would be interesting to see how your model holds up to future actuals. Very cool.

    • A fan of *MORE* discourse

      Tallbloke predicts (from his models)  “Hi Mosh. Sea level will stabilise”

      Tallbloke, that is an impressive prediction!

      Do your (essentially curve-fitting) models accurately retrodict the past 150 years of accelerating sea-level rise-rate?

      If your models can’t reliably explain the past, and moreover have no fundamental grounding in thermodynamics and/or transport theory, then why should we believe that these same models can reliably predict the future?

      Conclusion  Empirical curve-fitting models can never provide the “best available” climate science.


      • Steven Mosher

        one could also ask

        1. How does your model perform over the same period of curve fitting,
        but looking at spatial subsets?
        a) look at the NH and SH
        b) look at land versus ocean

        2. over the curve fitting period only
        a) what changes in seasonality do you see
        b) what arctic amplification
        c) what changes in diurnal range

        The point being this: curve fitting to a lower-dimensional metric (like the global average) doesn’t necessarily invalidate the model as a tool, provided that it can get higher-dimensional metrics correct.

        and yes you can also look at the out of sample data.

        Psst. at Berkeley we can actually do this since we extend the land record back to 1750.

      • Hi Fan,
        My best fit model has some important advantages over GCM’s with respect to the subject of this thread. Which is why I posted it. Namely, it properly includes the AMO.

        If I were using 5-year smoothed data like the MET Office does, I could show you that I can get an R^2 of 0.95 correlation with HADsstV3 simply using my ocean heat content proxy (integrated sunspot number departing from the equilibrium value) plus a 66-yr periodic sinewave.

        And yes, that ocean heat content proxy rises in an accelerating curve for the last 150yrs, consonant with sea level changes. Including the dropoff back to the lower rise rate after 2004.
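
        For readers who want to see the mechanics of such a fit, here is a minimal sketch of a least-squares fit of a proxy plus a fixed 66-year sinusoid to a temperature-like series; everything in it is a synthetic placeholder, not tallbloke’s actual proxy, data or result.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1874, 2013)

# Synthetic stand-ins: an "integrated solar index" proxy and a target series built from it.
proxy = np.cumsum(rng.normal(0.02, 0.3, years.size))
target = (0.01 * proxy
          + 0.1 * np.sin(2 * np.pi * (years - 1900) / 66.0)
          + rng.normal(0, 0.05, years.size))

# Design matrix: proxy, sine and cosine at a fixed 66-year period, and a constant.
X = np.column_stack([
    proxy,
    np.sin(2 * np.pi * years / 66.0),
    np.cos(2 * np.pi * years / 66.0),
    np.ones(years.size),
])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
fit = X @ coef
r2 = 1.0 - np.sum((target - fit) ** 2) / np.sum((target - target.mean()) ** 2)
print(f"R^2 of the fit over the training period = {r2:.2f}")
```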

  13. Steven Mosher

    “While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretion of these results. ”

    Let me expand a bit on this by way of example.

    In the 1970’s while the physics of radiation propagation, reflection and scattering was well known, the models used to predict it and the computers used to run these models were severely limited.

    While we understood the physics of reflection from curved surfaces, modelling that and computing that was not feasible. Suppose you took a complex surface and used the computer codes of the day to predict the radar return from a curved surface. The answer you would get would be wrong. Not horribly wrong, but it clearly didn’t match the return you would get in field, range and lab tests.

    Did this make these models wrong? did this make the underlying physics wrong? Well, no. The models were not falsified, the models were limited in their usefulness. They could be used to answer specific questions. And within those constraints they operated extremely well.

    Here is what those models were used to design

    http://en.wikipedia.org/wiki/File:F-117_Front_View.jpg

    in short, the limitation of the models was not seen by any user as a ‘falsification” of the physics. The limits were not construed as ‘falsifying the models”. The failure to “match reality” was construed as a limit on the usefulness of the model. They could be used to design a plane that was composed of flat plates. The models worked. they served their purpose.

    In the 80’s there was a breakthrough in algorithms for computing the return from a special class of curved surfaces, and there were advances in computing power. The models were improved and the result was this

    http://en.wikipedia.org/wiki/File:B-2_Spirit_original.jpg

    And the advances continued and another class of curved surfaces were added to the mix. resulting in this

    http://en.wikipedia.org/wiki/File:YF-23_front.jpg

    If your motivation was to end the stealth programs then at any time you could have tested a model using various shapes and found that the models got the answer wrong. And you could have stomped your scientific foot and beat people with a plastic Popper bat. However, if you had a different purpose and different motivation, if your goal was to build something that had predictable performance, then the models were very useful, critical to success. They were gods word. And they worked.

    The issue with climate models is not whether or not they match “reality.” Since reality is huge, it’s easy either to select a dimension and show that the models perform reasonably or to select a dimension and show that they fail miserably. The issue is rather that models have been constructed by scientists over time to serve one set of purposes, principally exploratory and scientific, and now they are being asked to serve ill-defined and unspecified policy purposes or dubious epistemic purposes like attribution.

    You want better climate models? Start by having policy makers define what they want to use them for and what counts as an adequate enough answer. Justifying and criticizing models without firm, well-defined, quantifiable measures of merit just leads to a useless debate.

    • Steven,
      So would you agree that averaging out the models and coming to a conclusion on future global temps is a misuse of the model?

      • Steven Mosher

        mis use?

        no.

        i look at it pragmatically.

        I have a tool for predicting hurricane landfall. It always gives me the wrong answer. I have 20 such tools; they all give the wrong answer.
        Then I create a model of models. I average them. I do not need a statistical justification for averaging them. I just average them. Then I test:
        does the model of models outperform any given model? If yes, then the best choice going forward is to average the models.
        This is an instrumentalist view of things.

        On the other hand if some of the models perform better, then I’d use those.
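
        A minimal sketch of that “model of models” test: average several imperfect forecasts and check whether the ensemble mean beats any single member on RMSE. The synthetic “truth” and the twenty noisy models are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

truth = np.sin(np.linspace(0, 6, 120))   # stand-in for the quantity being forecast
# Twenty imperfect "models": the truth plus a different constant bias and noise for each.
models = np.array([truth + rng.normal(0, 0.3) + rng.normal(0, 0.3, truth.size)
                   for _ in range(20)])

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

member_rmse = [rmse(m) for m in models]
ensemble_rmse = rmse(models.mean(axis=0))

print(f"best single model RMSE: {min(member_rmse):.3f}")
print(f"ensemble-mean RMSE:     {ensemble_rmse:.3f}")
# If the ensemble mean wins, keep averaging; otherwise use the better-performing members.
```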

      • Thank God. He was a carp in his spare time…

      • The stealth-tech examples are misleading because in all such cases physical experiments were possible to tell the limits of the computer models. (In fact, one could have [at much greater expense] used entirely trial-and-error experimental methods to derive shapes for stealth surfaces, but that is neither here nor there). Climate models can’t have their errors so easily calibrated, making them more akin to econometric models than to engineering-science ones.

    • moshe plops it out of the park with a pine-tarred plastic proper pitch.
      ==========

    • You want better climate models. Start by having policy makers define what they want to use them for and what counts as an adequate enough answer.

      CYA

    • However, when computer models were used to design the front aspect of the aircraft, the engineers used physical models, including one full size, to measure the actual return. Moreover, using full-size physical models the engineers made physical changes to cut down the radar cross-sectional area.

      • Steven Mosher

        Yes, the front aspect return, especially the return out of the cavities, wasn’t very predictable. Think of it as turning a TWT design on its head.
        The screens over the inlets helped, but then the airflow behind them was affected. These mechanical tweaks are pretty standard; look at the wing of an airplane and see the little vortex generators.

        Anyway, with the aircraft after the 117 the inlet problem was solved another way.

        Finally the full scale testing is not without its own complications, data adjustments, inaccuracies and complications.

        lets take tail design

        http://www.aerospaceweb.org/question/planes/q0176.shtml

        To his credit, ‘Cap’ at McDonnell Douglas did not scream that the CFD models were falsified or that wind tunnel tests were hoaxed, or that flight test was adjusted.

        He fixed the plane and the guys went back to improving their tools so that the likelihood of fucking up and killing your friends was diminished.

        lets look at models in medicine: Lets look at MRI

        Now an MRI works by applying a model to signals.
        When Damadian built the first one it didn’t work. Well, it did work; he was just too big. When he switched to a smaller person (one of his graduate students) the model worked.

        over the course of years there have been many improvements.

        http://www.clinical-mri.com/pdf/Garmisch/A_Historical_Overview_of_Magnetic_Resonance.pdf

        It’s never been perfect, and we don’t respond by saying the physics of magnetism is wrong, and we don’t throw the baby out with the bath water.

      • Steven Mosher

        argg i had a long comment about the inlet and inlet screens but Its gone..

        Also some neat stories about MRI.. using models in medicine

      • “To his cedit ‘Cap’ at mcdonnell douglas did not scream that the CFD models were falsified or that wind tunnel tests were hoaxed, or that flight test was adjusted.”

        Oh come off it Mosher, the aircraft behaved differently from the model and reality was accepted.

        The product of the GCMs is an estimate of global temperature.
        These models have failed to describe future temperatures. In any other scientific field this failure would be acknowledged. In climate science it isn’t, and you, Mosher, use sophistry to defend their failure.
        You said it was all about the science.
        Well, the science is saying game over.
        I know, you know, and the modelers know that they have not described the movement, storage and release of energy in the oceans. They threatened ‘Thermogeddon’ when the oceans were releasing heat, and now that that part of the cycle is over, they look like fools or shysters.

        Why do you defend the indefensible?

      • Steven Mosher

        Doc

        ‘Oh come off it Mosher, the aircraft behaved differently from the model and reality was accepted.”

        You miss the point. The point is that no pragmatic working person suggested that the models were useless. Folks recognized that they did some things well and other things not so well. Nobody suggested that gravity was wrong even though the models used gravity. What pragmatic people do is accept the limits of models. They don’t scream Popper. They fix the model.

        Take the MRI. Did you read the article? So a person moves while they are in the MRI and the answer it gives is wrong. Well, there you go: magnetism must not exist, since the device relies on magnetism and it got the wrong answer.
        And a moving person is obviously real, therefore this machine which returns a modelled picture of my innards must be wrong because it cannot capture moving people. Do we say that? Nope. We say, “If you are going to use this model, obey these constraints; outside these constraints it gives useless answers.”

        What’s missing in the GCM story is not “capturing reality”, which is an impossible, meaningless requirement. What is missing is a firm sense of the purpose and the operating manual: what is it good for? The only people who can answer that are the policy makers who want to use the answers.

        Also, absent a GCM, you have to answer the question: what information should a policy maker use?

    • Steven, your comment here is perfectly reasonable.
      But I have to ask, why didn’t the designers of the models say something about the improper use of their models? Worse still, many proclaimed that their models were fine to be used as a reference for what an increase in CO2 would do.

      • Steven Mosher

        Some modelers drink their own Kool-Aid. When you have to look your friend in the eye and say “my model will protect your life”, you tend to have a different attitude.

      • Mosher,

        But presumably there was a long process of development of the wings after the initial design based on the model output, which involved building wings, testing them, tweaking based on the experience of the engineers and training the models with new empirical data. Hundreds of hours of this. Surely they didn’t just strap the design from the model onto a plane and send people out to fly them in combat situations?

        There’s got to be a long process of empirical verification between the modellers’ output and somebody risking their life?

      • Steven Mosher

        “HR | August 16, 2013 at 10:19 pm |
        Mosher,

        But presumably there was a long process of development of the wings after the initial design based on the model output which involved building wings, testing them, tweaking based on the experience of the engineers and training the models with new empirical data.”

        Of course, but in the course of that exploration, where models get it wrong, where you discover that models only give right answers under limited conditions, nobody ever suggests that Newton’s laws are wrong. Nobody ever suggests that gravity doesn’t work. Nobody ever walks into an engineering meeting and says Popper Popper Popper, the models are false, throw them out. No, we say “don’t use the model outside these conditions”; don’t use the model to answer these questions. It’s useful here and not useful there, or less useful there. And sometimes a crude model is all you have, so you build in design margin. The model says X is safe, but we are not sure, because there is no way to EVER collect data on it in a controlled fashion, so we err on the side of caution.
        An example: crew system design. Protect the ejecting pilot from damage.
        You are NEVER going to take a plane down to the deck at 800 KEAS and have your friend punch out to test your model. Never. You use physics and pad the hell out of your answer, and then someday some poor soul punches out at 900 KEAS and breaks his arms and legs, and you have one lousy data point to adjust your model. One lousy data point.

      • “GCMs are built to try to understand and explore how the climate works when we cannot perform controlled experiments.
        Folks are trying to use them to set policy.” – Steven Mosher

        I hope I am not beating a dead horse here.
        Misplay 1) Bring GCMs into areas where they shouldn’t be.
        Misplay 2) When they fail in those areas, write them off completely.

      • I understand your points, Mosher. Things aren’t just black or white; we are on a sliding scale of knowledge, and depending on the question the model can contribute to a greater or lesser extent.
        I accept there are sceptics that react with knee-jerk GIGO; there are others that sometimes forget to do this when it’s a result that confirms their bias (a good example might be this week’s report on WUWT in which models suggest an important role for black carbon in arctic ice melt). Many, though, really want to understand where on that sliding scale GCMs and the questions of attribution and future projection sit. My head tells me the outputs are over-interpreted in a direction that provides supporting ‘evidence’ for a particular policy option. That’s more true when it comes to summary reports and synthesis reports such as the IPCC; when it comes to the literature, it seems that the usefulness of models for these sorts of questions is still up for discussion.

      • Here is one question I have ATM.
        As the observations reach the lower limit of the model projections we have a bunch of questions raised about pauses, internal variability, how many years before the models are invalidated, etc. But within the IPCC we don’t seem to have any sort of discussion about reformulating the approach. If I had a methodology that did a bad job of generating useful results I’d look for ways to change it, improve it. Bluntly put, it looks like the warmest 25% of the GCMs should be scrapped from the IPCC multi-model ensemble. If there were a test of acceptance for inclusion in the ensemble based on matching actual observations, then they would likely fail. I think they stay there because:

        1) They are also providing the top end of the future projections and the worst-case scenarios.
        2) It would be difficult to kick out the UK, Canadian and Australian (or whoever’s GCMs are failing) contributions to this international effort without some type of political repercussions.
        3) It’s bad PR to say we are losing X number of models because they are too hot.

        All of which have nothing to do with science. I guess my question is how do you remain focused on seeing this as a question of science when it’s clear that decisions are made that have nothing to do with science, yet there is no recognition among those making the decision that they are doing anything other than science.

        I’ll re-formulate it in a more crude and blunt manner. The IPCC multi-model ensemble is a political construct, how do you challenge that in a purely scientific way?

      • Preface: I basically agree with Mosher’s and Curry’s attitude toward assessing models for purpose. On the other hand, I disagree with the claim that Popperian-type considerations play no role in making that assessment. To the extent that GCMs are much more curve-fitting exercises than has been admitted, they are fully subject to the critique of induction captured in the parable of the chicken being fattened up by the farmer.

        Nobody (here) is using GCMs’ poor performance on surface temperatures to say that physics is wrong. They are saying that the relevance of the underlying physics to the climate has probably been fundamentally misconstrued. God knows that that would be easy to do; over on Sean Carroll’s physics blog I saw a comment free-for-all among fairly well-qualified people (including the host) about where the measured mass of the nucleus came from, and the nucleus is a more experimentally accessible system than the earth’s climate.

        In any case, GCMs are better analogized to econometric models than to engineering-physics models, for reasons of observational vs. experimental data sources, system complexity, uncertainty about correct functional forms of relationships, etc. Econometric models are often tuned to the data while also containing “causal” pieces as well, including balance relationships (e.g. output-income identities in macroeconomics or downward-sloping demand curves).

        Nobody in his right mind would deny the usefulness of econometric models for some applications, such as estimating the shift in subway ridership if fares were raised and nothing else changed. They’re certainly better than a guess with no model based on no data.

        On the other hand, models of supermarket sales as a function of category pricing have been systematically biased toward favoring higher prices than managers believe optimal (or even feasible for business survival). Some coauthors and I provided a theoretical explanation for this discrepancy, but the field is far from being able to usefully model supermarket sales in a way that would enable store managers to improve their category pricing. The supermarket problem seems a lot simpler than modeling climate to an accuracy useful for policy makers. The available data are certainly better. I don’t believe, however, that if we had access to a bunch of supercomputers programmed by exiles from physics departments that we would quickly crack it.

      • stevepostrel:
        “In any case, GCMs are better analogized to econometric models than to engineering-physics models, for reasons of observational… …data sources…”

        You gave me an idea. Supply and demand curves. Temperature on the Y axis and feedbacks on the X axis. One plots the CO2/water vapor supply curve as assumed. The demand curve is the net of everything else including natural variability.

        This is to say that if you isolate CO2 and water vapor as one line, you now have everything else on a different line. The current temperature is just the intersection of the two curves. A starting temperature value might be 14 degrees Celsius, and a starting feedback value might be the one that gives us our roughly plus 33 degrees Celsius of normal greenhouse warming.

        The easiest part of the graph is the supply curve. The demand curve is everything else that gives us our current temperature. Let’s say we can’t see that curve as we haven’t figured that part out yet. We can only see its intersection with the supply curve. The demand curve may move, it may change its slope, even reverse its slope. It may be elastic or inelastic or some value in between.

        The more certain one is about the CO2/water vapor supply curve, the stronger the model becomes, I think. What have I missed? Accountants should not attempt some things, I know that.

    • Steven Mosher:
      Thanks for the explanation. What I think I read is that given aircraft designs A and B, if A comes back more stealthy than B, you have something useful. The total radar return in the model may be off by 40% compared to reality. If that error is always in the same direction, and let’s say it’s 40% +/- 10%, the model works when it compares two designs.
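
      A minimal sketch of that point (purely made-up numbers, not actual RCS data): if the model’s error is a roughly constant bias in one direction, the ranking of two candidate designs survives even though the absolute values are off.

          # Toy "true" radar returns for two hypothetical designs (arbitrary units)
          true_rcs = {"design_A": 1.0, "design_B": 1.6}

          # Suppose the model overestimates every return by roughly 40% (a consistent bias)
          modeled_rcs = {name: 1.4 * value for name, value in true_rcs.items()}

          # Absolute numbers are wrong, but the comparison of the two designs is preserved
          print(min(true_rcs, key=true_rcs.get), min(modeled_rcs, key=modeled_rcs.get))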

      Do you know if the SR-71 Blackbird (a reminder of what NASA is capable of doing) was used to help develop the model? I noticed some similarities between the Blackbird and the YF-23.

      • “Do you know if the SR-71 Blackbird (a reminder of what NASA is capable of doing) was used to help develop the model? I noticed some similarities between the Blackbird and the YF-23.”

        NASA is not involved in black programs; it’s a civilian space agency.
        The SR-71 was built by the Skunk Works, led by the famous designer Clarence “Kelly” Johnson.
        http://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird
        Built for CIA.
        “Lockheed’s previous reconnaissance aircraft was the relatively slow U-2, designed for the Central Intelligence Agency (CIA). The 1960 downing of Francis Gary Powers’s U-2 underscored the aircraft’s vulnerability and the need for faster reconnaissance aircraft. The CIA turned again to Kelly Johnson and Lockheed’s Skunk Works, who developed the A-12 and would go on to build upon its design concepts for the SR-71.”

      • This is just a preliminary take on much of the discussion of models on various blogs and elsewhere.

        The accuracy of a model (examples: thermometers, radar guns, radar signature computer model, climate computer model) depends on:

        1. The better the physics relative to reality, the more accurate the model.
        2. The complexity of the system. The more complex the system, the more inaccurate the model tends to be.
        3. The available computing power (in the case of electronic circuits and computers). More computer power means better accuracy, all else equal.

        These are the reasons, in general, that among various models, climate models are the least accurate, and therefore the least useful.

      • Steven Mosher

        Ragnaar.

        Getting the absolute value of the RCS correct was vital to the design of the aircraft. You had to know (and display in 3D to the pilot) his vulnerability to known threats. So, for example, in the B-2 design we flew simulated missions over the Fulda Gap, and we had to build a simulated ground network of radar stations along with command and control delays to ensure that the plane could penetrate undetected. With the YF-23 and F-22 the same kind of missions were simulated along with an advanced Soviet threat. Getting the number right was important because of the missile engagement timelines. You had to be able to prove that you could detect, lock, launch and kill before detection. Given the radar on the other side and pure kinematics you can take a good stab at how small one of these aircraft is.

        As for the SR-71:

        The radiation codes were government issue. But over the course of time you make improvements and feed them back to the government for standardization. I never met Kelly but did some contract work at the Skunk Works. Brilliant guys.

      • My comment sure made me look silly. Yes, the rightful recipient of praise is Lockheed and Kelly Johnson. Sorry about that.
        “Modern stealth aircraft first became possible when Denys Overholser, a mathematician working for Lockheed Aircraft during the 1970s, adopted a mathematical model developed by Petr Ufimtsev, a Russian scientist, to develop a computer program called Echo 1. Echo 1 made it possible to predict the radar signature an aircraft made with flat panels” – http://aircraftnut.blogspot.com/2013/05/stealth-aircraft-fighters-and-bombers.html
        What I think I am reading is that flat panels were easier to model so we got the F-117. Comparing the F-117 to the Raptor and YF-23 is comparing two states of computer and modelling power. Some are saying the YF-23 was the more stealthy design, yet it lost out to the Raptor. Well, there were other considerations.

      • Steven Mosher

        “Some are saying the YF-23 was the more stealthy design, yet it lost out to the Raptor. Well, there were other considerations.”

        Yes, the YF-23 was more stealthy in both the RF and IR domains.

        The way a competition works is this.

        The government says, “Prove you can make it this small: X.”

        Both sides go away and study and prove what they can do. The government wants a competition where both teams can do X; that lets them decide the program on things like cost and risk. Doing better than X is not rewarded.

        After the first review you tell the government,

        “We can do X/2”: we can beat your requirement by a big margin.

        The other side tells the government what they can do. Suppose it is 2X.

        The government will then come back and say “Do 2X”, because they want all projects to “pass” the technical requirement. There are no bonus points for beating the requirement. So when you see the requirements become more lax you know that you are beating the other team. The biggest issue was that we designed the plane for ‘all aspect’ broadband stealth. That’s code for having a low IR signature from the rear of the plane. In our minds (and simulations) that was the biggest vulnerability, so we made the damn thing invisible to an IR missile shot at it from 6 o’clock. That deck on the rear of the plane had shuttle tiles on it. When the final spec came out the Air Force dropped the rear-aspect stealth requirement (run away from a guy behind you) and all the effort we put into protecting the plane’s ass was for naught. No points for a gold-plated ass.

        There were a few considerations then that decided it (rumors):

        1. At Northrop we already had the B-2. They like to spread the wealth.
        2. We pissed them off by spending $1 billion of our own money marketing the F-20 against the F-16 in international markets.
        3. We had another undisclosed stealth project that was in big trouble; we sucked at risk management.
        4. The side-stick mafia.

      • Looking at both designs, the YF-23 doesn’t have a horizontal stabilizer while the Raptor does. Visually that cleans up the back end but goes against the no-V-tails convention. The YF-23’s diamond-shaped main wing also makes visual sense (but is radical), presenting the same profile flying away as approaching, and again cleans up the tail by moving the main wing away from it. I’d think the Raptor has too much going on in the tail area and presents a bigger RF target flying away.

        And as you said, the IR engine signature from the Raptor is higher. That’s unfortunate for the pilots.

    • Curious George

      Steven – a very nice comment. Your aircraft modelers improved their models as new algorithms became available. Climate modelers don’t even attempt to patch known problems in their code.

    • “Suppose you took a complex surface and used the computer codes of the day to predict the radar return from a curved surface. The answer you would get would be wrong….
      …
      Did this make these models wrong? Did this make the underlying physics wrong? Well, no. The models were not falsified; the models were limited in their usefulness. They could be used to answer specific questions.
      …
      The failure to ‘match reality’ was construed as a limit on the usefulness of the model. They could be used to design a plane that was composed of flat plates. The models worked. They served their purpose.”

      Obscurantism at its finest.

      Let me translate. (If Steve McIntyre were deconstructing this logical Gordian knot, he would say follow the pea.)

      -You develop a model for the purpose of predicting radar returns from a curved surface.
      -Its predictions are wrong.
      -It can predict returns from a flat surface.
      -It is therefore not wrong but “useful.”
      -Ignore the fact that it is not useful at all for the purpose for which it was developed.

      Let’s apply this lesson to GCMs.

      -GCMs (in their current form) were generated to predict future global warming to justify decarbonizing the global (or at least western) economy.
      -Their predictions of global temperature are wrong.
      -They are useful to a degree in predicting regional weather effects.
      -They are therefore not wrong, but “useful.”
      -Again, ignore the fact they are wrong for the intent for which we are claiming they are useful. (This is why Mosher always writes models are “useful,” not “useful for accurately predicting future temperature.”)

      “Useful” is a content-free term, like “fairness” and “for the children.” It means nothing without defining the “use” for which it is “ful.” If and when it fails to have the desired political effect, the warmists will simply reframe it into another term.

      Validated-accurate-robust-accurate

      Perhaps “efficient” is next?

      • Serious question: why wouldn’t models be “useful” for ruling out (to varying extents) certain theories?
        The “purpose” of GCMs was to estimate changes based on emissions scenarios. If those estimates don’t match reality, that is still useful information to know, isn’t it? That is, of course, assuming that the modeler accepts that their estimates are wrong.
        I would imagine, in the F-117 case, the question “why was the model wrong?” was important to getting the design right.

      • Steven Mosher

        Gary

        “-You develop a model for the purpose of predicting radar returns from a curved surface.
        -Its predictions are wrong.
        -It can predict returns from a flat surface.
        -It is therefore not wrong but “useful.”
        -Ignore the fact that it is not useful at all for the purpose for which it was developed.”

        You misunderstand.

        The models could only be built for the purpose of getting the return of a flat plate correct. If you tried to predict curves the answer was wrong. Using them for curved surfaces was misusing them.

        The same would go for trying to use CFD code to estimate whether the vortex from an F/A-18 LEX will impinge on the vertical tail and cause it to break after years. (A bitch of a problem.)

        The analogy with GCMs goes like this:

        GCMs are built to try to understand and explore how the climate works when we cannot perform controlled experiments.
        Folks are trying to use them to set policy.

        They are not wrong; they are being misused, in the same way that trying to use a flat-plate model to predict curved surfaces would be an unwise use, and the same way trying to estimate fatigue on an F/A-18 tail using CFD would be an operator error.

      • Steven Mosher,

        “GCMs are built to try to understand and explore how the climate works when we cannot perform controlled experiments.
        Folks are trying to use them to set policy.

        They are not wrong, they are being misused.”

        The models on which GCMs have been built may have been created for the purpose you say. That was my understanding of their history. But that time has long since passed.

        I must say this is the first time I have seen you say that GCMs should not be used to set policy. It is hard to think of a cogent argument for the consensus policy agenda without them.

        But I must admit there is a part of me that is waiting for the other shoe to drop and find out that one or more of the words you use in this simple, straight forward rejection of the misuse of GCMs, is not perhaps so simple and straight forward.

        Are you sure you would not like to “revise and extend your remarks”?

  14. Paul Vaughan

    _______

    FIRMLY calling BS on this:
    “natural internal variability”

    It’s NOT internal.

    Solar-Terrestrial-Climate 101
    _A_ / _B_ / _C_

    Nonnegotiable.

    _______

  15. They seek it here, they seek it there,
    Those modelers seek it everywhere.
    Is it in Heaven? Is it in Hell?
    That demmed elusive AMO!

  16. Schrodinger's Cat

    If I understand this correctly, the models fail to simulate known natural climate variability of major importance such as the AMO.

    If they cannot model known natural processes then why are the scientists claiming, in an authoritative manner, the number of degrees of warming that will result from a given increase in CO2?

    Why are they allowing trillions of dollars to be spent on the findings of models that cannot handle known, normal climatic processes? The first rule of modelling is that the model must be validated before you believe any of the output.

    • Steven Mosher

      Simple. The assumption is that ‘internal variability’ (AMO) must integrate to zero over time. It must integrate to zero; otherwise the system will have created energy from nothing.
      Consequently, the models can be used to predict the rise in temperature that is due to energy imbalance, while they may get shorter-time-scale redistributions of energy wrong.
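
      A toy illustration of that assumption (synthetic numbers, not model output): a zero-mean, AMO-like oscillation largely averages out over a century, so the long-term fitted trend is dominated by the forced component, while a short window can be dominated by the oscillation.

          import numpy as np

          years = np.arange(1900, 2014)
          forced = 0.008 * (years - years[0])                           # hypothetical forced warming, deg C
          amo_like = 0.15 * np.sin(2 * np.pi * (years - 1900) / 65.0)   # ~65-year, zero-mean oscillation
          temp = forced + amo_like

          # Over the full century the oscillation adds little to the fitted trend...
          print(np.polyfit(years, temp, 1)[0])          # near the forced rate of 0.008 deg C/yr
          # ...but over the last 15 years it can swamp the forcing, giving an apparent 'pause'
          print(np.polyfit(years[-15:], temp[-15:], 1)[0])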

  17. Irreducible Imprecision

    Rather than validation and verification it may be best to think of models in terms of plausibility and irreducible imprecision.

    ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms…’

    There are significant unknowns in, uncertainties about, and computing-power limits on model ‘plausibility’ that lead both to the difficulties of decadal projections and to irreducible imprecision in longer runs.

    ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision….

    Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).’

    http://www.pnas.org/content/104/21/8709.long

    • So the ‘average’ is just the output of all the models divided by n; so one group could knock out 123 different versions of CRUCRAP5 and change the average by sheer force of numbers.
      I have decided to improve my statistics by reading a sample 16 times and using 16 as my n, then presenting an SEM based on 16 repeated readings.

    • “think of models in terms of plausibility and irreducible imprecision”
      This sounds like an apology for a realist oil painter who did a so-so job of a portrait which might be recognizable as a likeness of its subject but might be mistaken for someone else.

    • Do not cut, fold, spindle, or mutilate

      No – the pdf is the weighted average of hundreds or thousands of perturbed parameter runs of single models.

      e.g. http://rsta.royalsocietypublishing.org/content/369/1956/4751.abstract

      Does this work? Begging the question. You first have to have a ‘plausible’ model in my opinion.

      And idiot analogies where they substitute for real knowledge are pointless JT.

  18. Schrodinger's Cat

    Would you consider 17 years of non-warming to be a degree of irreducible imprecision or something more serious?

    • Chief Hydrologist

      A change in the trajectory of surface temperatures, a change in the behaviour of ENSO, a shift in PDO modes, an abrupt change in AMO – all related – following the 1998/2001 climate shift. Good luck modelling that.

  19. Judith says: “Fig 3 is the key figure IMO, which I couldn’t figure out how to extract”

    I had made a copy for my upcoming book on climate models:
    http://bobtisdale.files.wordpress.com/2013/08/figure-3-from-ruiz-barradas-et-al-2013.png

    Regards

  20. Schrodinger's Cat

    I guess if you average enough spectra from enough ensembles of enough models you might eventually get close….

    • An infinite number of monkeys with paint brushes could knock out the right profile long before the models do.
      The monkeys would be cheaper and less trouble than the scientists.

  21. Alright, let’s actually establish something in these comments instead of a bunch of vague opinions that don’t matter next week.

    The AMO is claimed to exist.

    What is the evidence for it?

    I’ll start off with the only line of evidence I know for the AMO:
    1) A pattern in the global sea surface temperature record.

    Anything else?

  22. Schrodinger's Cat

    Perhaps GHG has an oscillation, maybe it is effective every 20 years.
    Then perhaps not.

  23. Schrodinger's Cat

    Let us speculate for a moment that GHG theory is fine as a lab experiment, but that in the real world convection and a host of other physical processes mean that the net effect of anthropogenic radiative warming is negligible.

    This would mean that the warming 20-30 years ago was mainly natural and the climate has returned, more or less, to the pre-1980 type climate. We now need to look at natural processes in more detail, particularly solar influences.

    Climate scientists must be hugely relieved that the planet is no longer in jeopardy, mankind can once again adopt energy policies that will save the poor from freezing to death. The scientists must be hugely grateful that mother nature is more resilient than they once thought.

    Oh, they aren’t? They are not relieved? They are not just disappointed, they’re hopping mad and issuing denials? Do they want AGW to be true?

    Sorry about this comment. It is late and I jest. Just be aware of the old saying that many a true word is spoken in jest, and I’m not joking.

    • Steven Mosher

      1. We’ve looked at solar variation in detail. It doesn’t explain the secular rise in temperatures since 1850.
      2. Therefore, it must be something else.

      A) GHGs + unicorns
      B) unicorns.
      C) GHGs

      The pause indicates that C is probably an incomplete answer.
      B– “natural variation” isn’t an explanation; it just restates our ignorance
      in other terms. A? Well, A means there is a bit we do understand (GHGs)
      and a bit we don’t.

      When you get around to describing “natural variation”, let me know when you come up with the lab experiment to test it.

      • Steven, “1. we’ve looked at solar variation in detail. It doesnt explain the secular rise in temperatures since 1850.”

        or the secular rise from 1700, what’s a guy to do?
        https://lh4.googleusercontent.com/-WJiDVg2R0KM/Ubx8SnnJOxI/AAAAAAAAIm4/ejLd98zWmpM/s800/giss%2520and%2520ersst%2520with%2520ipwp.png

        Could it be that volcanic aerosols and solar have longer-term influences on climate because of NH albedo feedback? That perhaps there is more to ENSO than meets the eye?

        https://lh6.googleusercontent.com/-0Ljwh9NTkS0/Ubx9dAfZtAI/AAAAAAAAInM/UROadZys8_k/s800/giss%2520and%2520ersst%2520with%2520ipwp%2520from%25200%2520ad.png

        Nah! it’s the unicorns and fish whot dun it.

      • The Black Death was caused by communal bathing, a rising cat population and a lack of prayer.

        When you say that ‘we’ve looked at solar variation in detail’, you are not being technically honest. You have no idea what the solar output was with respect to its spectrum.

      • Steve Mosher

        You left out a reason for the warming since 1850

        D) a long-term variability (on centennial time scales) or forcing by some mechanism of which we are not yet aware

        (Sounds a lot more reasonable than “unicorns” – or “GHGs”, for that matter.)

        Max

      • Dazzlingly flashy
        Unicorns leap inflections.
        Methusalah stirs.
        =========

      • “B– “natural variation” isnt an explanation it just restates our ignorance”

        So?

      • Steven Mosher

        Well, capt, when you have an explanation for stuff, let me know what lab experiment you did.

        That’s actually quite funny. Everybody here talks about the AMO and natural variation as if they were something they brought into the lab to study. Where is the rigor? They will all argue that CO2 experiments in the lab are just that, but none of them seems to be bothered by the utter lack of lab experiments (controlled experiments) about sunspots and natural variation.

        Odd, that.

      • Matthew R Marler

        Steven Mosher: B– “natural variation” isnt an explanation it just restates our ignorance
        in other terms.

        True, but so what? If we do not have a complete understanding, and if we know that there are flaws/inaccuracies in the CO2 model, then it means that we have some or much yet to learn. Since not all the effects of solar variation on climate have been explored in detail, that is one place to explore: what, for example, are the effects of the wide swings in UV radiation? We don’t know what they are, but 20% or so of incident radiant energy is in the UV range, all or almost all of it is absorbed in the upper atmosphere, and the UV varies by 100%, so there is no reason to believe that the effects of the variation are negligible. So why neglect them? Study of possible climatological effects of UV variation has barely begun, as far as I am aware.

      • Steven Mosher

        D) a long-term variability (on centennial time scales) or forcing by some mechanism of which we are not yet aware

        Yes, I used to be in the bender school of LTP. But then I asked myself:
        Are we really doing anything more than pointing to the logical possibility that the best explanation may someday be replaced?

        Do I have any lab experiments that suggest LTP? If I “find” LTP does that really help? The existence of LTP does not entail that CO2 won’t warm the planet. LTP only impugns attribution arguments.

        So yes, I drop an apple. It falls. I postulate a force called gravity. I accept this posit. I do other science that rests on that assumed truth. That science works.
        I build more on that foundation. It works.

        Then you come along and suggest that it might be gravity And unicorns.

      • “I postulate a force called gravity.”

        If your science sucks, just tell a gravity story. All better.

        Andrew

      • Steven Mosher

        “B– “natural variation” isnt an explanation it just restates our ignorance”

        So?

        ###############

        So then you are not doing science. You are doing something else.

        The question is
        1. What explains the wiggles

        and you are answering “wigglehood”

      • Wigglehood is better than tying old ladies to posts and then burning them alive.
        Scapegoating an innocent is a long established way of dealing with things you don’t understand.

      • Steven Mosher, “well capt when you have an explanation for stuff let me know what lab experiment you did.”

        Now that is just silly. The lab experiment has been done and there are plenty of field experiments that verify the results: CO2-equivalent gases produce a small radiant impact of ~3.7 Wm-2 per doubling. That makes CO2 a great tracer gas. CO2-equivalent forcing has a “benchmark” no-feedback response of 3.3 Wm-2 per degree, or 1.12 C per 3.7 Wm-2. That helps in determining the range of natural variability.

        The hard part is the natural variability range and typical time constants. That is best done by looking at the “global” power source, the tropical SST, and the average temperature of the oceans. The maximum Th is ~300 K and the average Teff is 277 K, which has an effective energy of 334.5 Wm-2, equal to the average effective DWLR energy of 334 Wm-2. But then Lindzen and others have already pointed that out.

        The “benchmark” response is limited by the specific heat capacity of the atmosphere. Without adding mass to the atmosphere, the “benchmark” value can’t change significantly; it is almost like the lapse rate is fixed, doncha know. You can, if you like, use the ideal gas laws to determine the specific gas constant for 1000 millibar and 300 K and find that, just as classic physics should indicate, that value is 3.3 Wm-1-K, dangest thing. Of course I could have the units wrong, perhaps someone should double-check that?

        The rest is just accounting and curve fitting.

        http://redneckphysics.blogspot.com/2013/08/is-picture-worth-trillion-dollars.html

        0.8 C per doubling based on a 1980 to 2010 baseline is your approximate “sensitivity” to atmospheric forcing with the average temperature of the oceans at 4 C. The “sensitivity”, though, is non-linear and dependent on the Tave of the oceans. You need a kick-butt ocean model to figure out all of the pseudo-cyclic oscillations on time scales up to 1700 years, or you can just add a big ol’ +/- 2 C, which is about +/- 10 Wm-2.

        That makes the radiant portion of the atmospheric effect about 1/3 of the total, which remarkably agrees with ASHRAE data for radiant barriers. I think they have done an experiment or two on that subject.

        What amazes me is getting past 1.12 C per doubling requires a great deal of unverifiable assumptions and conjecture that the observations tend to ignore.
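
        For the arithmetic quoted above (the standard no-feedback numbers only, not the rest of the accounting), here is a back-of-the-envelope check, assuming the conventional ~3.7 Wm-2 per CO2 doubling and a ~3.3 Wm-2 per K Planck-type response:

            import math

            F_2xCO2 = 3.7   # Wm-2 per doubling of CO2 (commonly cited value)
            planck = 3.3    # Wm-2 per K, the "benchmark" restoring rate quoted above

            # No-feedback warming for one doubling
            print(F_2xCO2 / planck)                           # ~1.12 K, the figure in the comment

            # Same relation for an arbitrary change, e.g. 280 -> 400 ppm
            dF = F_2xCO2 * math.log(400 / 280) / math.log(2)
            print(dF / planck)                                # roughly 0.6 K with no feedbacks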

      • Matthew R Marler

        Steven Mosher: so then you are not doing science. You are doing something else.

        Outlining what is not known, and quantifying it, are parts of science. In the late 19th century, to take your Newton/gravity example, it was not known why Newton’s laws produced an inaccurate model of the precession of the perihelion of Mercury, and considerable work went into quantifying it as exactly as possible. To some, the error was of no practical consequence (it still is of no practical consequence), but to others it was evidence of something yet to be learned. And in modern gravity science, similar anomalies lead to (hopefully testable) conjectures about cold, dark matter, suggestions that Newton’s second law might be slightly inaccurate, and so on.

        Your pronouncements about what constitutes “science” are persistently narrow. The first stage of “science” is the recognition that there might be something to learn. Your recommendation about AGW usually includes the admonition that radiative physics is sufficient, and other energy flows don’t need to be learned, even in cases where the radiative physics is demonstrably inadequate.

      • He’ll learn. He’s pretty quick to pick up the beat. But sometimes you wanna reach for the 2X4 and bang on the, well, the drums.
        ================

      • Science says we can explain this warming easily, with some to spare, which leads to needing (also observed) aerosol effects; but you say no, it can’t (reason missing), then something about natural variability, which is not a theory at all.

      • I didn’t know that the objective was science. I thought the objective was policy. What is the objective?

      • Steven Mosher

        Your “unicorn” analogy misses the logic of the argument.

        It is simply this:

        Our knowledge of what makes our climate behave as it does is still filled with immense gaps of knowledge: large “uncertainty” as our hostess calls it.

        (Rumsfeld used the expression “unknown unknowns”; Taleb calls them “black swans”.)

        Therefore, we are unable to draw the conclusions you would so like to draw in your oversimplified analysis.

        If you want to refer to “lack of knowledge” (or “uncertainty”) as “unicorns”, so be it. I find it a bit arrogant, so I prefer:

        A) GHGs + unknowns
        B) unknowns
        C) GHGs

        Max

      • manacker, to have your gaps in understanding you have to completely dismiss what is already known. That step is missing.

      • Jim D

        What you just wrote sounds like double-talk (or gibberish) to me.

        Come with specific points if you wish to send a message.

        Max

      • manacker, go to our concurrent conversation. The shapes of the curves are compelling. There is no gap in understanding why these look so similar, nor is it just coincidence. I would not like to be in your shoes explaining it away as coincidental while something else yet unknown is going on.

  24. It is almost like we all went out to dinner together, your tribe talked all night about the weather, and then the rest of us got stuck with a billion-dollar check. Look around us: are there bigger problems than the air we breathe and its temperature?

    • Tom

      are there bigger problems than the air we breathe and its temperature?

      I can think of MANY problems that are “bigger”.

      In fact, I have absolutely NO PROBLEM with either the air I breathe or its temperature.

      Do you?

      If so, what are they?

      Max

    • Steven Mosher

      A conceptual model is not a model. It is words on a page. I cannot test your words on a page.

      • You call it a conceptual model, yet deny it is a type of model. WUWT?

      • Steven Mosher

        Jim2, if you read his page you will see he refers to it as “a conceptual model”.

        Personally I would not call it a model; I’d call it a fairy tale. Words on a page.

        It always helps to look at things in that reductive manner as a test.

        Take a paper by Mann. It is not science. It is words about the scientific behavior he claims to have engaged in. It is not the science itself.
        He says “I took tree rings and did this calculation”; that’s not the science. That’s words pointing to the science. What we want is access to the science: the data and the code. So, reading Wilde, I always see the same thing. I see words on the page. I don’t see any instructions (“if you do X you will observe Y”),
        and there are never any numbers: rain will increase? Where? When? How? How much? How long?

      • I see your point SM.

    • Curious George

      Not a single equation. Congratulations. It is ready for publication in Scientific American.

  25. The authors have failed to present information that the dogmatists of Western climatology can ignore so there’s still that next government paycheck and global meeting on AGW to look forward to until all us cows finally march off the cliff.

    • Wagathon | August 16, 2013 at 7:39 pm said: ”The authors have failed to present information that the dogmatists of Western climatology can ignore so there’s still that next government paycheck”

      the Urban Sheep is paying for it; no worries !!!

  26. The order in which both natural and anthropogenically caused climate perturbations are investigated is important. It makes sense to investigate the largest perturbations first. In that regard, understanding and predicting ENSO has to be given priority over, say, the AMO or PDO, or other unknown unknowns. ENSO does not have a fixed period, which makes it more difficult to investigate. But it has huge economic and health effects which alone should give it some priority.

  27. Matthew R Marler

    While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales. In view of the climate model underestimation of natural internal variability on multi-decadal time scales and failure to simulate the recent 15+ years ‘pause’, the issue of fitness for purpose of climate models for detection and attribution on these time scales should be seriously questioned. And these deficiencies should be included in the ‘expert judgment’ on the confidence levels associated with the IPCC’s statements on attribution.

    It seems to me that the first sentence in that paragraph was largely nullified by the rest of it. Whether you refer to “verisimilitude”, “accuracy”, “validity”, “fitness-for-purpose” or whatever, the property must be demonstrated to be present before the model is relied upon. The apparently increasing gap between model forecasts/extrapolations and observations demonstrates the opposite, namely that the model should not be relied upon.

  28. Judith.

    Here is Figure 3

    http://oi42.tinypic.com/33m8ygk.jpg

    For future reference

    You can make a screen capture (Alt+Print Screen) with the figure showing on the screen.
    Open PowerPoint and paste the image.
    Crop the image to just include the figure.
    Right-click on the image and select Save as Picture.

  29. The 30-year average temperature in recent years has been 0.7 degrees warmer than the 30-year average a century earlier. Without any forcing change (GHGs, solar, aerosols, volcanoes), how much warmer would we be than a century ago? That is the “natural internal variability” magnitude. The problem with this having any significant magnitude in a 30-year average is the Planck response which is a natural restoring force to the equilibrium temperature. No global average perturbation can be sustained for long due to this restoring force, which is actually the tendency for a warmer surface than radiative equilibrium to radiate to space more. We see it in action after El Ninos for example where there is a quick radiative cooling that removes the surface temperature excess within a few months. Understanding the strength of the Planck response helps to understand how constrained natural internal variation is, and no one has proposed anything that could even be a couple of tenths sustained for 30 years, a small fraction of the 0.7 degrees that have occurred. Therefore about 90% or more of the 0.7 degrees must be forced, and the only question is how to attribute the forcing between GHGs, aerosols, the sun and volcanoes. The question becomes much clearer when century scales are used rather than decadal scales because of the Planck radiative restoring force on temperature.
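
    A rough sketch of that restoring-force argument, with assumed, illustrative numbers (a Planck-type response near 3.3 Wm-2 per K and an ocean mixed layer equivalent to ~50 m of water): an unforced surface temperature excursion decays back toward equilibrium within a few years, which is why a large unforced offset is hard to sustain in a 30-year mean.

        import math

        lam = 3.3        # Wm-2 per K: assumed Planck-type restoring rate
        C = 4.2e6 * 50   # J m-2 K-1: ~50 m ocean mixed layer (4.2 MJ m-3 K-1 times depth)

        # An unforced anomaly T' obeys C dT'/dt = -lam * T', so it decays with e-folding time C/lam
        tau_years = (C / lam) / (3600 * 24 * 365)
        print(tau_years)                       # about 2 years with these assumptions

        # Fraction of an initial anomaly that survives a 30-year averaging window
        print(math.exp(-30 / tau_years))       # effectively zero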

  30. AMO is also about sea ice.

    The BIG melt in 1920-1940 slavishly followed the AMO.

    http://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Amo_timeseries_1856-present.svg/2000px-Amo_timeseries_1856-present.svg.png

    (see Tony B on this blog or Frank Lasner at http://hidethedecline.eu/pages/posts/arctic-sea-ice-data-collected-by-dmi-1893-1961-259.php )

    Late 1930s sea ice extents are in the early 2000s range, but may be even lower, because regions without observation had conservatively been presumed to be ice covered.

    If the 1920s melt started with much more ice than the 1979-2012 melt, the total melted ice volume may not differ much between both periods.

    Also, a reanalysis of 1964 with rediscovered satellite data film rolls resulted in a massive reduction from 9.2 mill km2 -> 6.9 mill km2, also correlating well with the then still positive AMO.

    Reduction 9.2 -> 8.28 mill km2
    http://www.the-cryosphere.net/6/1359/2012/tc-6-1359-2012.pdf
    Further reduction -> 6.9 mill km2.
    http://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Amo_timeseries_1856-present.svg/2000px-Amo_timeseries_1856-present.svg.png

    1964 sea ice is now in the ballpark of 1990-2000 and equals 2001.

    And preceding years appear to have had even less ice!

    And, of course, the 1979-2012 melt also correlates very nicely with the AMO.

    Wherever sea ice has been checked with improved data, AMO emerges as the dominant driver.

  31. Some unicorn must have had a very full bladder, holding in enough water to make the Gulf Stream slow from the MWP to the LIA. It must have been quite a relief to let it all out at the end of the LIA.

    http://www.whoi.edu/science/GG/people/wcurry/Curry_pdf_files/nature05277.pdf

  32. There are more ”climate models” than there are models on the catwalk

    climate models are more anorexic than the models on the catwalk

  33. For models, decadal prediction will always be a problem because this is a deterministic problem, and chaos does not help with such predictions (Lorenz), especially because the initial state has to be accurate and the ocean 3D state is poorly known or measured. People conflate decadal (AMO) prediction, where climate change is actually a small component, with long-term climate prediction, where climate change dominates, as though they are equivalent in some way. They are not. Short-term prediction is trying to predict the noise, while long-term prediction predicts the signal that becomes evident over a few decades.
    A prediction from 1980 with a 3 C per doubling sensitivity (if they knew we would be at 400 ppm by now) would have come up with a rise of 0.75 C, which is not bad, as it underestimates the land rise but overestimates the ocean rise.

    • If you want to have a good simplified version of the AGW model that only requires a calculator, one is

      dT=3*ln(x/338)/ln(2)

      where x is the CO2 amount in some year and dT and 338 are relative to 1980. Plug in today’s x=400 and you get 0.73 which is below the land temperature rise, but above the ocean rise, and, of course, well above natural variability in amplitude.
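
      As a check on the quoted number, the one-liner above can be evaluated directly (taking the stated 3 C per doubling and the 338 ppm, 1980 baseline at face value):

          import math

          def dT_since_1980(co2_ppm, sensitivity=3.0, baseline_ppm=338.0):
              # The simple log-CO2 model quoted above: warming relative to 1980
              return sensitivity * math.log(co2_ppm / baseline_ppm) / math.log(2)

          print(round(dT_since_1980(400.0), 2))   # 0.73, the value given in the comment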

    • Interesting idea, Jim D. I suspect, however, that if we started at 1950 or at 1990 or at 2000 or at any other decadal point, the fit may not be as good.

      If one were to have access to the CO2 data for each decadal point from 1950 (say) onwards, would this simple model provide a reasonable range of projections to 2050?

      Anyone?

      • The problem with 1950 is the growth of the aerosol effect after that time and through the 1970s, which acted in the opposite direction. That growth slowed, relatively, by 1980. Anyway, the point was that this rise was quite predictable with moderate CO2 sensitivity even ignoring aerosols after 1980; or, going from IPCC forcing, aerosols are lately being canceled by the growth in other GHGs, leaving CO2 as a good proxy for the mean forcing.

  34. I think this quote should be the focus of a completely separate post.

    ““GCMs are built to try to understand and explore how the climate works when we cannot perform controlled experiments.
    Folks are trying to use them to set policy.

    They are not wrong, they are being misused.”

    Title: Steven Mosher Admits GCMs Should Not Be Used to Set Policy

    • For what little it’s worth, I, as a full-fledged, card-carrying, knuckle-dragging, mouth-breathing skeptic (sorry, “Skeptic”), have absolutely no problem with GCMs as tools to explore how climate works, so long as they are not used as arguments for drastic policy decisions (aka decarbonization/cap and trade/carbon taxes).

      I agree 100% with Steven Mosher that that is a misuse of GCMs, and hereby call on the IPCC, Real Climate, James Hansen, Al Gore and the remainder of the CAGW cast of thousands to cease and desist.

  35. Apropos of the issue of “misuse.”

    “A China zoo has been forced to apologise after it tried to pass off a dog as a lion. Angry visitors to the People’s Park in Luohe, Henan province, complained when the ‘African lion’ started barking. Zoo staff said they had pretended the Tibetan mastiff was a lion because they could not afford the real thing, local media reported. The state-run Beijing Youth Daily claims the zoo commonly replaced exotic animals with common species.”

    http://www.nationalreview.com/corner/356026/arthur-daleys-zoo-andrew-stuttaford

    GCMs – the lions that barked.

    • I can’t recall laughing this much since last Michaelmas!
      Er … ‘when the ‘African lion started barking.’ Hmm..
      jest when i’mfeelin’ cheesed off with hum-inanity-, meself
      included, something like this. Thx GaryM
      BC

  36. Here’s one to give Fan’s adrenaline a boost

    The Age of Global Warming is Over
    http://www.quadrant.org.au/magazine/issue/2013/7-8/the-age-of-global-warming-is-over

    And here is the empirical evidence (see the ‘Activity Timeline’ charts for each link below):
    Climate Change
    http://climatechange.carboncapturereport.org/cgi-bin/topic?
    Carbon Credits
    http://carboncredits.carboncapturereport.org/cgi-bin/topic?
    Alternative Energy
    http://alternative.carboncapturereport.org/cgi-bin/topic?

    It’s all over bar the shouting. Tell the reality deniers.

  37. ” Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.”

    Robust meaning capable of reproducing consistent results under virtually every scenario?
    Hmm, I don’t think so.

    “The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgment is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change.”

    Ultimately one should not pretend to use faulty and incomplete data as a tool of action as the predictions are a form of guesswork or betting.
    Better to leave the money in one’s wallet/purse.

    Expert judgment would be to avoid making a judgment on incomplete and faulty data.

    • Where oh where are these robust studies?

      Michael Mann’s Hockey Stick Study?

      Steig et al’s Antarctic Warming Study?

      Model-simula-shuns-of-what’s -goin’-on-in-our
      -compluh-compluh-compluh-cated-inter-acting-
      climate -system?

    • “Expert judgment would be to avoid making a judgment on incomplete and faulty data.”

      That’s a contradiction of course: “Expert judgement is not to make a judgement”

      It’s impossible not to make a judgement at this point, given we know we are pushing up the CO2 level in the atmosphere.

      The decision is to either continue doing this, or to curtail emissions. Both decisions require a judgement to be made.

      You judge that because the data is faulty and incomplete we should continue emitting. Someone else might judge that because the data is faulty and incomplete we should curtail emissions and only consider resuming them at a time when the data was more mature and complete and could reassure us that elevated CO2 levels were safe.

      • And how precisely do you propose imposing your expert judgment on the Russians, Chinese, Indians and Africans?

      • Indeed, at some point bracing for impact is a better strategy than trying to grab the wheel.

      • The precious conceit of a Western elite
        Bangs head on to BRIC wall at speet.
        ====================

      • lolwot,

        That wasn’t intended as a sarcastic remark (well, not just sarcastic).

        What do you propose to actually DO about the Russians, Chinese, Indians and Africans who will emit more and more GHGs?

        And why should anyone agree to hamstring our economy as you CAGWers propose, until you can explain how you will do it?

  38. Going to repost as I didn’t get any satisfactory evidence provided. Going to (annoyingly) keep reposting this to provoke someone into providing evidence, or until I feel an exhaustive point has been reached, at which point I will conclude there is insufficient evidence for the AMO apart from a tenuous “cycle” in the SST record.

    The AMO is claimed to exist.

    What is the evidence for it?

    I’ll start off with the only line of evidence I know for the AMO:
    1) A pattern in the global sea surface temperature record. Eg:
    http://www.cgd.ucar.edu/cas/catalog/climind/AMO_fig123.gif

    The top graph is the SST record of the North Atlantic, which is what has actually happened (although I note my skeptic friends shouldn’t be accepting such climate-scientist instrumental records as the basis for evidence, should they!).

    The AMO is derived (imagined? fabricated?) by detrending the top graph to produce what appears to be a cycle left over (the bottom graph).

    Of course it’s a far from perfect oscillation and it has fewer than 3 cycles. Seems like tenuous evidence on which to base the existence of an actual oscillating phenomenon.

    So there must be additional evidence for an important 30/60-year oscillation in the North Atlantic, right?

    *Note that when correlating the AMO with hurricanes or fish stocks, are they actually just finding a correlation between these things and North Atlantic SST (the top graph)? There’s an obvious mechanism for SSTs to affect things. Detrended SSTs (i.e. an “AMO”), not so much.
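
    For what “detrending the top graph” amounts to mechanically, here is a minimal sketch (with a synthetic series standing in for the North Atlantic SST record; the actual AMO index uses area-averaged North Atlantic SST anomalies):

        import numpy as np

        years = np.arange(1856, 2014)
        # Synthetic stand-in for an observed North Atlantic SST anomaly series, deg C
        sst = 0.005 * (years - years[0]) + 0.2 * np.sin(2 * np.pi * (years - 1856) / 65.0)

        # "AMO index": remove the linear trend and keep whatever is left over
        slope, intercept = np.polyfit(years, sst, 1)
        amo_index = sst - (slope * years + intercept)

    Whatever remains after that detrending step is, by construction, a pattern in the SST record itself, which is exactly the point being made here about it not being independent evidence.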

  39. Let me bring this out as a new part, as I feel it is important.

    @@@@@
    Jim D | August 16, 2013 at 11:34 pm |
    Eric H., you were right about the 1 C no-feedback response. Jim Cripwell was just blowing smoke. Of course a radiative imbalance can’t be opposed by convection. We’re talking about earth having to lose extra energy to space. Convection doesn’t cut it as an explanation. Cripwell believes some of the weirdest things.

    @@@@@

    Jim D is WRONG. We are not talking about the earth losing energy into space. We are talking about how energy passes from the surface to the TOA, where it is radiated into space. And this transfer of energy from the surface to the TOA is influenced by conduction, convection, the latent heat of water and radiation; NOT radiation ALONE.

    • David Springer

      The energy imbalance is on the order of 0.5 W/m2, and it is not directly measured because our satellites have only +/- 4 W/m2 resolution on outgoing radiation.

      In any case, even assuming the modeled figure of 0.5 W/m2 is accurate, that’s only enough to raise ocean basin temperature 0.02 C/decade. That’s not significant. If all or most of that energy were concentrated in the atmosphere and ocean mixed layer it would be significant, although still not particularly alarming, because we’d still need a replacement for fossil fuels before a century has passed. Direct conversion of sunlight and CO2 to fuel by genetically engineered microorganisms using non-arable land and non-potable water is the inevitable solution and it isn’t far off:
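      As a rough check of the 0.02 C/decade arithmetic above, here is a back-of-envelope version; the Earth’s surface area, the ocean mass and the seawater specific heat are round textbook numbers, not figures taken from the comment.

      ```python
      # Back-of-envelope: warming of the full ocean from a sustained 0.5 W/m2 imbalance.
      imbalance = 0.5                      # W/m2, assumed global energy imbalance
      earth_area = 5.1e14                  # m2, Earth's surface area
      seconds_per_decade = 10 * 365.25 * 24 * 3600
      ocean_mass = 1.4e21                  # kg, approximate mass of the global ocean
      specific_heat = 3990.0               # J/(kg K), seawater

      energy_per_decade = imbalance * earth_area * seconds_per_decade     # joules
      delta_t = energy_per_decade / (ocean_mass * specific_heat)          # kelvin
      print(f"{delta_t:.3f} K per decade")  # ~0.014 K/decade, same order as the 0.02 figure
      ```

      Spreading the same energy over only the ~100 m mixed layer instead of the full ocean pushes the number up by well over an order of magnitude, which is the distinction the comment is drawing.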

      • David Springer

        Not sure why the above video doesn’t start at 41 minutes in. Let’s try again; it starts at zero, so move up to the 41-minute mark to get to the part on fuel manufacture.

      • David Springer

        WordPress is modifying the youtube URL to make it embedded and it loses the starting time when it does. Skip up to 41 minutes into the video.

      • David, you do realize that CO2 in the atmosphere is not an infinite resource, and thinking you can just suck this precious resource from the earth’s fragile atmosphere unchecked is madness. You’ll throw the earth into catastrophic cooling and no doubt at the same time deny that it’s happening. Will you people never learn?

    • Jim Cripwell, it is about the ability of the earth to lose heat to space. GHGs act as an insulator keeping the surface warmer than it would be without them. Adding insulation, for the same surface heating by the sun, warms the surface. The atmosphere maintains a temperature difference of 33 K for a heat loss of 240 W/m2 giving an R-rating of 0.14 W/m2/K (R-0.86 in American units). Small but significant.

      • I inverted the units: 0.14 K/(W/m2) is the MKS R-rating used in Europe.
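        The arithmetic behind that figure is just the temperature difference divided by the flux; the sketch below redoes it with the units as corrected, and converts with the standard factor of 5.678 to a US-customary R-value (which comes out a little below the R-0.86 quoted above).

        ```python
        # Atmospheric "R-value" as estimated in the comment: temperature difference
        # sustained per unit of outgoing heat flux.
        delta_t = 33.0     # K, greenhouse warming of the surface
        flux = 240.0       # W/m2, outgoing longwave radiation
        r_si = delta_t / flux        # m2*K/W (i.e. K per W/m2), about 0.14
        r_us = r_si * 5.678          # standard SI -> US R-value conversion, about 0.78
        print(f"R (SI) = {r_si:.2f}, R (US) = {r_us:.2f}")
        ```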

      • Jim D, you write “GHGs act as an insulator keeping the surface warmer than it would be without them.”

        Precisely. And, supposedly, the more CO2 you add to the atmosphere, the greater the insulation. So as you add more CO2 to the atmosphere, the slower heat gets to the TOA. But a change in forcing caused by more CO2 is independent of radiation. The atmosphere responds to remove the imbalance with ALL modes of energy transfer: conduction, convection, the latent heat of water, and radiation, not just radiation alone.

      • Jim Cripwell, no, radiation to space is only from infra red. Convection plays no role there. To increase the IR it has to be warmer, unless you reduce the GHGs.

      • Jim D., you write “Jim Cripwell, no, radiation to space is only from infra red.”

        I agree, and have said so. The earth’s surface does not radiate into space, except for a small window in the IR spectrum. It is the TOA that does most of the radiating into space. But there must be a source of energy feeding the TOA, so it can continue to radiate into space. Otherwise the temperature would drop, and the radiation would slow down. That energy is fed to the TOA through the atmosphere. So the key is how the atmosphere responds to a change of forcing, not how the TOA responds to a change of forcing. The atmosphere responds to a change of forcing by changes in conduction, convection, the latent heat of water, and radiation. Not radiation alone.

      • Jim Cripwell, what is radiated out of the TOA depends on the whole atmospheric profile down to the surface. This profile in the troposphere is set by the surface temperature via convection. The only way to warm the troposphere is for the surface to get warmer. That is why the TOA radiation depends so much on the surface temperature.

      • Jim D,
        “This profile in the troposphere is set by the surface temperature via convection.”

        The profile is also affected by the atmosphere’s thermal properties, which will change as you add more CO2. Being a GHG, CO2 has thermal properties quite different from the bulk of the atmosphere.

      • Jim D. I give up.

      • People are asserting that a 1% change in forcing has consequences in atmospheric convection and ocean circulation that we can’t even fathom when this is well within the seasonal and regional variation that we already see and account for. When it gets to 3 or more degrees, more new things may start happening that are outside anything seen in human existence, but until then it is just a 1% perturbation where variations are quite linear.

      • JimD, “When it gets to 3 or more degrees, more new things may start happening that are outside anything seen in human existence, but until then it is just a 1% perturbation where variations are quite linear.”

        That is the same linear no-threshold model mentality that leads to tech bubbles, housing bubbles, financial bubbles and climate bubbles. The data doesn’t support 3 C, Jim, and the estimate keeps falling because it never should have been that high in the first place.

  40. I’m slightly confused by this.

    1) Models have run hot and are probably giving the wrong results.
    2) Long term periodic activity is not well modelled and could account for the difference between models and measurement.
    3) The models can be adjusted to account for internal variability as long as we knew what it was.
    4) Does this help? Are we in the same territory as “climate sensitivity”, in which water vapour amplification is incorporated into models to allow them to attribute temperature rise to CO2?

    My rather simplistic questions stem from my distrust of modelling gained from biology, in particular cardiac arrhythmias. In this system, which is highly non-linear, results are exquisitely sensitive to initial conditions and parameter selection. We have the advantage of much greater ability to perform experiments than in climate science and most of these show that model outputs are complete rubbish. We are nowhere near producing realistic models simply because we do not understand the biophysics well enough, yet the system is orders and orders and orders of magnitude simpler than climate.

    I am very suspicious of a system of modelling that is iterative – bad results, incorporate more features, still bad results, incorporate more features … – and of the belief that this process exposes the basic physics of the system.

    • “My rather simplistic questions stem from my distrust of modelling gained from biology, in particular cardiac arrhythmias. In this system, which is highly non-linear, results are exquisitely sensitive to initial conditions and parameter selection.”

      Well, climate model output evidently isn’t, so it’s a completely different situation.

      • Here is another one lolwot, hope you don’t have a heart attack…

        http://news.goldseek.com/GoldenJackass/1375128000.php

        invisible is the neo password for science, around people today.

      • Biologists use quite sophisticated models, but take their outputs with a pinch of salt and not as gospel.
        I am growing primary human cancers in mouse models, however I am not so stupid as to believe I am replicating what happens in a human being. It is but one step toward treating human beings with cancer.

      • @ Doc Martyn,

        I quite agree with you. I don’t think that the mouse is a particularly good model of human arrhythmias, although there are those who are convinced that it is. You can get some similarities via gene knock-outs etc., but the differences in scale and, more important, how arrhythmias actually arise in man are not represented, in my opinion.

        I have used mathematical modelling (or “experiments in silico”, as the pretentious call it) to understand some of the measurements I make in man, but the idea that one can have a global model incorporating mechanics, electrophysiology etc. that really represents what occurs in a patient is, in my view, premature.

      • @ lolwot.
        You do talk complete rubbish on occasions. Do you really believe that the climate is a linear process? Do you really believe that parameterisation has no effect on the model outputs? Do you really believe that initial conditions have no effect on a non-linear system?

        I realise that these questions might be a bit difficult, but why don’t you think about it during a period of silence?

  41. Antonio (AKA "Un físico")

    Is there any scientific article summarizing the predictions of climate models vs. observations so far? In these models, how are radiation and convection in the earth’s atmosphere and at the earth’s surface related? Is there any evidence from this side for or against the hypothesis “man causes climate to change”? (I am only interested in links to scientific articles, [i.e. not in your opinions]).

  42. Antonio, “(I am only interested in links to scientific articles, [i.e. not in your opinions]).”
    Well I am crushed, crushed I tell ya!

    http://www.academia.edu/4210419/Can_climate_models_explain_the_recent_stagnation_in_global_warming

    For the radiation/convection issue consider the Earth Energy Budgets,

    http://www.iac.ethz.ch/doc/publications/StephensLiWild_etal_NatureGeoscience.pdf

    That is an update of the Earth Energy Balance in light of the latest global observations. The biggest differences are that the latent and sensible (convective) fluxes are higher than previously estimated, and the atmospheric window (the room for CO2 to have an impact) is half of what was previously estimated.

    Sorting out what is really observation from what is “modeled” observation in the Earth Energy Budgets has gotten a bit tricky.

    http://www.ecd.bnl.gov/steve/pubs/Earth%27sEnergyFlows.pdf
    That is the “accepted” for publication version.

  43. Let me bring this out as well, as a new piece.
    @@@@@
    lolwot | August 17, 2013 at 7:28 am |
    The Beenstock paper is utter BS.
    Proving that CO2 isn’t a greenhouse gas using statistics.
    Ridiculous. Skydragons have more credibility in THEIR arguments than that.
    @@@@@

    Spoken like a true believer in the Church of CAGW!!

    It never fails to amaze me how the warmists on this blog try and put words in my mouth, and claim, or imply, that I have written something I have not. I have never, I hope, claimed that anyone can prove that CO2 is not a greenhouse gas; I have always stated that CAGW is a perfectly viable hypothesis.

    What Beenstock et al does is to put a new interpretation on what little empirical data we have. That interpretation gives a strong indication that the value of the CS of CO2 is indistinguishable from zero. Now lolwot, what is wrong with that specific claim?

    • You are the true believer here, you’ve misinterpreted the Beenstock paper to fit what you want to believe.

      • lolwot, you write “you’ve misinterpreted the Beenstock paper to fit what you want to believe.”

        How have I misinterpreted Beenstock et al? Details please.

      • From the abstract: “we find that greenhouse gas forcings have a temporary effect on global temperature. Because the greenhouse effect is temporary rather than permanent…”

        It’s nuts. Greenhouse effect is temporary??

    • “the value of the CS of CO2 is indistinguishable from zero”

      That statement is in contradiction to CO2 being a greenhouse gas.

      The value of the CS of the number of gold medals in the Olympics is indistinguishable from zero.

      The value of the CS of CO2 is not.

      • lolwot, you write ““the value of the CS of CO2 is indistinguishable from zero”
        That statement is in contradiction to CO2 being a greenhouse gas.”

        No it is not. CO2 is a greenhouse gas. Its absorption response almost certainly follows Beer-Lambert. Any substance that follows the Beer-Lambert law of absorption eventually becomes saturated. When saturation is approached, the amount of new absorption becomes indistinguishable from zero.
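        For what the Beer-Lambert claim amounts to numerically, here is a minimal sketch: the absorbed fraction 1 - exp(-tau) flattens out as the optical depth tau grows, which is the “saturation” being invoked. The optical depths below are illustrative only, not measured CO2 values, and this single-path picture ignores the band wings and pressure broadening that the mainstream radiative-transfer argument relies on.

        ```python
        import math

        # Beer-Lambert along a single path: transmitted fraction = exp(-tau),
        # absorbed fraction = 1 - exp(-tau), with tau proportional to absorber amount.
        for tau in [0.5, 1.0, 2.0, 4.0, 8.0]:   # illustrative optical depths only
            absorbed = 1.0 - math.exp(-tau)
            print(f"tau = {tau:4.1f}  absorbed fraction = {absorbed:.4f}")
        # Each doubling of tau adds less and less absorption as the curve approaches 1,
        # which is the diminishing-return behaviour the comment is appealing to.
        ```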

      • Whatever, this is skydragon stuff I am not interested in continuing this.

      • I’m not saying you are wrong, I am just saying these skydragon-type debates just go in circles, and I don’t have time for it right now.

      • Somebody tell lolwot that the squiggly line slopes downward in spots.

        Andrew

      • The ‘saturation’ argument was thought to be correct….about a hundred years ago.

      • Michael, “The ‘saturation’ argument was thought to be correct….about a hundred years ago.”

        It was correct 100 years ago and it is correct now. For CO2 not to be “near” saturation would require the construction of a precise resonance chamber with controlled pressure. In the real world, warming causes expansion.

    • lolwot, “It’s nuts. Greenhouse effect is temporary??”

      As the system approaches a new “equilibrium” condition, the energy imbalance decreases to zero. It is like charging a battery (a toy sketch of this relaxation is appended at the end of this comment). The limit to the battery charge is the energy storage capacity of the battery. So yes, once the oceans reach a charged state, the CO2 portion of the WMGHG effect ends.

      Since y’all like to bring up the distant past, at some point in Earth’s history the “average” surface temperature was likely 10 C warmer than it is today. At that temperature the atmosphere would have expanded and there would have been more atmospheric mass loss to space, until ocean heat capacity and atmospheric heat capacity reached a new “equilibrium”, a ~1000 to 1 ratio. That establishes the “benchmark” no-feedback “sensitivity” to atmospheric forcing. As the Earth approaches that limit, the greenhouse portion of the atmospheric effect becomes fixed or saturated, as the specific heat capacity of the atmosphere limits the impact. As the ocean energy decreases due to “other” factors – solar, volcanoes, impact events – WMGHGs have a stronger impact, with a maximum CO2 impact of 6 to 9 C.

      http://climateaudit.files.wordpress.com/2013/07/callendar-1938-logarithm-annotated.png?w=720&h=560

      It is a complex dynamic system that is not linear by any stretch of the imagination. The small range that can be assumed linear, decreases as you approach a set point or “strange attractor”. Pretty neat design, er accident of nature.
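      The “battery charging” point above can be written as a toy first-order energy balance: after a step forcing, the imbalance decays exponentially toward zero as the temperature approaches its new equilibrium. This is only a sketch of that one point, not of the rest of the comment’s argument; the forcing, feedback parameter and heat capacity are assumed round numbers.

      ```python
      import math

      # Toy zero-dimensional energy balance: C dT/dt = F - lam * T.
      # After a step forcing F, the imbalance N(t) = F * exp(-t / tau) with tau = C / lam,
      # so the imbalance runs down toward zero as T approaches F / lam.
      F = 3.7        # W/m2, assumed step forcing (roughly a CO2 doubling)
      lam = 1.8      # W/m2/K, assumed net feedback parameter
      C = 3.0e8      # J/(m2 K), assumed effective heat capacity (~70 m of ocean)
      tau_years = C / lam / (365.25 * 24 * 3600)   # e-folding time, ~5 years here

      for t in [0, 5, 20, 50]:                     # years after the step
          imbalance = F * math.exp(-t / tau_years)
          warming = (F / lam) * (1 - math.exp(-t / tau_years))
          print(f"year {t:3d}: imbalance = {imbalance:.2f} W/m2, warming = {warming:.2f} K")
      ```

      In this toy picture, “saturation” just means the imbalance has been worked off; it says nothing about how large the equilibrium warming itself is.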

      • David Springer

        CO2 warming is largely a dry-land-only phenomenon, which still explains everything except the lack of warming in Antarctica, but that’s a problem for any CO2-warming narrative. Sensitivity is a high figure over very dry land (which includes frozen land) and a low figure over the ocean. It’s due to downwelling infrared evaporating water instead of warming it. The deep ocean is accumulating some heat, and we don’t see it passing through the mixed layer because the heat is entering the ocean from river runoff, not from overhead infrared radiation. No observations are left unexplained by this as far as I know, except the lack of warming in Antarctica.

      • David, intuitively, I agree that water is far more likely to evaporate than warm in response to radiative transfer from the stratosphere, especially with respect to ocean depths in excess of 700 metres. But hey, I’m no scientist!

      • To further clarify: the evaporation process itself will tend to cool the underlying water, as, for example, any Coolgardie safe will do, and I’ll leave the Americans to ponder what this really means :p

  44. Lolwot,
    “There’s no way you can take surface temperature and CO2 records and determine that CO2 didn’t cause the warming. It’s just ridiculous to think that can be done.”

    Sure you can.
    https://wattsupwiththat.com/2013/05/17/an-analysis-of-night-time-cooling-based-on-ncdc-station-record-data/

  45. Black Russian

    Judith : While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results ….

    But does it falsify the 95% confidence levels in the next IPCC report?

    • Black Russian, you write “But does it falsify the 95% confidence levels in the next IPCC report?”

      Precisely. And it is this claim, by the IPCC, of such high confidence levels which, IMHO, ought to be the battleground on which the quality of the next IPCC report is fought. CAGW is a hypothesis, which we cannot prove is wrong. But you cannot base high levels of confidence on something that is merely a hypothesis.

      • But, if AGW is not responsible for all of the many maladies that have been ascribed to global warming, doesn’t that make the world a less predictable, less understandable and more uncertain place than we thought and hoped for when all we had to do is blame Bush and America for all of the world’s problems? And, doesn’t uncertainty just drive us back to God or… worse–e.g., looking out for our own interests because government science has failed the reality test?

      • No hypothesis can be proven right either!

  46. JC:

    While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results.

    Does it not falsify the IPCC’s projection of 0.2 deg C per decade warming for the next two decades?

  47. Well I used to believe what the scientists said about AGW but that was before I looked into the subject myself and realised the Models had to be wrong. Notice, not might be but HAD to be.

    Of course the Models’ forecasts for the 21st century are currently running well off. However, that is not the reason I know they are wrong; that is actually a point in their favour!

    What did the Scientists predict, via their Models, for the 21st Century?

    They predicted a massive acceleration, averaging about four to five times, the warming seen in the 20th century.

    Of course the alarmist scientists and their followers, now that their Models are currently way off, say that, of course, the Models are off; you can’t expect them to match the temperature record on a decadal or even longer period. That is because of natural variation in the Earth’s climatic processes, which can mask and even reverse the true climate signal. They admit that this natural variation (weather) can be so strong that it is not only holding back the forecast of a massively increased forcing effect in the 21st Century, it is also absorbing all the vast pipeline heating that was forecast to kick in at the end of the 20th Century.

    You remember the ‘pipeline’ heating, don’t you? This was the heating that the alarmist scientists were saying, at the end of the last Century, was inevitable even if we had stopped producing CO2 full stop at the start of this Century.

    There is no way that these scientists would have said that natural variation could be so large and strong at the end of the last Century. Their theory and models are built on the fact that it isn’t. However, now that it comes to the crunch, they agree it is, to save their Models. And you know what, I agree with them. The fact that the Models are off so far this century is not any proof that they are wrong. Why is that?

    Well, the Models just output the climate signal; they average out the ENSO-induced natural variation that we see with the PDO cycle, for instance. The temperature record has this natural-variation ‘weather’ signal added to the climate signal it is also measuring.

    So we are comparing apples and oranges. Whilst the Models and Temperature Records both measure the underlying climate signal, the temperature record has an additional cooling or warming signal added to it via ‘weather’.

    Now we can all understand this and the alarmist scientists apparently now agree. The Models, if they are outputting an accurate climate signal, should not, cannot, MUST not match the short to medium term temperature records, for other than those periods when the ‘weather’ signal is neutral.

    So how did I come to immediately realise that the Models are wrong?……………….

    Because the Models, on their hindcast, do match, quite closely, the temperature record of the 20th Century!!!

    Seeing as all the readers of this, and the scientists themselves, must agree that that is impossible for an accurate Model, it follows that they must be wrong.

    Of course we know why they do match the 20th century temperature record, they were tuned to do so by their creators. You can see why they did it.

    One, they didn’t think natural variation had large effects on the climate signal.

    Two, they didn’t even know about some of the ENSO effects, e.g. the PDO cycle was not identified until 1996.

    Three, they probably thought nobody would take much notice of models that couldn’t even match the known temperatures.

    This disconnect, between the Models’ climate signal and the Temperature Record’s climate-plus-‘weather’ signal, should have been even more apparent in the 20th century, according to their own theory. According to them, the average forcing effect of increasing CO2 was much lower in that Century than they say it is in the 21st Century. Therefore, it should have been much easier and more noticeable for the Earth’s ‘weather’ effects to hold back and reverse the underlying climate signal the Models produce.

    In their hubris they overlooked the fact that an orange doesn’t equal an apple, no matter which way you cut it.

    Also, the Models are falsified not by their 21st Century forecasts but by their 20th Century hindcasts, and I defy any alarmist out there to try and stitch them together.

    The facts as stated above are obvious and understandable and blow huge holes below the waterline in the good ship CAGW.

    As far as the Models go they are caught!

    Reality has caught up with them. In defending their forecasts by having to agree that the models’ outputs are necessarily different from the temperature record, they have completely undermined their hindcasts of the last century. To reiterate, an accurate model should not, cannot, must not, except for the shortest time and the very long term, match the temperature records.

    Alan

  48. Judith: You appear to have missed the boat with these words:

    “While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multi-decadal timescales.”

    Attempts to falsify theories and hypotheses lie at the heart of the scientific process. If the models have been falsified to your satisfaction, they can’t be used. Interpretation has nothing to do with it. “Fitness-for-purpose” reeks of the politicization of science – we must deliver the answers that society needs. (We do, of course, use simpler theories like Newtonian mechanics even though we know they have been replaced by improved theories like relativity, because we understand the full range of conditions under which Newtonian mechanics can be trusted to give correct answers. We have no idea how far current models can be trusted.)

    Before climate models are fit for ANY purpose, they need to be revised so that they are consistent with observations. If adjustment of parameters fails to produce a model that is consistent with all observations (as some of the large ensembles of models studied by Stainforth et al suggest), then we need to recognize that we don’t have the computational power or the collection of parameters (for different cloud types, for diffusion of heat in the ocean under different situations, etc) to describe the real behavior of the earth.

    Until there is recognition that climate models have been falsified, the effort of modelers will be directed towards making political use of models rather than improving them.

  49. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  50. Might be a good thread to discuss Robert Pindyck (MIT) criticizing climate models:

    “Model Mockery” (op-ed discussing Pindyck’s work):

    http://opinion.financialpost.com/2013/08/22/lawrence-solomon-model-mockery/

  51. Pingback: Pause tied to equatorial Pacific surface cooling | Climate Etc.

  52. Pingback: Questions Policymakers Should Be Asking Climate Scientists Who Receive Government Funding | Bob Tisdale – Climate Observations

  53. Pingback: Questions Policymakers Should Be Asking Climate Scientists Who Receive Government Funding | Watts Up With That?