Science without method

by Judith Curry

Since people are clamoring for a new thread, let’s talk about this article in the Australian Quadrant entitled “Science without method,” subtitled “Global warming research: whatever happened to the scientific method?”  To review previous Climate Etc. posts on the Scientific Method, click here.

In the first two paragraphs, the article cuts straight to the heart of the matter:

Most of these papers are, of course, based upon the output from speculative and largely experimental, atmospheric models representing exercises in virtual reality, rather than observed, real-world, measurements and phenomena. Which leads to the question “What scientific methodology is in operation here?”

Though much has been written concerning the scientific method, and the ill defined question as to what constitutes a correct scientific approach to a complex problem, comparatively little comment has been made about the strange mix of empirical and virtual reality reasoning that characterises contemporary climate change research. It is obvious that the many different disciplines described as being scientific, rather than social, economic, or of the arts, may apply somewhat different criteria to determine what fundamental processes should define the “scientific method” as applied for each discipline. Dismayingly, for many years now there has been a growing tendency for many formerly “pure” scientific disciplines to embody characteristics of many others, and in some cases that includes the adoption of research attitudes and methods that are more appropriately applied in the arts and social sciences. “Post-modernism”, if you like, has proved to be a contagious disease in academia generally.

FYI, postmodern science (differentiated from postnormal science) is described here, summed up by the following quote: “If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. Postmodern science must therefore overcome the separation between truth and virtue, value and fact, ethics and practical necessity.”

I’m ok with this so far.  Read the article for some history of physics and discussion of the scientific method in this context.  Then the article comes back to climate change and the enhanced greenhouse effect:

Out of this cut and paste “history” of physics, comes the strongest criticism of the mainstream climate science research as it is carried on today. The understanding of the climate may appear simple compared to quantum theory, since the computer models that lie at the heart of the IPCC’s warming alarmism don’t need to go beyond Newtonian Mechanics. However, the uncertainty in Quantum Mechanics which Einstein was uncomfortable with, was about 40 orders of magnitude (i.e. 10^40) smaller than the known errors inherent in modern climate “theory”.

The following two sentences are incorrect.  Newton’s second law, the ideal gas law, the first and second laws of thermodynamics, Planck’s law, etc. are all well-established “assumptions” that provide the foundation for reasoning about the Earth’s energy budget and the transport of heat within it.

Yet in contemporary research on matters to do with climate change, and despite enormous expenditure, not one serious attempt has been made to check the veracity of the numerous assumptions involved in greenhouse theory by actual experimentation. The one modern, definitive experiment, the search for the signature of the green house effect has failed totally.

Oops.  The signature of the Earth’s greenhouse effect is well known and demonstrated by infrared spectra measured at the surface and by satellites (see this previous thread).  The “enhanced” greenhouse effect and the associated feedbacks are what is at issue.  I suspect the author meant to say “enhanced greenhouse effect” in the above sentence.  Further, you can’t conduct an experiment on a natural system such as the Earth’s climate system in the same way you can conduct a controlled experiment in a physics or chemistry lab.

Projected confidently by the models, this “signature” was expected to be represented by an exceptional warming in the upper troposphere above the tropics. The experiments, carried out during twenty years of research supported by The Australian Green House Office as well as by many other well funded Atmospheric Science groups around the world, show that this signature does not exist. Where is the Enhanced Green House Effect? No one knows.

The issue of what is going on with temperatures in the tropical upper troposphere (both in terms of the observations and of the relevant processes in climate models) is an active area of research and needs to be sorted out.  More on this topic in a future post.  Assuming the veracity of the relevant data sets (which disagree with each other in any event) and this interpretation of the mismatch, a discrepancy between climate models and observations in the upper tropical troposphere could result from known inadequacies in convective parameterizations, subgrid entrainment processes, ice cloud microphysics, etc.  Such potential model deficiencies accumulating in the upper troposphere do not invalidate the enhanced greenhouse effect; rather, the discrepancy is motivating increased analysis of, and improvements to, the relevant model parameterizations.

In addition, the data representing the earth’s effective temperature over the past 150 years, show that a global human contribution to this temperature can not be distinguished or isolated at a measurable level above that induced by clearly observed and understood, natural effects, such as the partially cyclical, redistribution of surface energy in the El Nino.  Variations in solar energy, exotic charged particles in the solar wind, cosmic ray fluxes, orbital and rotational characteristics of the planet’s motion together provide a rich combination of electrical and mechanical forces which disturb the atmosphere individually and in combination.  Of course, that doesn’t mean that carbon dioxide is not a “greenhouse gas”, so defined as one which absorbs radiation in the infra red region of the spectrum. However, the “human signal”, the effect of the relatively small additional gas that human activity provides annually to the atmosphere, is completely lost, being far below the level of noise produced by natural climate variation.

The concluding sentence of the above paragraph is an overconfident dismissal of the human signal on climate.  Separating natural (forced and unforced) and human-caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos.  While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics who dismiss a human impact are guilty of equal overconfidence.

So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs? Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information? Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully? Do they look at possible solar influences instead of carbon dioxide? Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?

Of course not. That would be silly. For there is a scientific consensus about the matter, and that should be that.

Well, the IPCC deserves the above criticism to a large extent.  Yes, much research is ongoing to understand discrepancies in climate models and to improve them.  However, in the AR4, their experimental designs and conclusions are dismissive of explanations that involve natural variability.

JC’s conclusion.  I found it pretty surprising to find an article on such a sophisticated topic in the Quadrant, but the more I read the Quadrant, the more it seems that they have some pretty good articles.  This article raises an important issue related to reasoning about complex systems.  From the previous thread on frames and narratives:

The second monster in my narrative is the complexity monster.  Whereas the uncertainty monster causes the greatest confusion and discomfort at the science-policy interface, the complexity monster bedevils the scientists themselves.  Attuned to reductionist approaches in science and statistical reasoning, scientists with a heritage in physics, chemistry and biology often resist the idea that such approaches can be inadequate for understanding a complex system.  Complexity and a systems approach are becoming a necessary way of understanding natural systems. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures.  Complex systems are studied using information theory and computer simulation models.  The epistemology of computer simulations of complex systems is a new and active area of research among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.

So while this article is very interesting and raises important issues, in the end it oversimplifies the issue.


639 responses to “Science without method”

  1. Judith,

    It would be extremely hard to generate a model based on a few hundred years of data out of 4.5 billion years of planet history.
    Most of the past is theory, as there is very little physical evidence that current science can find, and some of the science has been tainted by bad conclusions.

    The path I am following makes a great deal of sense of the evidence that is left behind. Totally measurable with the rotational loss of water at a rate of 0.00025 mm per year on average, or 2.5 mm per 10,000 years.

    • It will make many readers cringe, Joe, but . . .

      . . . modern climatology, funded by Western governments,
      is no more scientific than conclusions of the Flat Earth Society.

      World leaders, Al Gore, and the UN’s IPCC tricked an army of “Nobel Prize winning climatologists” into thinking Earth’s climate and long-range weather are independent of the stormy Sun that heats the Earth, completely engulfs this planet Earth, and sustains our very lives [1-5].

      Political leaders want us to think that they have the power to control Earth’s climate, but they and their followers appear to understand the principles of science no better than members of the Flat Earth Society.

      To understand how the gigantic, powerful Sun controls the tiny ball of dirt on which we live, see [1-5].

      1. “Super-fluidity in the solar interior: Implications for solar eruptions and climate”, Journal of Fusion Energy 21, 193-198 (2002).
      http://arxiv.org/pdf/astro-ph/0501441

      2. “The Sun Kings: The unexpected tragedy of Richard Carrington and the tale of how modern astronomy began” by Stuart Clark (Princeton University Press, 2007).

      The solar eruption that completely surrounded the Earth in 1859 is described in the front flap.

      3. “Heaven and Earth: Global Warming, the Missing Science” by Ian Plimer (Conner Court Publishing Pty Ltd., 2009)

      4. “Earth’s Heat Source – The Sun”, Energy and Environment 20, 131-144 (2009): http://arxiv.org/pdf/0905.0704

      5. “WeatherAction” and the long range weather and climate predictions of astrophysicist Piers Corbyn.
      http://www.weatheraction.com/

      With kind regards,
      Oliver K. Manuel
      Former NASA Principal
      Investigator for Apollo

  2. “I found it pretty surprising to find such a sophisticated analysis in the Quadrant”

    By what measure is it a “sophisticated analysis”? Its main conclusion is wrong, and the support it invokes is likely both wrong and would not support the conclusion even if it were right. It then further imputes nefarious motives to scientists who don’t support the wrong conclusion (which includes yourself, apparently). It spends a lot of time on an irrelevant discussion of quantum physics.

    To put it another way, John Nicol can now claim his article was described by Judith Curry, climatologist and chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, as “a sophisticated analysis” which “raises an important issue related to reasoning about complex systems.”

    An article which concludes the greenhouse effect doesn’t exist and that scientists covered it up.

    • ok, i am going to change the wording to “an article on such a sophisticated topic”

      • Professor Curry,

        The analysis in the Quadrant seems far more sophisticated than reports on global warming in Nature, Science, PNAS, etc.

        I would encourage the editors of these once-respected journals to study and comment on the first sentence: “Most of these papers (on global warming) are, of course, based upon the output from speculative and largely experimental, atmospheric models representing exercises in virtual reality, rather than observed, real-world, measurements and phenomena.”

        With kind regards,
        Oliver K. Manuel
        Former NASA Principal
        Investigator for Apollo

      • Despite all the flaws uncovered, climate science itself is still amenable to the standard scientific method.

        See today’s news story by Dr. David Whitehouse:

        “Is It The Sun Wot Done It?”

        http://thegwpf.org/the-observatory/2864-is-it-the-sun-wot-done-it.html

      • steven mosher

        They make an EMPIRICAL claim that MOST (meaning more than 1/2) of papers are based on models. They offer no evidence for their claim so one cannot accept it on faith as you have. But you like the sentiment expressed, so you agree.

        Some scientific attitude you have.

        Based on some of the other mistakes they made in the article I would doubt their claim.

      • Judith, note that comments on the previous topic include a link to a Quadrant article by Bob Carter, David Evans, Stewart Franks, Bill Kininmonth & Des Moore, who collectively have reasonable credentials to assess climate science and policy. It’s by no means unknown for Quadrant (not “the Quadrant”) to carry good articles.

      • A bit of context from an inhabitant of the land of Oz might help. Quadrant is one of a plethora of “small” magazines originally set up under the auspices of the Congress for Cultural Freedom sharing a sociopolitical niche with mags like Norman Podhoretz’s “Encounter” in the US, which now cater to a neoconservative perspective (this is an observation, not a criticism). While sometimes able to come up with some thought provoking journalism, it suffers from a small pool of writers who tend to ruminate on contrarian themes (this is a criticism as well as an observation).

        Its current editor, Keith Windschuttle, is a onetime Maoist turned neoconservative whose most recent contributions to the Australian intellectual scene have been to question the “consensus” around the contribution of European settlement to the catastrophic decline in our indigenous population, which is often likened to genocide. His methodology as a historian is rather questionable, though I actually believe the Australian track record in our dealings with our indigenous folk is far better than, say, that of the US in its dealings with Native Americans (where a policy tantamount to active genocide seems to have been pursued).

        With regard to AGW, Quadrant tends to lionise Monckton and Plimer. While I haven’t delved into Monckton’s presentations, Plimer’s opus is greatly flawed by dubious referencing (see Monbiot’s debate with Plimer on the Australian ABC, in which Plimer dodges Monbiot’s admittedly aggressive questioning, turning the “debate” into a sterile exchange in which two angry men talk past one another).

        Windschuttle’s judgment when it comes to editorial oversight of science is as dubious as his approach to history. Quadrant’s occasional forays into my own field – psychiatry – have left me wincing.

        I’m a politically conservative soul inclined towards the sceptical/lukewarming end of the spectrum of the AGW debate who dislikes alarmist oversimplification. However, I’m a stickler for accuracy and feel considerable irritation when I see complexities papered over from partisan motives on either side of the debate. Quadrant, I fear, tends to do this all too often even while arguing a perspective towards which I feel a natural sympathy.

      • +1
        We see a lot of oversimplified narratives on both sides of the aisle in climate science. Historically when this happens it is a good indicator that reality is much closer to the mean.

      • Chris, your characterization of Keith Windschuttle’s methods as questionable is ridiculous. Windschuttle is a classical historian, basing his assessments on what is documented and demonstrable, not on a speculative agenda. Hence his demolition of Henry Reynolds and Lyndall Ryan. If you are a stickler for accuracy, as you claim, then you need to exercise some.

      • Keith Windschuttle, when read closely, is one of the most extraordinary cherrypickers I have come across. While his critique of Reynolds and Ryan is not without substance, he often cites the very historians he criticises elsewhere when it suits his purpose to bolster his argument. He dismisses oral history but chooses to give great weight to documentary evidence from the time, such as records from courts and enquiries, as if the latter told the whole story, or at least the only story we can safely believe. Anyone with a modicum of experience with courts of law or public enquiries would instantly know how extensively “edited” such sources are and how much information they exclude.

        His work, IMHO, lacks internal consistency, which I thought a great pity, as I began reading his work from a position fundamentally sympathetic to his thesis but found myself increasingly disconcerted by seeming non sequiturs (it’s too long since I read his work to give specific examples, which would take us way off topic). I would add that I’m still not fond of the so-called “black armband” perspective on Australian history, which Windschuttle critiques. I just don’t like the way he did it, displaying much the same flaws, mutatis mutandis, as the historians he sets out to demolish.

        His approach seems similar to writing a history of the “Climate Wars” and Climategate restricting our evidence to the findings of the Muir Russell and other official enquiries into the emails, the peer reviewed literature and the IPCC, while excluding the output of the blogosphere on either side.

      • batheswithwhales

        Judith,

        For some people it is a big and cherished victory to find even the smallest point, even if it is just a spelling mistake, a slightly wrong choice of wording, etc. We find them in all open forums on the web.

        It is obvious to 99% of the readers and participants here that whether the article is “sophisticated” or not compared to other articles from the same source, or compared to your own expectations of the source, is irrelevant to the issues brought up.

        But great response. Give the trolls their little crumb, or morsel of pleasure, and keep focusing on the issues.

        That’s what we are here for anyway. Not for endless nitpicking.

    • BAZZZZINGA! :-)

    • John Carpenter

      “An article which concludes the greenhouse effect doesn’t exist and that scientists covered it up.”

      Not the same take-home message I got. Sounds to me like you already decided what the article was going to say before you finished reading it. This seems to be the prevalent POV for those who want to close their ears to other possible descriptions of how the climate might work…. and then attribute nonsensical statements to the authors of anything that questions the climate establishment.

      Your post further drives home the type of mentality that John Nicol is pointing to.

      • “Not the same take home message I got.”

        Of course and naturally anyone “taking home” a different message than yours is wrong or better yet

        ” drives home the type of mentality that John Nicol is pointing to.”

        ” This seems to be the prevalent POV for those who want to close their ears to other possible descriptions of how the climate might work…. “

        If he brings the evidence I’m happy to listen. As things currently stand the “No greenhouse effect” idea is in the kook bin.

      • John Carpenter

        “If he brings the evidence I’m happy to listen.”

        I doubt it.

      • So you want evidence of natural forcings? What cave do you live in? I’ll send some over.
        Cheers,
        Big Dave

    • shaperoo,
      Psssst….Did ya read it?
      Didn’t think so.

  3. Ah yes, the “contagious disease” of post-modernism, which any self-respecting scientist should despise. It’s not at all clear to me that the definition you provided of “post-modern science” is what the author had in mind though. “Post-modernism” in the article seems to be a way of referring to the mixing of interdisciplinary methods (namely the dilution of formerly pure sciences by the arts and social sciences) and a disregard for the scientific method. This is bad form in my view, but I’m fairly used to seeing post-modernism used as a punching bag.

    • Formally called “fiction”.

    • “If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. Postmodern science must therefore overcome the separation between truth and virtue, value and fact, ethics and practical necessity.” This is nonsense. The detached observation of reality required to understand natural processes is not dependent on the moral stance you take. However, it can lead you to truth and understanding, which is the basis of any sustainable moral code. The adoption of a particular morality as a prism through which scientific study is undertaken would appear to be a barrier to a disinterested search for truth.

  4. I have no idea what this means:

    If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way.

    Maybe I just don’t speak postmodern.

    • ChE –
      It means that if the scientists screw up too badly they could find a crowd with torches and pitchforks on their front lawn.

      Although IMO that crowd would do better to find the politicians houses first.

    • While I am not familiar with this particular author, the argument appears to be that science must be conducted on a moral foundation or the consequences for the world will not be good. Something like – science without morality is inherently harmful. I could see such a view being called postmodern, but that doesn’t mean all “postmodernists” would agree with it.

      • I’m not aware of any historical link between “morality” and science in the past. Immoral science (e.g. Mengele) is a problem, but amoral science is the way it always was.

        This has a creepy medieval church ring to it. Your science will fit into our framework.

      • David L. Hagen

        Climategate is an example of removing moral values of truth and integrity from science. The consequence is we are left with “anything goes” as long as it favors getting the next grant.

      • No. They stepped over the line between amorality and immorality. They actively circumvented normal ethics.

      • Yes, but in their own minds, they crossed the line between amorality and morality, which I assume is meant by post modern science in the article: once you start thinking that science must work for the good of society, you might end up on a slippery slope, because everyone does not have the same concept of “good”.

        Obviously some scientists thought that catastrophic climate change was imminent, so science should become a tool to stop the disaster.

        In this way, even bad science can be justified as long as it is for the “common good”.

      • Bingo. We’re all aware of and sensitive to the dangers of immoral science. What’s less obvious is the perniciousness of moralistic science. That’s what gets them into ethical trouble. If there’s a “greater good” that can be rationalized, hang on to your wallet, because their ethics will go out the window.

      • Let me ask you a question: How is the idea you’re articulating different from a moral belief system in which the dispassionate search for the truth is the central moral value? It may be that I’m simply misunderstanding your point. I suppose another way to get at my question is to ask what the distinction between “amoral” and “immoral” might be.

        Then let me ask another question. Which was Mengele, and how do you know?

      • The problem, of course, is noble cause corruption.
        This has permeated every other social movement that has embraced science as its rationale.
        AGW is chock full of this problem.

      • The Road to Hell is paved with Good Intentions. Good Intentions have their roots in morality. Authority is Morality in action.

        Folks the AGW debate will NEVER have a conclusion because the fanatics will never see the attempted falsification of AGW as an amoral natural function of science, they will continue to see it as an Immoral act propagated by Immoral people. The AGW moderates will simply go along with their ‘leaders’. Observations back this up immensely.

        AGW is a Religion, the only sane course of action is to prevent the fanatics from tithing us and negatively influencing our lives and futures. This is tricky because the fanatics have convinced the AGW moderates that everyone is going to Hell if the fanatics don’t get their way. They are going to tithe us and make our lives more difficult whether we like it or not.

        Resistance is the key. Stop going along with them. Stop your State Govt’s from funding this religion. Including, no make that especially learning institutions. They have enough followers to make their priests wealthy without forced government altruism and indoctrination.

    • Latimer Alder

      Scuse me for interrupting and all, but isn’t the whole reason for doing science – rather than praying for enlightenment from the gods or studying the entrails of dead animals or any of the other daft things people get up to – that it is a method that doesn’t involve ‘morality’?

      Like Gil Grissom says, ‘it’s all about the evidence’. Not about whether you like or dislike the conclusions. Or whether you work your burette with the love of Yahweh or the worship of Gaia or any other idea in your heart.

      When did they stop teaching this in junior science classes?

        “When did they stop teaching this in junior science classes?”

        After they stopped teaching this at colleges and universities, methinks.
        But then again, so many of the cAGW proponents don’t seem to have heard of Lysenkoism either – or if they have, they see nothing wrong with that and even seem to try and emulate it.

      • Wow, that is a very weird idea you have there.

        No, I don’t think “the whole reason,” or even a very small part of the reason, for doing science is to avoid things involving morality. It seems fairly clear to me that the main reason for doing science (aside from the fact that it’s fun) is that it predicts the future better than the dead-animal type activities you mention.

        Morality remains every bit as important as it ever was. It can’t be replaced by science, because it serves a very different purpose. Either one without the other is a formula for human misery.

      • Latimer Alder

        Perhaps I misphrased what I meant.

        Attempts to understand the world prior to science were based upon ‘morality’ in the sense of ‘it says so in the Bible’, ‘the Gods predict it’, ‘Aristotle said so’, ‘Allah has decreed it so’ etc etc.

        Science frees us from such false considerations. It is thus morally neutral….it does not depend on what you or I or Fred Bloggs or Jenny Sixpack believe because of their faith and what they’ve been told to believe.

        We agree, I think, that science should score a null on morality. Those who claim to be doing something with a moral dimension aren’t doing science. They are doing morality. Different stuff.

      • Ah, well–that does make a lot more sense. But it’s really, really wrong.

        Firstly, it’s not really true that “before science” people understood the (physical) world in terms of morality, or, at least, not universally so. The Persian king, Xerxes, did famously have the seas flogged when they refused to obey his command, delaying his invasion of Greece. But the Greeks certainly didn’t think of the physical world that way. Not that they thought about it with the modern scientific method, either, of course. But they understood the difference between objective and subjective reality. Indeed, I believe the Greeks are credited with the first formal articulation of that distinction, with the Ancient Greek aphorism: “Fire burns here and in Persia, but the laws differ.” So I think you’re mistakenly conflating “moral reasoning” with “everything that’s not reductionist analysis,” or “not rational,” or something to that effect.

        I’m not sure, but it seems to me that you’re also conflating “morality” with “belief.” Of course, the classic “fact-value distinction” teaches that there is no objective reality to be found within moral questions, so it seems like you’re speaking from within that paradigm. This particular bit of belief, like the others I mentioned, can be traced to a particular place in our history. It happened when the initial euphoria of the enlightenment philosophers died off. They believed that reductionist analysis was going to finally answer those thorny moral questions with the certainty of a mathematical proof, but it eventually became apparent that, while reductionist analysis would tell you how to build an internal combustion engine that you could predict with great certainty would work, when applied to moral questions you got much less satisfactory results. Of course, the only logically justifiable conclusion from that data is that reductionist analysis doesn’t answer moral questions, but the one our society drew was that moral questions can’t be answered (and so it’s pretty much dumb to take them seriously).

        So I believe we do very much agree that moral reasoning and science are two very different things, but I fear we have different opinions about the value of moral reasoning.

      • Latimer Alder

        Up until a moment ago I wasn’t aware that I had an opinion at all about the value of moral reasoning :-). Nor that I was ‘speaking from within a paradigm’. Us IT/Chemistry guys leave all the high-faluting stuff to the philosophers. We just get on with making things work and stuff. Too much introspection about values and things gets in the way of getting the job done.

        But thanks for devoting so much of your time to my pretty throwaway remarks. I hope it was worth it.

      • Heh…You’re being ironic, but believe it or not it is quite possible to have an opinion about something and not know it. In fact, it’s pretty ordinary to do so.

        In fact, I think your comment here pretty much illustrates my point about how our culture has relegated moral reasoning to the back of the intellectual bus. Not to take away from your scientific accomplishments. But like I said, either one without the other is a formula for human misery.

        You might consider what, exactly, are the things you’re “trying to get done.”

      • Latimer Alder

        Thanks for your advice to

        ‘You might consider what, exactly, are the things you’re “trying to get done.”’.

        Since I’ve temporarily forgotten how to spell ‘patronising bastard’, I’ll have to answer another way:

        Well y’know like ‘stuff’ and all.

        Maybe like building those computers and all that the clever people use to tell us what to be scared about and how evil we all are because we want to keep our houses warm in winter.

        Or providing the Internet so that academics can have philosophical discussions of no practical value whatsoever about morals from their nice centrally heated/air conned offices while the rest of us try to keep public transport going in the cities (while still being the aforementioned evil amoral bastards making mother gaia weep with sadness about our lack of care for her).

        Just earning a living and making things happen sort of stuff. Nothing like as important as auto navel examination.

        What was your conclusion when you indulged in such introspection? Values shape up OK? Top notch on the old morality front? Ticketyboo on the piles? Or does sitting on so much moral and intellectual superiority cause you pain?

      • Latimer –
        So far, so good. Now how’d you like to take on “amoral science”? :-)

      • Latimer Alder

        As a freelance, I’ll take on any commission for a suitable consideration….

        But as a proof of concept, I think there needs to be a distinction between the science and the morality (or amorality or immorality). The science is morally neutral. How that knowledge is applied may have moral attributes. But the two are distinct and different. Climatologists seem to not only confuse them, but get the order the wrong way round: viz:

        Moral issue: Save the world from my perception of evil (i.e. humans I do not agree with). Strategy: scare them witless so I can prevent them doing whatever I don’t like. Tactic: Produce climate scare stories. Work product: ‘science’ that ‘proves’ the climate scare.

        Ref: Sir John Houghton: ‘we must announce disasters or nobody will listen’

        Interesting snippet: Houghton and Gavin Schmidt were both at Jesus College, Oxford at the same time……….It isn’t a very big place AFAIK…200 undergraduates?

    • “If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. ”

      How does this follow? It is nonsense. The author appears to be trying to slip one by, equating amoral with immoral. World of difference.

  5. The IAC’s review of the IPCC clearly identifies “biased treatment of genuinely contentious issues” as one among several deficiencies in the IPCC’s modus operandi. It is the alternative hypotheses (rather than GHGs) which have been ‘swept under the carpet’ in the interests of supporting the agendas of sponsor governments. The vast majority of the 194 nations participating in IPCC expect to receive vast sums of money as the main outcome of this process. In fact huge sums of money have already been pledged and in part committed to the UN by several nations.
    The IPCC’s focus on CO2 seems to rely largely on assigning a large value to climate sensitivity together with large positive feedbacks (directly/indirectly to CO2) in their computer models.
    For example Shaviv has demonstrated a model which involves solar magnetic variability modulating cosmic ray flux, low altitude troposphere ionisation, cloud nucleation, cloud cover and albedo. When tested against historical data, the residuals are HALF those of the IPCC models. And he does this without needing to assign large values for climate sensitivity and feedbacks. Increased CRF during passage of the solar system through the spiral arms of the Milky Way (with star formation and supernovae) would account for glaciations. Did IPCC give any serious consideration to this in AR4?

    • This is the problem. EVERY area needs serious attention until proven ABSOLUTELY incorrect.

    • The change in tone is self evident to readers of the full IPCC reports, from the parts written by scientists (which sound sensible) to the parts designed for the politicians (overloaded with spin).

      The sad part is that even the scientists within the IPCC who acknowledge scientific uncertainty http://environmentalresearchweb.org/cws/article/opinion/35820 have not been very vocal about their view, being somehow beholden to a higher authority – probably a non-disclosure agreement signed by IPCC participants as is the norm with government bodies.

      Beyond that, somehow the public face of climate scientists outside IPCC aligns more with the IPCC view than the uncertain view.

  6. I see the article as having more credibility than Dr Curry’s commentary on it. (Still waiting to see a clear, testable statement of the core physical phenomenon thought to underlie atmospheric warming.)

  7. An interesting question raised in the article is of course “What scientific methodology is in operation here?” concerning the climate models.

    Can the results validate the method?

    If you do 100 runs of a climate model and check for correlation after 50 years, and one of the runs seems to fit pretty well – does it prove that both method and the assumptions fed into that particular run were correct?

    Of course not. This is science done exactly backwards – instead of testing a hypothesis through exposure to real life (lab experiments, observations) climate modelers are testing real life through exposure to multiple hypotheses: so if the one run (among hundreds) with x aerosol forcing and x solar forcing, etc, fits observations best, then two things are proven:

    1. The model run (hypothesis) in question, its methodology and the particular inputs of forcings etc were proven right.
    2. The climate system is proven to have characteristics resembling the forcing scenario of the model run (hypothesis).

    So they seem to validate each other. But in fact nothing is validated. The dynamics of the climate system during the period of the experiment might as well have been caused by an ensemble of other factors, unknown at the time, or even the same factors in a different mix.

    And this will be the case for the foreseeable future.

    So where is the value in this exercise?
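    The circularity described above can be sketched in a few lines of toy code (all numbers invented; this illustrates the selection effect, not any real GCM): generate 100 “runs” with randomly guessed trends, pick the one that best matches noisy “observations”, and note that the winning run’s parameter need not be the true one.

```python
import random

random.seed(1)

# Invented "truth": a trend plus noise. Each "model run" is a random guess
# at the trend plus its own noise. None of these numbers come from any
# real climate model; they exist only to show the selection effect.
years = range(50)
true_trend = 0.015  # degrees/year, invented
obs = [true_trend * t + random.gauss(0, 0.1) for t in years]

runs = []
for _ in range(100):
    guessed_trend = random.uniform(-0.02, 0.05)  # this run's "forcing" guess
    sim = [guessed_trend * t + random.gauss(0, 0.1) for t in years]
    rmse = (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5
    runs.append((rmse, guessed_trend))

# The best-fitting run's parameter need not equal the true one: noise in
# both observations and run lets a wrong parameter win the beauty contest.
best_rmse, best_trend = min(runs)
print(best_trend, true_trend)
```

    Run it with different seeds: the best-fitting guessed trend wanders around the true value, which is exactly why a good hindcast from one run among many validates nothing by itself.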

    • This planet has produced a few anomalies that take centuries to show a difference in order to get a correct reading.
      Fast-tracking science generates bad science by not covering EVERY avenue that interacts or is viable.

      • Well, fast tracking certainly seems to be going on. After all, that’s what the IPCC is all about: constructing a “certainty through consensus” in a couple of decades, where a normal scientific process would take a century to produce anything at all “certain”.

        And we keep hearing how climate models have predictive skill. Just a few days ago Michael Mann held a lecture where he stated that Hansen’s 1988 model basically got it right, seemingly proving both that climate models are reliable, and that the forcing scenario of CO2 employed by the model was pretty close to reality.

        And this was climate modeling in its very infancy. Quite a visionary, Mr Hansen, cramming the global climate system into his IBM in 1988, pressing the button and hey presto!

    • Extremely well put. The value is that millions of people are coerced into believing that which they will never understand and have confidence in this belief because they feel it is moral. This has never been a scientific debate, it has always been one of morality. That is why it will never end until it is only the Believers who are funding it.

      This idea of Postmodern Science is a crock and extremely dangerous. Science has not been nor will it ever be moral or immoral. Nature knows nothing of right or wrong behavior, it only dictates whether those behaviors are possible.

      Once you introduce ‘morality’ you automatically introduce immorality which must be addressed by those who claim to be ‘moral’. Morality requires an ‘authority’ to coerce moral behavior. We’ve seen the outcome of numerous dangers in ‘Scientific Authority’. Morality has everything to do with how Humans interact with each other, but NOTHING to do with how the Universe behaves.

      One reason why I think the AGW Believers like this Postmodern crap is that ‘moral’ has another definition tied to certainty. ‘Moral’ also means PROBABLE. A moral certainty is an intuitive probability. One that requires no conscious rational process or thought. Sounds about right for AGW.

      These charlatans have already bet their careers, they must continue their bluff and ‘Postmodern’ charades to continue to ‘deserve’ their incomes. They sure as heck are not earning them.

      • Science has not been nor will it ever be moral or immoral.

        Your post is thought provoking. The (conventional) scientists I know generally think that postmodernism is one of the tools being used against them (that any perspective or view by non experts has equal value to the peer-reviewed research they produce) and are also generally hostile to scientists that act as advocates or campaigners while still conducting research full time, because they think it suggests a conflict of interest. From what I can gather, most climate scientists think they are doing normal science in the sense of puzzle solving and incremental discovery.

        I think that climate change activists might like the element of ethics in relation to the need to choose between an emphasis on some research rather than spend money on other things (research in breast cancer). The right, particularly the religious right, might think that some elements of science conflict with other knowledge and producing abortion pills, or animals with human DNA might be considered immoral science. I think many people would consider experiments on human subjects or attempts to clone humans as immoral science, but this is probably not what you mean here.

        http://mitigatingapathy.blogspot.com/

      • The (conventional) scientists I know generally think that postmodernism is one of the tools being used against them

        Not surprising since morality has been a tool of force since recorded history.

        (that any perspective or view by non experts has equal value to the peer review research they produce)

        The value of peer review is bastardized when the function of the review is anything other than rigorously testing methods and conclusions. Expert is a relative term. If a non-expert properly fails to falsify someone’s work, or does so with ease, the value comes with that procedure being able to be repeated by experts and non-experts alike. Is it not? Is the ability to falsify not the mission of a review?

        and are also generally hostile to scientists that act as advocates or campaigners while still conducting research full time because they think it suggests a conflict of interest.

        Makes sense if there is any correlation between the advocacy and a desired result.

        From what I can gather, most climate scientist think they are doing normal science in the sense of puzzle solving and incremental discovery.

        From what I gather, most climate scientists , and their advocates (politicians and activists), are engaged in postmodernism and disdain ‘conventional’ science since it restricts their behavior and funding. There is a ‘moral certainty’ and the authority that is inherent with morality that drives the entire Climate Science Industry. It is an industry that must keep up the AGW charades (desired results) in order to keep relevancy. This is no longer, if it ever was, a search for truth, it is a crusade.

        I think that climate change activists might like the element of ethics in relation to the need to choose between an emphasis on some research rather than spend money on other things (research in breast cancer)

        I have no issue with activists being ethical, with their own money. The US needs a moratorium on Federal grants for science during which period random and exhaustive audits are performed to find the value to all 300 million citizens. We’re all in this together after all, we should all benefit from our collective, though highly distributed, efforts.

        The right, particularly the religious right, might think that some elements of science conflict with other knowledge and producing abortion pills, or animals with human DNA might be considered immoral science

        Who gives a damn what the right, whether religious or not, thinks about amoral results of a repeatable test? 2+2=4, if anyone disagrees, then they are free to and if they do so while in a contract then a judge can be used to determine if the complaint requires easement or some other remedy.

        I think many people would consider experiments on human subjects or attempts to clone humans as immoral science, but this is probably not what you mean here.

        Involuntary experiments are an infraction against one’s personal property, namely oneself. Voluntary ones are common and often necessary to that individual. For me cloning comes down to funding: if you don’t want to participate in the funding of human cloning, then you should be free from it.

      • batheswithwhales

        No.

        You got it wrong. Postmodernism was originally a movement in the arts (painting, literature), where “modernism” had been a break away from the classic, confined classifications and genres of the trade.

        From there it made its way into the social sciences, particularly sociology, anthropology, where it made its impact through such concepts as cultural relativism, where each culture should not be judged through the glasses of an outside observer, but rather in the context of the culture in question. And where the fight against racism and discrimination between people became an ingrained principle of the science itself.

        Women’s studies / cultural feminism is another science area deeply affected by postmodernism, where it is frequently claimed that gender is only a construct of upbringing and the pressures of society, and that biology has nothing to do with the personae we become.

        No one with real-life experience believes it, but go to a university near you, and you will find a well-funded professor claiming that you like cars instead of makeup because society expects it of you.

        This is an extremely rough guide to post modern science, I know, but I think it is enough to make my point:

        The real (formerly hard) sciences have to some degree followed in the footsteps of the social sciences and the arts towards post modern science in looking for a “mission”, and upon finding this “mission” (saving the world from catastrophic climate change) they abandoned all concept of scientific truth-seeking, and went for all-out advocacy.

        (The fact that this new “truth” also gave them unlimited funds, tripled incomes, media attention, fame, TV-time, radio time, and made headlines in local newspapers all over the world, probably didn’t reduce their enthusiasm for this new-found way of doing science.)

  8. What could be more reductionist and simplistic than the formula equating radiative forcing (RF) to the change in surface temperature (ΔTs):
    ΔTs = λRF, where λ is the climate sensitivity parameter

    But we have complexity to burn in the various GCMs, which are all designed to model what we don’t understand fully, which run away unless their inputs are carefully massaged, which don’t model cloud effects properly.

    I like Tomas Milanovic’s understanding of complexity http://judithcurry.com/2010/11/12/the-denizens-of-climate-etc/#comment-11807
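    For contrast, here is how little arithmetic that reductionist formula involves. A back-of-envelope sketch, assuming the standard simplified CO2 forcing expression RF = 5.35·ln(C/C0) W/m² and λ ≈ 0.8 K per W/m² (the value implied by a 2xCO2 sensitivity near 3 °C); both numbers are stated assumptions, not derivations:

```python
import math

# Back-of-envelope use of dTs = lambda * RF for a doubling of CO2.
lam = 0.8                      # K per W/m^2, assumed value of lambda
rf_2xco2 = 5.35 * math.log(2)  # simplified CO2 forcing for C/C0 = 2, ~3.7 W/m^2
delta_T = lam * rf_2xco2       # the entire "model"
print(round(rf_2xco2, 2), round(delta_T, 1))  # prints: 3.71 3.0
```

    One multiplication and one logarithm; everything contested lives inside the single number λ.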

    • Is the system predictable or not? If not, can I go outside and play now? My homework’s done.

  9. While the author of the highlighted article likely meant for the reader to presume the answer to each of the questions in its final italicized paragraph to be ‘no’, the actual answer to each and every question is ‘yes’.

    What’s more astounding is that most of the questions posed are totally irrelevant. Lean, as she is brought up in the text, has been published and has been part of committee after committee looking into the solar influence, both by total irradiance and spectral irradiance, over and over again. It’s not like she’s on the outside looking in. She has a paper cited almost 700 times.

    What’s the citation record for a climate change paper?

    She is also a proponent that human forced warming is a major contributor to the temperature record, in accord with the IPCC. So I don’t know why she’s brought up at all.

    I don’t get it. I thought the article was about the ‘scientific method’. Yet, no such method was used in producing the article. Irony at its best I guess.

  10. Here is a prime example of the sort of outrageous behaviour (by academics) which brings climate science into disrepute! http://joannenova.com.au/
    Just follow the money trail. Also, note the posting below the one about Art Robinson and his 3 now disenfranchised, ex-PhD student offspring.
    (Apologies if this seems off topic but the money trail is IMO inextricably enmeshed with the bias in climate science.)

  11. Judith wrote: “The final concluding sentence of the above paragraph is an overconfident dismissal of the human signal on climate. ”

    Not what he said. He said any human signal is small and lost in the noise of the natural variability. The point is that, using the scientific method, you can’t find it. Could it be there? Maybe. He doesn’t say there isn’t any human influence.

    • I concur stan.
      The ratio of natural carbon to human produced carbon in the carbon cycle is evidence of this.

  12. The following requires repeating!

    The one modern, definitive experiment, the search for the signature of the green house effect has failed totally. Projected confidently by the models, this “signature” was expected to be represented by an exceptional warming in the upper troposphere above the tropics. The experiments, carried out during twenty years of research supported by The Australian Green House Office as well as by many other well funded Atmospheric Science groups around the world, show that this signature does not exist. Where is the Enhanced Green House Effect? No one knows.

    • The irrefutable hallmark of real science is that it involves testable and potentially falsifiable hypotheses. It is now relevant to ask: “What is testable and potentially falsifiable about the mainstream hypothesis of AGW being the dominant cause of global warming?”
      The answer seems to be something like “We’ll adjust our model to conform to the known data” thereby ‘moving the goal posts’.
      Has the scientific method metamorphosed?

      • Well, it should be obvious that what is testable and falsifiable about AGW being the dominant cause of global warming is whether, if we keep putting more GHGs into the atmosphere, global temperature increases steadily beyond anything known in the last few hundred thousand years.

        The problem is you don’t have a control planet to live on if the null hypothesis of “no AGW” is rejected.

        In the real world (that is, where science is done), it’s been obvious for decades that there are bezillions of reasonable scientific hypotheses that can’t be tested directly with known technology. Real science therefore looks for indirect ways of testing these hypotheses (think of giving huge doses of chemicals to rats, instead of minute doses of chemicals to millions of humans).

        It is true, as implied in this thread, that the epistemology of computer simulations is a novel and problematic area. There are plenty of actual experts working on this topic. But it is false that the only evidence for AGW (or a climate sensitivity of around 3) comes from GCMs.

      • Latimer Alder

        ‘But it is false that the only evidence for AGW (or a climate sensitivity of around 3) comes from GCMs’

        Care to list some of the rest of the evidence you assert? Measurements of what actually happens and stuff? Experiments?

      • Well, response to volcanic eruptions is one; pattern matching with non-GCM climate models is another. You might consider reading the relevant IPCC chapters and some of the literature therein. I can send you some papers too if you don’t have access to them.

      • Latimer Alder

        Thanks for your offer Paul.

        I’d be delighted to learn from any method that doesn’t rely on first creating a model which uses a sensitivity (fudge) factor. And then using real world measurements to match the model output by adjusting the fudge (sensitivity) factor.

        Because this method of reasoning puts the cart before the horse by essentially assuming that the model is correct in all other respects and so ‘the sensitivity must be….x’. I tried to do this in my MSc thesis until my very wise research supervisor forcefully pointed out the error of my ways.

        Please also send any links to useful papers. Outside academe and university/industry libraries, it is not possible to shell out twenty quid each time to read an irrelevant paper. There aren’t that many twenty quids to go around.

      • Well, I’m not sure we’ll agree about what counts as useful, but I would start with
        Knutti, R., & Hegerl, G. C. (2008). The equilibrium sensitivity of the Earth’s temperature to radiation changes. Nature Geoscience, 1(11), 735-743. doi: 10.1038/ngeo337.

        You might also look at Annan, J. D., & Hargreaves, J. C. (2009). On the generation and interpretation of probabilistic estimates of climate sensitivity. Climatic Change, 104(3-4), 423-436. doi: 10.1007/s10584-009-9715-y.

        This latter paper argues that very high estimates of climate sensitivity (e.g., over 4ºC) are very unlikely but supports the ~3ºC best estimate, based on a Bayesian methodology.

        I can send you a copy of either or both if you send me your email (I’m paul.baer@gatech.edu).

      • Paul,

        Shaviv has demonstrated that the IPCC’s favoured value for climate sensitivity is a gross overestimate. The IPCC’s model predicted a temperature drop of 0.3 degree following a volcanic eruption whereas the actual, observed drop in temperature was 0.1 degree.
        Likewise recent work has revealed that the water vapour positive feedback is actually less than the value ASSUMED in the IPCC models which also renders false the assumption of constant relative humidity with rise in temperature.

      • OK, I’ll have to learn about Shaviv’s paper. Do you have the full citation?

        And I’ll have to bone up on water vapor too, I guess!

      • Paul,

        This may be of interest. I strongly recommend these presentations; here are the links:

        Shaviv

        Courtillot

        Shaviv’s presentation illustrates the nexus between intrinsic solar magnetic variability, solar wind, cosmic ray flux, ionisation in the lower troposphere, cloud nucleation and changes in cloud characteristics (total water content, extent of cloud cover and albedo). While it is acknowledged that correlation may not necessarily reflect causation two pieces of independent evidence are provided viz

        (1) Long term variations – over the past 550 million years, climate proxies (O18:O16 ratios in brachiopods in ocean sediments) correlate with cosmic ray flux (CRF) as measured in iron meteorites, which are subject to higher CRF during the passage of the solar system through the spiral arms of the Milky Way [where we experience a higher cosmic ray flux due to Type II (core collapse) supernovae associated with star formation regions].

        (2) Short term variations – Forbush decreases – several-day-duration decreases in CRF due to solar flares and coronal mass ejections correlate with changes in cloud nucleation and cloud parameters – total water content, extent of cloud cover, decreases in albedo and short term (days) increases in the rates of sea level rise due to thermal expansion.

        Shaviv’s model accomplishes all this without needing to invoke any net positive feedback (from CO2/H2O) or a high value for climate sensitivity (which is treated as a free parameter). The variations attributable to this chain of events are of the order of 1 +/- 0.35 W/m2. (c.f. the IPCC GCMs use the variation in Total Solar Irradiance of 0.17 W/m2, which is insufficient to account for the changes in the energy balance of the oceans.) I.e. the cosmic ray flux, modulated by the solar wind, is the dominant mechanism of ionisation in the lower troposphere, thus influencing cloud nucleation and amplifying the climate variations by varying the cloud characteristics. Because of the small particle size, the surface area:mass ratio is increased by the cosmic rays, thereby increasing the albedo of the clouds formed.

        These variables (solar magnetic variability, solar wind, cosmic ray flux, cloud variations) have not been included in IPCC’s GCMs and, compared with the latter, the residuals (the difference between Shaviv’s model and empirical observations) are half i.e. his model provides a better fit to the empirical (observational) data.

        This is supported by the work done by Svensmark et al and provides a rational explanation/mechanism for the glaciations associated with passage of the solar system through the spiral arms of the Milky Way – a galactic context nicely complementing the Milankovitch effect. Incidentally, Shaviv also demonstrates that the IPCC GCMs overestimate (several fold) the drop in temperature following volcanic eruptions, i.e. they grossly overestimate the climate sensitivity.

        Courtillot demonstrates a tight correlation between solar magnetic variability and regional surface temperatures for Europe and the USA (also illustrating the futility of the concept of ‘global temperature’).

        Note: An earlier scientific paper on that very same subject by Nicola Scafetta accepted for publication in the Journal of Atmospheric & Solar-Terrestrial Physics on 12th April 2010 is also well worth a read – find it here at

        http://arxiv.org/PS_cache/arxiv/pdf/1005/1005.4639v1.pdf

      • Gyptis – I’ve started looking into the work of Shaviv and Courtillot, and – as I’m sure you know – both are outside the mainstream of climate science. I’ll read their papers, but I’m curious what reason you personally have to believe that their work should be considered better than “mainstream” work.

      • steven mosher

        see the knutti paper I linked to. models are but one of the ways sensitivity is estimated.

        the fact that you ask for experimental data shows you have no understanding.

        A controlled experiment would look like this.

        Take the earth. Double the CO2 instantaneously. Hold all other forcings equal: no volcanoes, no change in the sun, no change in aerosols, no change in other GHGs. Wait a thousand years. Measure the response. Then repeat the experiment.

        So, we are left with estimating the parameter from a variety of sources. Historical sources.

        1. Paleo records
        2. modern observations

        And we can confirm those by building a model of the climate and doing “what if” analysis.
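        As a sketch of the non-model route, here is the sort of arithmetic a paleo estimate boils down to. The numbers are illustrative placeholders of the kind often quoted for the Last Glacial Maximum (about 5 K colder under roughly 6.5 W/m² less total forcing from ice sheets, GHGs and dust), not a citation:

```python
import math

# Estimate the sensitivity parameter from a past climate state, then
# convert it to per-doubling terms. Both inputs are assumed round numbers.
dT = -5.0      # K, assumed LGM cooling
dF = -6.5      # W/m^2, assumed total LGM forcing change
lam = dT / dF  # implied sensitivity parameter, ~0.77 K per W/m^2
s_2xco2 = lam * 5.35 * math.log(2)  # equivalent 2xCO2 sensitivity
print(round(lam, 2), round(s_2xco2, 1))  # prints: 0.77 2.9
```

        The uncertainty in both dT and dF is of course the whole argument; the sketch only shows why no GCM is needed to get a number in this range.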

      • steven,
        Is there any evidence that CO2 levels have ever doubled instantly?
        Do you think using that as a baseline is valid, and if so how come?

      • steven mosher

        the point isn’t whether or not CO2 has ever doubled.

        The point is estimating the response IF it doubles.

        It’s perfectly acceptable as a baseline. you are simply calibrating the system response.

        Take your car on a dyno. Set the pedal to idle: x mL of gas per sec. Wait for the speed to stabilize. Record that.
        Then, double the flow. Wait for the speed to stabilize. Measure that. What’s the response to doubling? How much faster does the car go?

        You are characterizing system response. nothing more.
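        The dyno procedure is just a step-response measurement, which can be sketched with a first-order system (all parameters invented; the point is the method, not the physics):

```python
# Characterize system response: stabilize at one input level, double the
# input, wait for the new steady state, and read off the change.
tau, gain, dt = 10.0, 2.0, 0.01  # response time, steady-state gain, timestep
u = 1.0
y = gain * u                  # already stabilized at "idle"
u = 2.0                       # double the input
for _ in range(100_000):      # integrate y' = (gain*u - y)/tau until settled
    y += dt * (gain * u - y) / tau
print(round(y, 2))  # prints: 4.0 (doubled input -> doubled steady output)
```

        The whole argument upthread is about whether the climate behaves anything like so well-conditioned a system, but this is what “calibrating the system response” means.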

      • steve,
        But you cannot instantaneously get the rpm’s to double.
        The engine can’t do that.
        And on a dyno you are measuring one thing.
        CO2 acts in a complex of dimensions that all vary with or without CO2 doing anything, as well as any influence the ghg effect of CO2 adds.
        My point is that the explanation I just outlined happens to fit reality:
        That the impact of CO2 is indistinguishable from the typical variability of climate and climate manifestations.
        IOW I see no relevance in the CO2-based predictions of doom compared to reality.
        Perhaps part of it is due to making linear deterministic and instantaneous assumptions of CO2 impact?

      • hunter,

        so, in summary, the answer was that real world doesn’t matter.

      • Paul

        It was my training that the Null Hypothesis doesn’t depend on the field; it has no default value. The Null Hypothesis depends on the claim.

        In John Nicol’s case, he makes quite a few claims by assertion, such as in the phrase “should global warming resume..”

        Is Nicol’s claim that global warming is real and proven?

        This is a logical necessity for the phrase to be meaningful at all.

        Is John Nicol’s claim that global warming has stopped?

        There are statistical tests for accepting or rejecting each of these claims, corresponding sets with rather opposite Nulls, no?

        Do we choose as our null the first implicitly assumed hypothesis, “There is no ‘euphemistic climate change’*,” or its opposite, “Global warming is real”?

        (*I find the expression, ‘euphemistic climate change’ unsatisfying. What unpleasant or embarrassing category is ‘climate change’ meant to be a euphemism for? Death? Lewdness? Drug use? Disease? A criminal past?

        If John Nicol’s saying people who study climate are lewd sick felonious addicts not long for this world, he’s picked just the right word.

        Perhaps Nicol may have meant ‘synonymous’ or ‘equivalent’ or ‘more broad’ if he didn’t mean to imply shameful associations?)

        Failing to reject a null hypothesis does not prove it true; we’d generally want to choose our null hypothesis to reduce the costs of errors of specification and decision, however as those costs, a) are largely a matter of policy; and, b) have never been competently quantified in any way a skeptical analysis would deem accurate, what null to elect depends on facts not in evidence.

        In such cases, it’s typical for scientists and engineers, policy analysts and well-meaning amateurs to resort to conservative principles, such as the precautionary principle: which errors are most likely to leave us in a good position to revisit the bad decisions we might make at some future time when we have better information?
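        The dependence on the choice of null is easy to see in a toy trend test (invented series; a rough two-sigma interval stands in for a proper t-test). Depending on the noise, the same fitted interval can contain both zero and a posited warming rate, so which claim we “fail to reject” is entirely a matter of which null we elected:

```python
import random
import statistics

random.seed(0)
# Toy series, invented numbers: a 0.01/yr trend plus noise.
n = 30
x = list(range(n))
y = [0.01 * t + random.gauss(0, 0.15) for t in x]

# Ordinary least-squares slope and its standard error, by hand.
mx, my = statistics.mean(x), statistics.mean(y)
sxx = sum((t - mx) ** 2 for t in x)
slope = sum((t - mx) * (v - my) for t, v in zip(x, y)) / sxx
resid = [v - my - slope * (t - mx) for t, v in zip(x, y)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
lo, hi = slope - 2 * se, slope + 2 * se  # rough 95% interval

# Whether we "fail to reject" depends entirely on which null we chose:
reject_no_warming = not (lo <= 0.0 <= hi)
reject_posited_rate = not (lo <= 0.02 <= hi)
print(round(slope, 4), reject_no_warming, reject_posited_rate)
```

        Neither outcome “proves” its null; the decision about which claim bears the burden of proof is made before the data ever arrive.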

        “..authors’ prime expertise is often found to be not.. as one might have anticipated..”

        I guess he really did mean euphemism, and to imply everyone who disagrees with his narrow view of climate is an embarrassing or unpleasant wretch, often with insufficient expertise.

        “provides for abundant research funding, from which they feed, more easily than other areas of research of greater interest and practical use”

        Ayup. John Nicol clearly views those whose research gives answers he does not approve of as criminal lower life forms.

        Why would anyone repeat such dreck?

        I’ve been described as having a nasty manner for the things I say about people who I think do bad science, but I hope if I ever so tar a whole profession so damningly, that I’ll have armed myself with footnotes and citations, references and source material, or sound logic or good reason, or anything more than propaganda, innuendo, scaremongering and smarm.

        Sorry to have bothered you with the point I was about to make, as it seems I’m basing it on a trashy bit of invective not really worthy of discussing.

        So, are you following the hockey play-offs?

      • Paul Baer

        You wrote:

        Well, it should be obvious that what is testable and falsifiable about AGW being the dominant cause of global warming is whether, if we keep putting more GHGs into the atmosphere, global temperature increases steadily beyond anything known in the last few hundred thousand years.

        Maybe you feel this is “obvious”, but it really hasn’t been demonstrated based on empirical data, Paul.

        (And “a few hundred thousand years” is an awful long time.)

        But I think we already have a fairly good test of whether the alarming AGW premise, as promoted by IPCC, is validated or falsified by the observed data.

        The premise: AGW, caused principally by human CO2 emissions, has been the primary cause of 20th century warming and, hence, represents a serious potential threat for humanity and our environment.

        The test:

        The GMTA record used by IPCC (HadCRUT).

        The ARGO record of upper ocean temperature.

        The record of atmospheric CO2 as measured at Mauna Loa.

        The prediction: the IPCC forecast of warming caused by AGW of 0.2C per decade for the first decades of the 21st century.

        (This happens to be equivalent to the calculated theoretical “equilibrium” GH warming one would expect from the measured CO2 increase at Mauna Loa from 2001 to 2010, using IPCC’s model-based 2xCO2 climate sensitivity of 3C.)
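The arithmetic behind that parenthetical can be checked with the standard logarithmic forcing approximation; the Mauna Loa values below are approximate round numbers I've assumed for illustration, not official figures:

```python
import math

# Back-of-envelope check of the "equivalent" equilibrium warming claim.
S = 3.0           # IPCC model-based 2xCO2 equilibrium sensitivity, deg C
co2_2001 = 371.0  # ppm, approximate Mauna Loa annual mean
co2_2010 = 390.0  # ppm, approximate Mauna Loa annual mean

# Equilibrium warming for a CO2 change, logarithmic in concentration:
# dT = S * ln(C / C0) / ln(2)
dT = S * math.log(co2_2010 / co2_2001) / math.log(2)
print(round(dT, 2))  # roughly 0.2 deg C over the decade
```

So the stated equivalence holds to within rounding, given those assumed concentrations.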

        The observations show that there was no actual warming, either in the atmosphere or the upper ocean. Latent heat (net melting ice, net evaporating water) is too small to make much difference and there is no real evidence that the deep ocean has warmed. IOW our planet lost energy over this period, while atmospheric CO2 increased to record levels. (Trenberth referred to this “unexplained lack of warming” as a “travesty” and suggested in an interview that it might be explained by energy radiated “out to space” with clouds acting as a “natural thermostat”.) This sounds perfectly reasonable to me.

        I agree fully that CO2 is a GHG, that GHGs trap outgoing LW radiation, which should result in warming (all other things being equal), but the observed data seem to show that “all other things” were NOT equal and, therefore, don’t look too good for the “alarming AGW” premise as promoted by IPCC.

        But maybe we should wait another 10 years and see if it looks any better then.

        What do you think?

        If we still have essentially no warming after 20 years would this falsify the IPCC premise of alarming AGW?

        Or do you think it would take 30 years?

        Max

      • Well, there are several claims there that I quite justifiably ought to have good answers to. I’m not optimistic that we can come to an agreement, but I’m willing to give it a shot.

        I will say this up front: if there were no balance of evidence of AGW demonstrable between 2000 and 2020, I would think a good case had been made that the fears of “alarmists” (like myself) had been shown to be overstated. I’m not prepared to say now anything much more specific than that, because it would depend on the various kinds of evidence (which I expect to continue to be fragmentary and even contradictory) in fact accumulated.

        However, we are not likely to agree easily on what counts as evidence. I will say categorically that “temperature in 2020 being less than temperature in 2000” would not itself count as disconfirming evidence. As many people with different perspectives have pointed out here and elsewhere, there is a great deal of natural variability in the system; I would consider a question like “what is the difference in the five year mean at 2000, 2010, and 2020?” to be a necessary adjunct.
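As a sketch of what that five-year-mean adjunct might look like in practice (the annual series below is synthetic, a trend plus noise, purely for illustration):

```python
import random

random.seed(2)

# Synthetic annual "temperature anomaly" series: linear trend plus noise.
years = list(range(1996, 2023))
temps = [0.02 * (y - 1996) + random.gauss(0, 0.1) for y in years]

def five_year_mean(center):
    """Mean of the five annual values centred on a given year."""
    i = years.index(center)
    return sum(temps[i - 2 : i + 3]) / 5

# Single-year comparisons are dominated by noise; five-year means
# recover the underlying trend far more reliably.
for c in (2000, 2010, 2020):
    print(c, round(five_year_mean(c), 2))
```

The smoothing does not prove anything about attribution; it only reduces the chance that a single noisy year drives the comparison.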

        However, this (flat temperatures) plainly could not “falsify” AGW in any complete sense. As someone has demonstrated here, if it’s true that there is a 60 year cycle imposed on a rising trend, with the down trend beginning in 2000, you might well expect a 30 year period of near level temperatures, followed by a dramatic increase in the next 30 years. (Clearly we’ll need a new name, as it will no longer resemble a hockey stick :^) Indeed, without any compelling explanation for the trend of the century from 1900-2000, a reasonable prediction is that this trend will continue, and that indeed temperatures will begin to shoot up dramatically in 2030.
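The flat-then-steep behaviour described above is easy to reproduce with a toy trend-plus-cycle model (parameter values are arbitrary assumptions, with the cycle peak placed at 2000 so the down-phase starts there):

```python
import math

def temp(year, trend=0.007, amp=0.12, period=60.0):
    """Linear warming trend plus an assumed 60-year cycle peaking in 2000."""
    return trend * (year - 1900) + amp * math.cos(2 * math.pi * (year - 2000) / period)

# During the down-phase the cycle cancels the trend; during the up-phase
# the two add, producing a dramatic 30-year rise.
flat = temp(2030) - temp(2000)
steep = temp(2060) - temp(2030)
print(round(flat, 2), round(steep, 2))  # -> -0.03 0.45
```

With these assumed magnitudes, thirty nearly level years are entirely compatible with an unchanged underlying trend.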

        It is the trend that worries me. It does in fact seem to me – and I’ve studied the problem pretty closely by now – that anthropogenic GHG emissions are the most likely explanation of that trend. I don’t need to argue it’s “very likely” to be very worried. (This, of course, is because I also consider the possible consequences of even another degree (C) of warming to be quite, well, alarming.)

        To finish up for the moment: I do not know the details of the ARGO ocean measurements. It’s plainly incumbent on me to know more about it if others think it is a strong piece of evidence about AGW.

      • But Paul, you’re not a scientist, so why are you a self-confessed “alarmist”? Do you just accept the “scientific” consensus? And what is wrong with a degree of warming? Warming and CO2 are both good for plant and animal life.

      • Well, I am partly a scientist. And I’ve been reading papers about climate change for over 15 years, especially about carbon cycling and climate sensitivity. But, quite frankly, it’s substantially because I have been a participant in academia for my whole adult life, and I actually have substantial trust in the basic quality control within “establishment” science. Certainly more than I do in the quality control of the variety of “contrarians” who are producing non-mainstream research results.

        It is also the case that I think that the impacts of a degree or two of global warming could be – not will be, but could be, with substantial probability – very harmful. I am concerned that the melting of land glaciers, the arctic ice cap and permafrost could already be locked in with current levels of GHGs, and that the consequences could be very bad. Unfortunately, again, these kinds of hypotheses are not subject to what so many people here refer to as “the scientific method” – there’s only one experiment you can do, and if the null hypothesis of “no AGW” is wrong, you don’t get a second chance.

        To put it yet a third way, I’m much more averse to the risks of climate change than I am to the risks of strict carbon mitigation. This raises a different set of questions (and goes back to the economics posting that brought me back into this blog). About which, more another time.

        But the fact that a lot of it does have to do with trust raises the questions that were beginning to be addressed in the polyclimate thread: who, if anyone, could be trusted by “both sides” (a gross oversimplification, admittedly) to be an honest juror?

      • Latimer Alder

        @paul

        You say

        ‘I have been a participant in academia for my whole adult life, and I actually have substantial trust in the basic quality control within “establishment” science’

        Do you not understand that one of the primary reasons why there is such a preponderance of scepticism among educated scientists and engineers outside the academic world is that we look at the ‘quality control standards’ of academe and see them as ludicrously trivial and superficial compared with the standards pertaining in industry and elsewhere in the outside world. That we’ve spent our professional lives working within really tight evidence-led projects – with serious actual consequences to thousands or millions of people if we get it wrong. Where data is kept and archived by law, where failure to comply means jail time and career suicide.

        And then we see a bunch of lab-based dilettantes with no QC systems worth the name managing to get a non-validated run of a model to work and then declare that they fully understand something complex like the climate. By design (or possibly by accident) they throw away the supporting data and then have the effrontery to claim that we are far too ignorant to understand their magnificent craft…because they are Climate Scientists..and we are not.

        And you academics can’t understand why we’re sceptical.

        Do your work to the same standards as the outside world – without moaning and groaning that you’re all far too important to bother about such trivia. Once you’ve done it for ten years and got some consistent results..that have been externally audited and reproduced …just like we have to do.. and once you’ve built up a portfolio of work so derived, then maybe we’ll start to think that you’ve learnt a bit about QC.

        Up until then your processes are a joke.

        Latimer: suppose I accept your premise that QC in academia is inadequate. What if you accepted my premise that the prima facie evidence is that we have already put an unreasonably large amount of GHGs into the atmosphere? If we had to do a QC process in a hurry that would provide enough confidence to justify a mitigate/don’t mitigate decision, what would you do? How many people would you assign? How much money would you spend? Keep in mind that the reason academic QC is limited is partly because resources are scarce.

      • Latimer Alder

        @paul

        First I don’t accept your argument that academic QC is inadequate because of lack of resources. That is the reflex squeal of grant-funded people everywhere.

        I think it is far more likely that QC is inadequate because you institutionally don’t care about whether you are right or wrong. You care about getting a paper published, getting it mentioned by others and getting the next grant. And so you see adequate QC as an unnecessary burden and don’t even pay lip service to it. The paper’s the thing! Not the truth. Once the paper’s published you have the box ticked and can move on.

        That is not a personal criticism, it is a direct consequence of the way the academic reward structures are designed. But it is a f….g awful way to design those structures to get to the bottom of a possibly difficult problem like climate (if problem it really is).

        In other fields outside academe, reward structures are different and scientists and engineers are judged on being consistently right (the bridge design was built and didn’t collapse, the medicine performed in real life as it did in the tests..the aeroplane flew and performed as designed…..the implementation of the new IT system had the predicted effects on finance and customer satisfaction etc etc). You do not get the brownie points just for writing a headline-catching paper. They don’t come until your predictions have been validated and verified and been shown to be true.

        And just to be sure you might just get audited by seriously unpleasant people to make sure that you followed the rules and didn’t cut corners. That’s just part of life…to be tolerated if not embraced. Anyone outside academe seen trying to resist external audit is effectively raising a big white flag saying ‘look at me – I’ve got something to hide!!!’

        Your other point is the classic call to arms ‘I don’t care what you do ..but just do something’ or ‘Do it quick, but don’t bother doing it right’

        I’ve seen absolutely no evidence that ‘climate change’ is an urgent problem that needs fixing, despite thirty years of people writing a rag-tag of increasingly hysterical academic papers about it. Sea-level rise hasn’t inundated even the most low-lying lands. Polar bears have not been wiped out. Al Gore’s beachside apartments are still on the dry side of the beach.

        But if it were to be so, the first thing I’d do was to review the organisational design for the effort needed to understand and solve the problem. I think it very unlikely that the correct way to do so would be to have a host of disparate teams beavering away at whatever bit of the cake they happened to find interesting in an uncoordinated way. And certainly not to reward them just for writing papers and running away from the consequences. It’s also extremely unlikely that such a design would have an IPCC-like political coordination function where each group jockeys for influence and power, independent of the need to actually understand the problem.

        But these things are so discredited in climatology that they will probably die a death anyway. I don’t imagine that anyone will bother to read IPCC5, let alone take any notice of its conclusions. It is already a debased currency.

        Whatever better structure is designed, it’ll have reproducibility, responsibility and ‘ownership’ at its heart. If you become the worldwide expert on topic A, the price you pay for that status and influence is that you become its ‘champion’. You have to go out and persuade others that you are right…not just those who agree with you. You have to be able to justify every last item of data and every little bit of reasoning.

        And I’d employ a QA team composed of the nastiest, toughest, most devious and seriously unpleasant bastards I could find to keep the experts honest.

        Many of today’s practitioners are likely hopeless cases and will fall by the wayside. The mental change from paper publishing in the cloistered world of academe to the tough and nasty world of proving your case and standing by it will be beyond them. They are set in their ways.

        Those who can’t pass an audit will find other pastures. Those who confuse science with advocacy or running their own personal propaganda websites will be required to shape up or ship out. But good people doing good solid work will thrive. We’d have proper training for climatologists..including a deep understanding of correct scientific processes.

        Once bedded in (5 years??) we can start to do some proper work on climate..with reliable data, sound processes and hence draw good conclusions. In another 5 years we should begin to put our feet firmly on the floor with some preliminary conclusions and the next generation of scientists can take over.

        It’d be a tough job to make those changes happen. But the current structures just aren’t fit for the supposed importance of the task at hand. Maybe good enough for understanding the lifecycle of the lesser undulating Mexican fly toad..where nobody in the real world gives a s..t about the outcome.

        But ludicrously inadequate for studying ‘the most important problem humanity has ever faced’.

      • Paul Baer

        Thanks for your response.

        As I understood you, the absence of a “visible” warming trend over a 20-year period of observation would apparently not yet give you reason to not “be very worried” about AGW.

        IMO this would mean that your “worry” (a reaction linked to the emotion of “fear”) is rather one of “faith” or “belief” in a hypothesis, even if this hypothesis is not corroborated by actual physical observations.

        Do you see it this way, or do you still believe that your “worry” would, in that case, be based on “scientific reality”?

        Max

      • As I said, a flat trend for twenty years would lead me to think it more likely that the climate sensitivity is low, but what impact it would have on my overall views would depend on what other evidence had accumulated in the meantime, concerning ocean heat measurements, satellite measurements of clouds, etc.

        Again, as I noted in my comment, quite a few people have pointed out that there is evidence of a substantial 60 year cycle imposed on a steady rising trend. And – absent any other compelling explanation of the rising trend – I would think it a reasonable conclusion that you would expect warming to pick up dramatically again in 2030, and for the century level trend to continue. Wouldn’t you? If not, why not?

        Paul, Latimer writes “The paper’s the thing!” – indeed, as Michael Tobis, who seems to be typical of climate academe, has said here – finding for the null hypothesis does nothing for your cite count.

      • Latimer Alder

        A fine example of institutional and organisational bias.

        If nobody is going to get brownie points (papers, citations, kudos) for finding out that there really isn’t anything there, then – surprise, surprise – nobody is going to examine the case too hard. But they will spend a lot of time and effort looking for the slightest sign of a positive signal. Because they can publish a paper about it…and that is the sign of admission to the Climatologist’s Club.

        People act in the ways that their institutions expect and that they get rewarded (in the widest sense) for. Academic work has an inbuilt bias to publish only positive results and ignore/suppress any neutral or negative stuff.

        And guess what..surprise, surprise…that’s what people do.

        Which is another reason why this particular organisational/reward model is unfit for discovering the truth about ‘climate change’

      • Paul,
        In the rational world, no evidence means no crisis.
        Why pick 2000 – 2020?
        Why not pick 1995 – 2015?
        Why not pick the last 170 years?
        2020 is conveniently far away to focus on the sales pitch of apocalypse and not on the boring reality of today and yesterday.

      • You have missed the point entirely. Even IPCC scientists are now admitting that they cannot distinguish or quantify warming from natural causes as opposed to anthropogenic warming.

      • What and who exactly are you referring to by “IPCC scientists are now admitting”? The IPCC has never claimed to quantify this precisely; even statements like “it is very likely that the majority of 20th C warming is from anthropogenic GHG emissions” are pretty darn vague (and forgive me, I’m not looking up the exact quote.)

      • Paul,
        here is the link where you will find the full citation.
        http://judithcurry.com/2011/04/07/separating-natural-and-anthropogenically-forced-decadal-climate-variability/

        Amy Solomon’s name is prominent on AR4

      • Paul,

        Citation: Solomon, Amy, and Coauthors, 2011: Distinguishing the Roles of Natural and Anthropogenically Forced Decadal Climate Variability. Bull. Amer. Meteor. Soc., 92, 141–156. doi: 10.1175/2010BAMS2962.1

        http://judithcurry.com/2011/04/07/separating-natural-and-anthropogenically-forced-decadal-climate-variability/

        Amy Solomon’s name is prominent on AR4

      • Thanks, that looks like an interesting paper. It’s not obvious that it contradicts the IPCC, but it’s not obvious that it doesn’t. I look forward to reading it.

        However, I do think that you’re mistaking Amy Solomon for Susan Solomon, who was the co-chair of WGI for AR4. Amy Solomon’s name does not appear among the contributors.

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/annexessannex-ii.html

      • We have control planets. Mars and Venus have high CO2 atmospheres. Adjusted for the distance from the sun, their atmospheres should be warmer than earth at similar pressures if the GHG Theory is correct.

        They are not. At similar pressures, the atmospheric temperatures of Venus, Earth and Mars vary with their distance from the sun, not their composition. The surface temperatures are driven by the pressure of the atmosphere at the surface, and the distance from the sun.

      • ferd,

        ‘We have control planets. Mars and Venus have high CO2 atmospheres.’

        You’re not very familiar with scientific controls, are you?

        The single most important terrestrial process in climate (average weather) is the Coriolis force on Earth. It accounts for the vast majority of the atmospheric dynamics we observe daily, which are then captured in climate via the averaging process.

        The Coriolis force is tied to the rotation of the earth about its axis. To compare Earth and Venus, we’d have to ask, then: how do these rotation rates compare? There are 365 Earth days in one solar orbit. There are 2 Venus days in one solar orbit. Actually, a day on Venus owes more to its solar orbit than to its own rotation.

        Therefore the Coriolis force is MUCH smaller on Venus than on earth. If the Coriolis force is driving most of what we observe on earth, then we can’t really compare earth’s climate to Venus’s climate. There is no way to account for this difference.
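For scale, the rotation-rate difference can be put in numbers via the Coriolis parameter f = 2Ω·sin(latitude); the day lengths below are round figures assumed for illustration:

```python
import math

# Rough comparison of planetary rotation rates, which set the
# Coriolis parameter f = 2 * Omega * sin(latitude).
sidereal_day_earth_s = 86_164          # Earth's sidereal day, seconds
sidereal_day_venus_s = 243.0 * 86_400  # Venus rotates once per ~243 Earth days

omega_earth = 2 * math.pi / sidereal_day_earth_s
omega_venus = 2 * math.pi / sidereal_day_venus_s

# Coriolis parameter at 45 degrees latitude on each planet
f_earth = 2 * omega_earth * math.sin(math.radians(45))
f_venus = 2 * omega_venus * math.sin(math.radians(45))

print(round(f_earth / f_venus))  # Earth's Coriolis parameter is ~240x larger
```

A factor of a couple of hundred is why rotation-dominated dynamics on the two planets are not directly comparable.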

        Mars is an interesting case. It’s much smaller than earth, with a gravitational field less than half as strong. But it rotates about its own axis at nearly Earth’s rate. So I think the Coriolis force should be fairly strong.

        But there’s no water. There is less water in the atmosphere on Mars than CO2 in the atmosphere on Earth. So there is no hydrological cycle. Without that, there is no way to compare the atmospheric dynamics.

        Nice try though.

        In the context of a discussion of what is testable/falsifiable, it should be pointed out that the phrase “climate sensitivity” references the change in the equilibrium average global temperature at Earth’s surface. As the equilibrium temperature is not observable, claims regarding the magnitude of the climate sensitivity are not testable/falsifiable.

      • Terry,

        ‘As the equilibrium temperature is not observable, claims regarding the magnitude of the climate sensitivity are not testable/falsifiable.’

        I think that comment should result in your banishment from commenting ever again on this blog.

        The temperature of a gas, like the atmosphere, is a proxy for the average kinetic energy of the particles that make up that gas. In the case of the atmosphere, those particles are mostly molecules. The kinetic energy is related to the translational motion of those molecules.

        Those motions can be measured at any time we want. The bulk status of thermal equilibrium plays no role in whether or not those motions are ‘observable’. The molecules are moving around at some rate and we can observe that rate via a thermometer.

        Therefore, temperature is ALWAYS observable. I didn’t think that this thread would get to the point where I would have to write that statement, yet here we are.

      • maxwell

        We agree that the temperature is observable. However, it is the equilibrium temperature that figures in the notion of the climate sensitivity (aka equilibrium climate sensitivity) and the equilibrium temperature is not observable.

        Perhaps you are unfamiliar with the term “equilibrium temperature” as used in climatology. It is unrelated to the notion of “equilibrium” in thermodynamics and is synonymous with the term “steady state” as used in engineering heat transfer. Equilibrium temperatures are a consequence of holding all of the various forcings constant and waiting an unbounded amount of time. Though we can think about these temperatures, we can’t observe them.
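The distinction can be sketched with a toy zero-dimensional energy balance (every number below is an illustrative assumption): after a forcing step, temperature relaxes toward the equilibrium response but only reaches it as a limit, never at any observable finite time.

```python
import math

# Toy energy-balance relaxation after a step forcing F. The equilibrium
# response dT_eq = F * S is approached asymptotically -- it is a limit,
# not a measurement. All parameter values are illustrative assumptions.
S = 0.8     # sensitivity parameter, K per (W/m^2)
F = 3.7     # forcing step, W/m^2 (roughly a CO2 doubling)
tau = 30.0  # relaxation timescale, years

dT_eq = F * S
for years in (10, 30, 100, 300):
    dT = dT_eq * (1 - math.exp(-years / tau))
    print(years, round(dT, 2))
print(round(dT_eq, 2))  # -> 2.96, never directly observed at any finite time
```

Any observed temperature is a transient value on that curve; the “equilibrium temperature” is inferred, not measured.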

      • Terry,

        I see. Sorry for the confusion and jumping on you.

        With all of the confusion on this thread, I began to see the real possibility someone would challenge basic thermodynamics.

        Cheers.

    While the article certainly misses several points, it does show what happens when you try to move from science to policy. Every Tom, Dick and Harry is going to dust off their calculators or slide rules to try to figure out why they have to pay through the nose for something that doesn’t seem to be happening in the catastrophic way it is presented. Especially when the proposed solutions are little more than Jane Fonda regurgitated wishful thinking and legitimate errors are found in the iconic published science.

    MT seems to think that just making a poor choice of statistical method is no reason to question the abilities of a scientist. Well, a screw up is a screw up; don’t be trying to change the world if you are a screw up. Climate scientists would do well to distance themselves from the screw ups. Which might just get Tom, Dick and Harry to go back to 12 oz curls and football.

    • There is a method?

      ‘Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models. One curious aspect of this result is that it is also well known that the same models that agree in simulating the 20th century temperature record differ significantly in their climate sensitivity. The question therefore remains: If climate models differ in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

      The answer to this question is discussed by Kiehl (2007). While there exist established data sets for the 20th century evolution of well-mixed greenhouse gases, this is not the case for ozone, aerosols or different natural forcing factors. The only way that the different models (with respect to their sensitivity to changes in greenhouse gasses) all can reproduce the 20th century temperature record is by assuming different 20th century data series for the unknown factors. In essence, the unknown factors in the 20th century used to drive the IPCC climate simulations were chosen to fit the observed temperature trend. This is a classical example of curve fitting or tuning.

      It has long been known that it will always be possible to fit a model containing 5 or more adjustable parameters to any known data set. But even when a good fit has been obtained, this does not guarantee that the model will perform well when forecasting just one year ahead into the future. This disappointing fact has been demonstrated many times by economic and other types of numerical models (Pilkey and Pilkey-Jarvis 2007).’
      http://www.climate4you.com
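The quoted caveat about adjustable parameters is easy to demonstrate numerically: a degree-5 polynomial (six free parameters, a crude stand-in for a tuned model) fits six observations of a noisy linear trend exactly, yet its one-step-ahead forecast can be badly wrong.

```python
import random

random.seed(0)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# "True" process: a linear trend plus noise, observed at six times.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.5 * x + random.gauss(0, 0.3) for x in xs]

# Degree-5 polynomial: one parameter per observation, so the hindcast
# is perfect by construction (Vandermonde interpolation).
coef = solve([[float(x ** j) for j in range(6)] for x in xs], ys[:])
poly = lambda x: sum(c * x ** j for j, c in enumerate(coef))

hindcast_err = max(abs(poly(x) - y) for x, y in zip(xs, ys))
forecast_err = abs(poly(6) - 0.5 * 6)  # one step beyond the fitted record
print(hindcast_err)   # ~0: the fit "explains" the record perfectly
print(forecast_err)   # typically large: perfect hindcast, no forecast skill
```

A perfect fit to the record, obtained by tuning free parameters, carries no guarantee whatsoever about the next data point.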

      And this is before chaotic instability in the models is considered.

      ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ http://www.pnas.org/content/104/21/8709.full

      It has all been said many times before. A warning, however, for Dallas and others. Dynamical complexity implies the possibility of catastrophic change, abrupt change, tipping points, non-linear change etc. Trifling with the Dragon Kings is hardly a prudent course. http://web.sg.ethz.ch/wps/pdf/CCSS-09-005.pdf

      • “Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models.” Yet, as I understand it, the very large majority of them don’t “reproduce the 20th century temperature record”, they more or less reproduce the 20th century temperature anomaly record. Most of the models run several degrees hot or cold so far as absolute temperature is concerned: see http://rankexploits.com/musings/2009/fact-6a-model-simulations-dont-match-average-surface-temperature-of-the-earth/.

      • Willis Eschenbach

        Kiehl 2007 is available here.

        w.

      • Keith Jackson

        Willis,
        I have extended Kiehl’s analysis with a simple two-hemisphere analytical model. This was done because I have long been concerned that many GCMs use large aerosol cooling effects to offset high model sensitivities: the concern is that aerosol effects are concentrated in the NH, but observations show the NH to be warming faster than the SH, where aerosol effects are much smaller. A usual excuse is that the greater expected SH warming is being offset by heat absorption into the larger SH ocean. Recent Argo observations show that this is unlikely to be valid. The upshot is that climate sensitivity is most likely lower than 2 degrees C, possibly much lower (my best guess 1 to 1.5). I do some Monte-Carlo analyses to show how observational uncertainties influence these conclusions. I’d be happy to furnish a copy of a paper on this to you or others who might be interested if we can figure out how to make a connection. By the way, I’ve much enjoyed many of your pithy and to-the-point observations on this whole circus.

      • No doubt Dragon Kings are not to be trifled with. It is a bit difficult to make good decisions if you only focus on the unthinkable though. You would never get out of bed because of dread. I am a little concerned that more people don’t support more actions that tend to hedge their bets, on both sides.

        I am shocked at the poor statistical choices, especially in paleo and recently in Antarctica’s modern-era temperature reconstruction. The number of not-all-that-accurate peer reviewed papers is a touch troubling. I understand it is a new science, but with all the clamor to change the world NOW, I would think a little more robust peer review is in order. If that means throwing a few mathematically challenged “experts” under the bus, so be it. What is the saying: lead, follow or get out of the way?

      • Give me ten good reasons why I should get out of bed.

        The uses of science are so appalling on both sides of the climate wars. Each side mouths half-understood concepts in an idiomatic scientific jargon. I won’t say I understand it any better – but taking it less seriously is a pre-requisite for intellectual growth. Hell of a lot more fun too.

        The policy issue is separate from the science. I decided long ago that changing the composition of the atmosphere might not, ipso facto, be the most prudent course. Since we have not the wit to determine the outcome of the great atmospheric experiment caution suggests that it be limited to the extent feasible. Dragon Kings in 1976/1977 and 1998/2001 merely reinforce my native caution.

        The policy responses are many and varied – multiple paths and multiple objectives. I can’t see what the problem is. Reduce black carbon and tropospheric ozone for health and agricultural benefits – as well as making major progress with anthropogenic forcings. Conserve and restore ecosystems. Conserve and restore agricultural soils by adding carbon. Provide health, education, safe water and sanitation to stabilise population. Halve at least the incidence of malaria and HIV. Encourage free trade and good economic governance.

        The climate wars are a distraction at best. That I personally am called a sceptic by one side and an alarmist by the other – is just amusing. That in Australia we have wasted a generation of opportunities for biological conservation while quibbling about inconsequentials – more and more species going to the wall, mostly from feral invasions – is a crying shame. The lost opportunities for humanity globally are a tragedy of staggering proportions.

    • After spending a couple of weeks reading this blog, it’s become increasingly clear to me that the fundamental disagreement here is about burden of proof. A large majority of the posters believe that the data so far does not falsify the null hypothesis of “no AGW”, and they probably won’t be satisfied that there is AGW unless global temperatures rise several more degrees. The much smaller number of posters who support the mainstream consensus believe that there is enough evidence to support a null hypothesis of “CO2 doubling will cause dangerous climate change”, but as a practical matter, given the noise in the signal, there is little hope of falsifying this in a conventional statistical manner either in a short period of time. The logical experiment from this perspective is to stop GHG emissions, but this turns out not to be a cheap experiment.

      • Restate that in fuzzy logic, and you’re more-or-less correct.

        http://en.wikipedia.org/wiki/Fuzzy_logic
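        The fuzzy-logic restatement suggested above can be sketched in a few lines using the standard Zadeh operators. The degrees of belief assigned here are hypothetical placeholders chosen only to show the mechanics, not estimates of anything.

```python
# Standard Zadeh fuzzy operators: AND = min, OR = max, NOT = 1 - x
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# Hypothetical degrees of belief (placeholders, not measurements):
d_warming = 0.8   # "the record shows warming beyond noise"
d_anthro = 0.6    # "that warming is anthropogenic"

# Degree of truth of "warming AND anthropogenic":
d_agw = fuzzy_and(d_warming, d_anthro)   # 0.6
# Degree of truth of the complementary "no AGW" proposition:
d_null = fuzzy_not(d_agw)                # 0.4
```

The point of the restatement: instead of a binary accept/reject of a null hypothesis, each side holds a partial degree of truth, which matches the "more-or-less correct" framing better than crisp logic does.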

      • Paul –
        The logical experiment from this perspective is to stop GHG emissions, but this turns out not to be a cheap experiment.

        That’s likely the understatement of the century.

      • Bill Collinge

        Paul,

        Having tuned in to this blog for more than a few months now, I mostly agree with your very concise summary. However, as far as the logical experiment goes, that raises the question “are there other logical experiments?”, with geoengineering springing to mind. What if it is cheaper, even factoring in all externalities (supposing we could do that), to geoengineer via stratospheric sulfates or space mirrors? Shouldn’t we do that then?

        The arguments I see against the latter are often, but not always, applications of the precautionary principle. However, I feel like the precautionary principle can be applied in reverse to the GHG problem from the skeptical point of view (first, do no [economic] harm) or words to that effect. In this context, the burden of proof can and will continue to be debated and I don’t see an easy answer on the horizon. Distributional equity issues are on the table as well.

        I’d appreciate your additional thoughts or Dr. Curry’s.

        Bill

      • No, Paul.

        The “logical experiment” is NOT “to stop GHG emissions” and see what does (or doesn’t) happen.

        We already have the “experiment” under way (see my above post).

        Max

      • I think you missed my point: depending on your null hypothesis, there are two different “logical” experiments. If you think the null hypothesis is “no AGW”, it’s logical to just keep on emitting. No steady warming, no falsification. But – if you’re wrong – there’s no control planet left.

        If you think the null hypothesis is “AGW”, you stop emissions. If temperatures keep rising, it probably wasn’t AGW. Then you worry about what to do about rising temperatures (which is one reason lots of people are talking about adaptation as a “no regrets” alternative).

        The economic consequences of reducing emissions rapidly are of course a major concern, and people vary in their beliefs about what would happen. My personal view is that this is a much preferable experiment because the economic impacts of mitigation are subject to human choice, whereas the climate response to emissions is not.

      • Paul –
        If you think the null hypothesis is “no AGW”, it’s logical to just keep on emitting. No steady warming, no falsification.

        No – it’s NOT logical to just keep on emitting – but it’s what will happen anyway. Logical would be to convert to nuclear for electricity, convert cars to electric for local use but keep diesel/gasoline for longer distance transportation (this would require a heavy development program) AND decentralization of the grid, using solar and wind for local power generation where practical. Who ever told you humans were logical?

        Note – this is NOT necessarily the alarmist/green position because it envisions a different “mix”/philosophy.

        But – if you’re wrong – there’s no control planet left.

        There’s neither logic nor evidence that that would be true. Only fear.

        If you think the null hypothesis is “AGW”, you stop emissions. If temperatures keep rising, it probably wasn’t AGW.

        In which case you “might” have a planet – or not. Depending on other factors.

        But you would have no viable human civilization.

        Preference for the planet over humans is no different than preference for a political philosophy over humans. Witness Stalinist USSR among others – but that would be only a preview of the stop emissions scenario.

        Humans are not entirely logical. But neither are they entirely illogical. Deliberate policy design is a constant of modern life.

        And I don’t think that there needs to be only one alarmist/green position. Do you spend any time advocating for the energy transition you think desirable, or do you think it is hopeless or somebody else’s problem?

        To be clear, when I say “there’s no control planet left” I’m not implying that in fact the planet or the people on it will be destroyed, only that you will be left with the consequences, and they may be quite unfriendly. I do, frankly, fear what could happen. There is evidence of various kinds that provides reason for such fear.

        The claim that there is no viable human civilization without constant or rising GHG emissions seems to contradict your earlier idea of a largely nuclear plus renewables transition.

        The Stalinist Russia metaphor seems to me a bit stretched. Personally I think that reducing the risk of AGW dramatically is the pro-people thing to do. It might not be “pro personal freedom to pollute”, however, and making it “pro-poor” would take a bit of redistributive policy. But that’s doable if we choose to.

      • Paul -
        To be clear, when I say “there’s no control planet left” I’m not implying that in fact the planet or the people on it will be destroyed, only that you will be left with the consequences, and they may be quite unfriendly.

        That will eventually happen regardless of anything you or I or the entire human race does. There are two parts to that statement.

        First – you would have us “stop emissions”. Wonderful. How? When?

        Specifically, how are you going to stop the emissions of the Chinese, Indians and other Third World countries? Do you think they’re going to abandon their people and cultures to the poverty they’ve already experienced for the last several thousand years when the vision of what the developed nations have been (and are yet – if not in the future) stands before them and has been already internalized?

        Do you have any understanding of the depth of commitment of the Chinese to developing a technological society? How many nuclear reactors, coal plants, wind farms, automobiles and roadways – along with the supporting infrastructure they are building? How many new modern cities? Do you understand that the Indians are right behind them? And that between those two alone, the US emissions will soon be a minor blip on the CO2 scene? We are already #2 – and slipping down the list very quickly. That’s not to say that we shouldn’t upgrade our energy network with as much “non-carbon” energy as practical, but it does mean that your “stop emissions” scenario is simply not practical for the foreseeable future. And THAT, my friend, is reality. Try reading Christian Gerondeau’s book – “Climate: The Great Delusion”.

        The second part to that first statement is this – sooner or later, “warming” will be something you or your descendants will wish fervently for. How long has humanity been in the present interglacial? How long have interglacial periods been in the past? Do you think this interglacial will go on forever? Do you understand that converting the world’s economies to handle “only” a warm world with minimum or no-carbon energy sources as you propose would be a death sentence for far more of the world’s population than any likely degree of warming would be? I presume you know that the excess death rate from cold far exceeds that from heat? The best case scenario would be to “adapt” to whatever conditions prevail in the future. One does not do that by throwing away one’s options.

        I do, frankly, fear what could happen. There is evidence of various kinds that provides reason for such fear.

        The only “evidence” so far comes from models. I’m an aerospace (read: spacecraft) systems engineer; I’m more than familiar with models – and with what they are and what they aren’t. And I don’t believe those that are telling you how terrible the future will be. Of course, YMMV.

        I will not live my life nor abandon my grandchildren’s lives to fear. I’ve seen on this blog what state the “science” is in – and that short look has only confirmed what I already knew before coming here. If the science ever grows up, it may be worth listening to. But at present, I have no reason to draw the same conclusions as you. And far more reason to draw other conclusions.

        The claim that there is no viable human civilization without constant or rising GHG emissions seems to contradict your earlier idea of a largely nuclear plus renewables transition.

        Not at all. The largely nuclear plus renewables transition is only logical – in time. Which is why the Chinese and Indians are pursuing it. And why the rest of the world will eventually follow them – AFTER they realize that fossil fuels alone will not sustain 9 billion humans forever. But they will get there first – our “head start” civilization is our greatest handicap. More – my largely nuclear plus renewables transition (and that of the Chinese, etc.) does NOT mean the elimination of all fossil fuel usage.

        Note that at present the immediate cessation of fossil fuel usage (as some on your side of the dance floor have demanded) would bring the economies of the world to a dead stop. And shortly after would cause the deaths by various means of a large part of the world’s population. Without fossil energy there is not, nor will there be, sufficient alternate power to keep them alive or to keep the economies running at even a minimal level for at least the next 20 years or more. I’ve written some minor comments on this blog about some of the challenges involved in the transition.

        I don’t expect that you advocate immediate cessation, but there are those who do. It’s an extreme position that deserves no respect.

        Of course, at this point in time, the US is heading down that road due to present energy policies. So we may get a taste of that future sooner than expected.

        Keep in mind that it was exactly that rising GHG emissions scenario that vaulted this country into its present place of prominence in the world. It wasn’t the armed forces or war as some seem to believe – it was commerce, driven by a fossil fuel/CO2 economy. Do you believe this country can survive the coming “fall from grace”?

        Long ago, a very wise man said these words –

        It sounds very pessimistic to talk about western civilization with a sense of retreat. I have been so optimistic about the ascent of man; am I going to give up at this moment? Of course not. The ascent of man will go on. But do not assume that it will go on carried by western civilization as we know it. We are being weighed in the balance at this moment. If we give up, the next step will be taken – but not by us. We have not been given any guarantee that Assyria and Egypt and Rome were not given. We are waiting to be somebody’s past too, and not necessarily that of our future.
        The Ascent of Man – final chapter, Jacob Bronowski

        The descent into the nether regions of History that he speaks of would be fueled and sped by fear coupled with the failure to face and overcome that fear. China is facing and overcoming its fear by preparing its country and people for a future of adaptation. Are we smart enough to do the same? If not, then our fate will become that of Assyria and Egypt and Rome.

        The Stalinist Russia metaphor seems to me a bit stretched. Personally I think that reducing the risk of AGW dramatically is the pro-people thing to do. It might not be “pro personal freedom to pollute”, however, and making it “pro-poor” would take a bit of redistributive policy. But that’s doable if we choose to.

        “Reducing the risk of AGW dramatically” means what? In the face of increasing CO2 emissions by every nation that can manage it, just what do you expect to do – reduce our emissions in order to make a meaningless contribution to a barely measurable reduction in future temps while at the same time reducing our capability to survive the future catastrophes of “climate change” that, if real, will come regardless of what we do? What kind of “climate change”? What kind of catastrophes? Until you can answer those questions, there is no possible long term policy that would not carry greater risk than “business as usual”.

        Do you understand that the reduction you look for would reduce our survival potential – as a nation – and as a race – until such time as we can replace present energy sources AND provide equivalent levels for future population? Think about it – Haiti/Japan – which one survived better, which one will recover faster? Why?

        As for the “Stalinist” thing – he killed (how many?) millions of people? How many more millions would die if the carbon reduction you desire is too soon, too fast or badly handled? Do you expect either the UN or the US government to handle it right? Given the present disaster that passes for energy policy in the US? Much less the UN disasters perpetrated in Iraq, Rwanda, the Congo and several dozen other places around the world?

        Do you REALLY trust them to do it right? Then why aren’t they doing THIS right?

        http://www.jpands.org/vol16no1/goklany.pdf

        Because if they do it wrong, it’ll make Stalin look like an amateur.

        I’m not happy with this because it’s not nearly detailed and precise enough, but it’ll have to do for now because I’m out of time.

      • Paul –
        I would also add that you’ve fallen into the “binary solution” trap. Which is also what I did in answering.

        One of the more obscure truths in this life is that there are ALWAYS AT LEAST three solutions for every problem. IF…OR… is a form of false logic when applied to human problems.

      • The null hypothesis is natural climate change (Roy Spencer)

      • Paul, I’m not sure that the best experiment isn’t already being provided for us by mother nature free of charge. We will have a weak solar cycle, a negative PDO, and an AMO that will soon be going negative. All we have to do now is collect the data from the experiment.

      • Yes, except it doesn’t begin to address the question of what caused the warming trend in the 20th century.

        For which the best explanation I know of still seems to be anthropogenic GHG emissions.

        What are the other plausible explanations for the trend? Or is it just assumed to be essentially irrelevant?

      • The first question is – what caused the greater warming at the beginning of the 20th C?

        The second question is – what caused the greater warming at the end of the 20th C? And what evidence other than correlation is there that they have different causes? Correlation is not causation.

        AGW is only an explanation if it’s demonstrable that the cause of the former is not the cause of the latter.

        I didn’t assume anything that I am aware of other than the fact that when the known or presumed natural variabilities change it should be of value in attributing their effects. This is self-evident, is it not?

      • Paul Baer

        Yes, except it doesn’t begin to address the question of what caused the warming trend in the 20th century.

        For which the best explanation I know of still seems to be anthropogenic GHG emissions.

        Are you referring to the “warming trend” of the late 20th century (1971-2000) or the statistically indistinguishable one of the early 20th century (1911-1940), which occurred before there was much human CO2?

        Studies by several solar scientists attribute around half of the total 20th century warming to the unusually high level of solar activity (highest in several thousand years), with most of this occurring in the first half of the century.

        The GMTA record shows a cyclical warming pattern, with multi-decadal warming cycles of ~30 years followed by multi-decadal cycles of slight cooling, also of ~30 years, while atmospheric CO2 shows a steady increase at a CAGR of around 0.4% per year, since measurements started in 1958, with a somewhat slower rate prior to this, as estimated from ice core data.
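        Max's quoted growth rate can be checked with simple compound-growth arithmetic. The endpoint concentrations below (~315 ppm in 1958, ~390 ppm around 2011) are round approximations of the Mauna Loa record used only to reproduce the figure:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

# Rough endpoints of the Mauna Loa record: ~315 ppm (1958), ~390 ppm (2011)
rate = cagr(315.0, 390.0, 53)
print(f"{rate:.2%}")  # about 0.40% per year
```

That works out to roughly 0.4% per year, matching the figure in the comment.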

        There is no robust statistical correlation between atmospheric CO2 concentration and GMTA. Statistical analyses show the GMTA to be more of a “random walk”. Where there is no statistical correlation, the case for causation is weak.
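        The "random walk" point is that a driftless process can still wander far enough to mimic a trend, which is one reason bare correlation with a monotonic series is weak evidence of causation. A short simulation illustrates this; the step size, threshold, and walk length are arbitrary illustrative choices:

```python
import random

def random_walk(n, step_sd=0.1, seed=None):
    """Cumulative sum of i.i.d. zero-mean Gaussian steps."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, step_sd)
        path.append(x)
    return path

# Fraction of 100-step driftless walks whose endpoint wanders more than
# 0.5 units from the start -- an apparent "trend" with no cause behind it:
walks = [random_walk(100, seed=s) for s in range(500)]
frac = sum(abs(w[-1]) > 0.5 for w in walks) / len(walks)
```

Well over half of such walks end more than 0.5 units from where they started, despite having no underlying trend at all.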

        Cycles in ocean current oscillations (PDO, ENSO, etc.) seem to show a better correlation with GMTA than CO2, and there have been studies showing a link with these to solar activity, but the mechanism has not been established.

        Observations have also shown that cloud cover decreased 1985-2000, thereby reducing the planet’s albedo and increasing the amount of incoming solar radiation reaching the surface, with a reversal of this trend after 2000. This correlates with the warming seen from 1985 to 2000 as well as the lack of warming after 2000, but no clear mechanism has yet been proposed.

        So, you see, there are many “best explanations”, none of which are really much “better” than any other.

        Max

      • randomengineer

        A large majority of the posters believe that the data so far does not falsify the null hypothesis of “no AGW”, and they probably won’t be satisfied that there is AGW unless global temperatures rise several more degrees.

        This is the Smart People Who Get It vs the Poorly Informed Hoi Polloi argument redux. Gee, we’ve never heard this one before.

        And… you’re wrong. Just like the 10,000 other alarmists who make variations of this argument.

        Most of the posters here are familiar with radiative transfer argument and are quite willing to accept that man influences his climate to some (pun!) degree.

        The actual disagreement revolves around “how much influence” which then informs “what needs to be done.” The non-alarmists want to see an acceptable argument that shows, once and for all, an actual quantifiable and repeatable Q (figure of merit) regarding historical human influence vs natural signal. Once human influence can be reliably demonstrated, THEN is when they will discuss “what if anything needs to be done.”

        Alarmists are either politically or ideologically willing to ascribe more human influence than can be actually proven (demonstrated) and naturally gravitate straight to “oh there’s a problem let’s fix it.” They bypass the step where contention needs to be shown as true. Due to ideology or politics they happily accept an inflated or imagined 51% chance of being right and then claim skeptics can’t do math.

        The actual argument is skeptics vs alarmists who don’t get Q.

      • This is the Smart People Who Get It vs the Poorly Informed Hoi Polloi argument redux. Gee, we’ve never heard this one before.

        Like the Princess and the Pea. The problem with you troglodytes is that you’re just not refined enough to feel the pea.

        No, I’m not going to follow that with the obvious pun.

        I agree with Random’s summary and further suggest that many AGW “believers” are also pushing for highly costly actions by an individual nation (the US) that would have virtually no measurable impact on the world climate. They push this agenda in the “hope” that it will inspire the rest of the world to follow suit.

        In summary:
        1. We have a basic theory that increased GHGs will lead to increases in worldwide temperatures
        2. We have no reliable data to demonstrate that a warmer world is actually worse for humanity overall in the long term
        3. We do not yet understand if/how much the increase in GHGs will actually impact temperatures in the real world due to multiple other factors impacting the basic science that are not fully understood
        4. There is no realistic, implementable method to stop the rise in GHGs for decades to come, yet costly actions are being implemented that will have no climate impact
        5. Virtually all the potential “concerns” relative to a warmer climate can be easily managed with proper infrastructure construction and management. This can only be accomplished by individual nations.
        6. We are likely decades away from having reliable models that can forecast climate at a regional level more than a short period into the future
        7. Supporters of AGW being a pending disaster for humanity do not seem to like to discuss these points and often revert to name-calling when they are not able to convince others of their position logically

      • Rob:
        1. Yes, we agree that we have this theory. We disagree about the probability distribution of the climate sensitivity (as a proxy for the increase of temperature/degree of climate change as a function of GHGs).
        2. Yes, there can be no “reliable data” about unobservable future states. I think there is good reason to be worried that plausible changes could be very harmful, and the evidence for this, while not conclusive, is substantial. We can talk about the details.
        3. Yes. See 1. But there’s no obvious reason to think that the uncertainties will resolve in favor of less change rather than more (and there are reasons – like carbon and methane feedbacks – to fear the worst).
        4. This is a statement about political economy. Views on this differ. What is “realistic” is not fixed. I will admit that what I think it would take to make actual reductions is not politically feasible today, but it could be in a small number of years.
        5. This does not seem at all obvious to me, but it is subject to concrete discussion – what potential impacts, what infrastructure investments, what costs. A case can be made that this works better as long as the warming is kept modest (which is itself an argument for mitigation, but it suggests delay in mitigation is more acceptable).
        6. Yes, but “reliable” is a relative term. You can plan on the basis of probabilistic forecasts.
        7. Yes, for some, but the reverse is also true – inasmuch as there are reasonable cases for the alarmist “side” of each of these propositions, there is close-mindedness and name calling by (what is your preferred term for non-CAGW-supporters?)

        Obviously my goal here is to represent the way I think the AGW arguments go. I was going to say “the arguments that have persuaded me” but I’m willing to admit that it’s not merely logical arguments that have influenced my opinions.

      • The problem of course is simply that in a system this complex, “proof” is not easy to come by. You are right that the “alarmists” are willing to take potentially expensive actions based on probable, not certain, estimates of the risks of continued increasing GHG concentrations. As I said, the question of who should bear the burden of proof is not one which has a scientific answer; it does depend on your views about the risks associated with various courses of action.

      • Paul

        When you write about “probable risks” could you expand upon your position? What are the risk(s) to the United States, for example, and what do you believe the “probability” is of these risks being realized if no action, or only economically advisable actions, were implemented?

        It depends (imo) on the risks to individual nations and the costs to those individual nations and the results that the proposed actions will accomplish. IMO, most of the actions being discussed will accomplish very little and will cost a great deal. That would seem to be ill-advised.

      • randomengineer

        As I said, the question of who should bear the burden of proof is not one which has a scientific answer; it does depend on your views about the risks associated with various courses of action.

        The notion of burden of proof is where you go wrong. Demonstration of understanding is sufficient even with no proof either way, and alarmists have yet to demonstrate any understanding at all, much less “clear.” To this day we have yet to see what creationists call “macro” evolution but man also has sufficient understanding to know it happens. It’s not been proved in the sense of a court of law, yet we have a demonstration of understanding that is perfectly reasonable enough. Climate alarmists have yet to reach anything even close to this plateau.

        Meanwhile the view of “risk” is laughable. It’s purely imagined. If you have no clear understanding you have zero concept of risk in any direction. Alarmists however claim understanding they don’t have and then seek to imagine various risk factors accordingly.

        Climate change is natural. It must be else there would have never been ice ages. Man influences climate. This is demonstrable via UHI where it’s no mean stretch to extrapolate influence on mesoscale climate from multiple UHI sources.

        Jumping straight to “risk” from noting the obvious (man can also influence the climate) is simply politics. Without a figure of merit the notion of risk has no meaning whatsoever.

      • Paul Baer

        I think if you discuss this with a scientist, such as Judith Curry, you will see that “proof” is not part of the scientific method (as it is, for example, in law).

        But let’s walk through the logic.

        Another argument, which is invalid in science, is the “argument from authority”. This has been used in climate science, as follows: “90% of climate scientists believe that AGW is potentially dangerous, so it must be true”, or “the NAS or RS have endorsed the premise that AGW is potentially dangerous, so it must be correct”.

        This is a logical fallacy, as Wiki tells us:

        Appeal to authority is a fallacy of defective induction, where it is argued that a statement is correct because the statement is made by a person or source that is commonly regarded as authoritative.

        A second invalid argument, which has been used in climate science, as well, is the “argument from ignorance”. This has been used by IPCC to argue for the anthropogenic forcing (for example, AR4 WG1 Ch.9, p. 685):

        Climate simulations are consistent in showing that the global mean warming observed since 1970 can only be reproduced when models are forced with combinations of external forcings that include anthropogenic forcings.

        and (p. 686)

        No climate model using natural forcings alone has reproduced the observed global warming trend in the second half of the 20th century. Therefore, modeling studies suggest that the late 20th century warming is much more likely to be anthropogenic than natural in origin

        This goes in the direction of “our models can only explain the warming if we include anthropogenic forcing”

        Again, Wiki tells us that an “argument from ignorance” is an informal logical fallacy, which asserts that a proposition is necessarily true because it has not been proven false, in that it excludes a third option, which is: there is insufficient investigation and therefore insufficient information to “prove” the proposition to be either true or false.

        In this case, the “insufficient information” is the complete knowledge of all natural climate forcing factors and their impact on our climate (a point that Dr. Curry has also made).

        So we have identified two logical fallacies sometimes used in the climate debate to support the so-called “mainstream consensus” position as supported by IPCC.

        But how about the scientific method?

        A key part of this method, and of science in general is “empirical evidence”

        An essay “An Introduction to Science” discusses the application of the “scientific method” as follows:
        http://www.freeinquiry.com/intro-to-sci.html

        The scientific method is practiced within a context of scientific thinking, and scientific (and critical) thinking is based on three things: using empirical evidence (empiricism), practicing logical reasoning (rationalism), and possessing a skeptical attitude (skepticism) about presumed knowledge that leads to self-questioning, holding tentative conclusions, and being undogmatic (willingness to change one’s beliefs). These three ideas or principles are universal throughout science; without them, there would be no scientific or critical thinking.

        The scientific method involves four steps geared towards finding truth (with the role of models an important part of steps 2 and 3 below):

        1. Observation and description of a phenomenon or group of phenomena.
        2. Formulation of a hypothesis to explain the phenomena – usually in the form of a causal mechanism or a mathematical relation.
        3. Use of the hypothesis to quantitatively predict the results of new observations (or the existence of other related phenomena).
        4. Gathering of empirical evidence and/or performance of experimental tests of the predictions by several independent experimenters and properly performed experiments, in order to validate the hypothesis, including seeking out data to falsify the hypothesis and scientifically refuting all falsification attempts.

        How has this process been followed for AGW?

        Step 1 – Warming and other symptoms have been observed.
        Step 2 – CO2 has been hypothesized to explain this warming.
        Step 3 – Models have been created based on the hypothesis and model simulations have estimated strongly positive feedbacks leading to forecasts of major future warming
        X Step 4 – The validation step has not yet been performed; in fact, empirical data that have been recently observed have suggested (1) that the net overall feedbacks are likely to be neutral to negative, and (2) that our planet has not warmed recently despite increasing levels of atmospheric CO2, thereby tending to falsify the hypothesis that AGW is a major driver of our climate and, thus, represents a serious future threat; furthermore, these falsifications have not yet been refuted scientifically.

        Until the validation step is successfully concluded and the hypothesis has successfully withstood scientific falsification attempts, the “dangerous AGW” premise remains an “uncorroborated hypothesis” in the scientific sense. If the above-mentioned recently observed falsifications cannot be scientifically refuted, it may even become a “falsified hypothesis”.

        So the flaw of the “dangerous AGW” hypothesis is not that several scientific organizations have rejected it, or that it is not supported by model simulations, but simply that it has not yet been confirmed by empirical evidence from actual physical observation or experimentation, i.e. it has not been validated following the “scientific method” .

        And this is a “fatal flaw” (and IMO there is no sound scientific basis for wrecking the global economy with draconian carbon taxes and caps as long as this “fatal flaw” has not been resolved using the scientific method).

        Max

      • The link to the cited paper on the scientific method has changed to:
        http://www.indiana.edu/~educy520/readings/schafersman94.pdf

        Max

      • No one should say “90% of scientists believe, so it must be true.” I would not defend such a statement. Rather I would say, if the people who know the most about it say it is probably true, you might want to act as if it is. If both of your doctors say “there’s a good chance you have X, and we suggest treating it with Y”, the fact that they both admit they may be wrong is relevant but usually not decisive.

        Similarly with your argument against the argument from ignorance: No one asserts that the argument “CO2 has caused 20th century warming” is true because it has not been proven false. Rather the argument goes like this: “we have reason to expect that CO2 would warm the climate beyond natural variability. The climate seems to have warmed beyond natural variability (I do realize that this is also contentious). Therefore we might want to act as if it is the CO2 causing the warming.” If you have one plausible cause for a symptom (I return to the medical metaphor), and no more compelling alternative hypothesis, you might want to address the plausible cause as a precautionary measure.

        Again, my real point here is that there are not logical fallacies involved on either side, rather different judgments about the weight of evidence, the costs of action and the costs of inaction.

        Where I think there is room for discussion is on relatively particular pieces of evidence. I personally think that the claim that “our planet has not warmed recently” is false in the sense that matters; that is, what’s relevant is not whether this year’s temperature is higher than some year in the recent past, but whether (I’m simplifying here) the rolling average is higher than it was, say, 10 years ago or so. This is a discussion that has been going on in a variety of venues recently, though I can’t point to a particular one.
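The “rolling average” comparison described above can be sketched in a few lines of Python. The anomaly values here are invented for illustration; they are not any real temperature record:

```python
# Trailing moving average: compare multi-year means rather than single years.
def rolling_mean(series, window):
    """Mean of each trailing `window`-point slice of `series`."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

# Hypothetical annual temperature anomalies in deg C (made up for this sketch):
anoms = [0.20, 0.35, 0.18, 0.42, 0.30, 0.45, 0.33, 0.50, 0.38, 0.52]

smoothed = rolling_mean(anoms, 5)
# Even though some individual years dip below earlier ones, the 5-year mean
# at the end of the series is higher than at the start:
print(round(smoothed[0], 3), round(smoothed[-1], 3))  # -> 0.29 0.436
```

The point of the sketch: whether “the planet has warmed” depends on comparing smoothed means, not on whether one particular year beats another.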

        As far as the claim that “net overall feedbacks are likely to be neutral to negative”, I’m behind on the research on this question. What is the evidence for this that you find most compelling?

        And, of course, there’s always this debate about “wrecking the global economy with draconian carbon taxes.” I admit that the impacts of carbon taxes or caps are uncertain and may be costly, but the economy seems to bounce back from pretty severe shocks in a couple of years. I’m not confident that the climate system is that resilient.

      • “if the people who know the most about it say it is probably true, you might want to act as if it is.”

        Paul,

        The problem is that those people are mostly bureaucrat scientists (anti-scientists), and they know the least about it.

        If they say something is true, then you might want to act as if it is not.

      • Paul Baer

        Your “argument from authority” regarding “90% of the scientists, etc.” is simply a rewording of the other one. It remains an “argument from authority” and, hence, a logical fallacy.

        “Our planet has not warmed recently” is a statement of fact rather than conjecture (based on physically observed data, assuming these are factual). Its relevance in the “overall scheme of things” is a nice hypothetical theme of discussion, but does not change the actual physical observations.

        You ask about recent observations on climate sensitivity: Since AR4 was issued, there have been studies based on CERES and ERBE satellite observations (Spencer & Braswell 2006, Lindzen & Choi 2009/2011, Spencer 2011) which have shown that net cloud feedback is negative, rather than strongly positive as IPCC model simulations had assumed, and that the overall 2xCO2 climate sensitivity is likely to be around 0.6C, rather than 3C as estimated (on average) by the IPCC model simulations. These studies all came out after AR4, so were obviously not mentioned there; let’s hope they do get mentioned and considered in a future AR5 report (if such a report ever really gets published).

        We can get into discussions about the “pros and cons” of “wrecking the global economy” to fight an “imaginary hobgoblin” (Mencken), but I think it probably makes more sense to concentrate on the “science” behind this “hobgoblin” first.

        And, as I have shown you, it is not very robust.

        In the scientific process, it is still an “uncorroborated hypothesis” today, as I pointed out; if the recent cooling despite CO2 increase to record levels continues for another few years, it will become a “falsified hypothesis” (and IPCC can fold up).

        Max

      • Re: “argument from authority”

        Equally fallacious: “A statement is incorrect because the statement is made by a person or source that is commonly regarded as authoritative.”

        The statement “X% of climate scientists say Y” proves nothing, but at what point do you think somebody would be entitled to give some weight to such a statement?

  14. Judith

    While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.

    I am not being overconfident when I dismiss human impact because that is what the data says:

    There was a five-fold increase in human fossil fuel use, from about 30 to 170 M-ton of carbon, in the recent warming phase from 1970 to 2000 compared to the previous one from 1910 to 1940. However, the two phases’ global warming rates of about 0.15 deg C per decade are nearly identical, as shown in the following graph.

    http://bit.ly/eUXTX2

    In the intermediate period between the two global warming phases, from 1940 to 1970, there was global cooling despite an increase in fossil fuel use of about 70 M-ton, as shown in the following graph.

    http://bit.ly/g2Z3NV

    And since about 2000, there has been little increase in the global temperature despite a further increase in fossil fuel use of about 70 M-ton, as shown in the following chart.

    http://bit.ly/h86k1W

    Either change the data or dismiss AGW!
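The “warming rate per decade” figures compared in this comment are ordinary least-squares trend slopes. A minimal sketch of the computation, using an idealised series rather than the actual data behind the linked charts:

```python
# Ordinary least-squares slope of temperature against year.
def ols_slope(years, temps):
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den  # deg C per year

# Idealised series rising at exactly 0.015 deg C/yr over 1970-2000:
years = list(range(1970, 2001))
temps = [0.015 * (y - 1970) for y in years]

trend_per_decade = ols_slope(years, temps) * 10
print(round(trend_per_decade, 3))  # -> 0.15
```

Applied to two different periods, this is the computation behind statements like “both phases warmed at about 0.15 deg C per decade.”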

  15. “Separating natural (forced and unforced) and human caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos. While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.”

    In your last sentence, Dr. C., you’re guilty of an asymmetry. The IPCC and their cohorts routinely and uniformly exaggerate their case for AGW. Most knowledgeable skeptics, that is, skeptics with a science background, do not dismiss a small human impact. Sure, the skeptics who do entirely dismiss AGW do so overconfidently, but that’s pretty much by definition… hence it’s not a particularly meaningful observation.

  16. Mercifully, that is one of the shortest things I’ve read by John Nicol.

    Since he denies multiple lines of evidentiary research, e.g. satellite data, empirical observations, ice cores, modelling, etc., and rejects the summary of the IPCC and references to thousands of papers, I suppose it’s not entirely surprising that you wish to see him as a rational skeptic. Judith Curry, at least, thinks he raises important questions. He can usually be found claiming that people “underappreciate” the benefits of an increase in carbon dioxide, what with it being the basis for plant growth, and all that. He excoriates the public for reasonable and informed concerns about agriculture, clean water, increased poverty, and other almost certain effects of dangerous climate change in many regions. It’s hard to imagine what sort of disdain one has to have for one’s fellow human beings and for the natural world to be in John’s state of willful denial. But no matter…

    Judith Curry wants to engage in a more serious critique by identifying the most obvious nonsense in Nicol’s understanding of physics and climate science. On honest science sites, people tend to wonder how someone like John, with a PhD in physics, who claims to speak with superior knowledge of climate science, could make such elementary errors.

    Maybe it’s great that Judith wants to show the gentle reader why it is best to be open to the idea that someone like John, demonstrating what can only be described by any objective observer as astonishing gaps in his knowledge of the basics (never mind current knowledge of fast moving climate science), could be a person who accurately articulates both the important problems with climate science and the evolving strength and consensus of the science. But it just seems so unlikely, almost to the point of an error in reasoning, to place any confidence in such a source.

    Oh… and I wonder if it matters, even in just some small way, that he is associated with a think tank expressly funded to campaign against regulating emissions.

    • Any funding of people opposing mainstream climate science is massively outweighed by government funding of the latter. http://www.faststartfinance.org/content/contributing-countries
      The IAC review of IPCC has revealed irrefutable evidence of political interference, lack of transparency, bias, failure to respond to critical review comments, vague statements not supported by evidence, and lack of any policy to preclude conflict of interest. Anyone (outside climate science) with any engagement with science can recognise AR4 as a ‘snow job’. The IAC report is available at http://reviewipcc.interacademycouncil.net/report.html
      This, coupled with the revelations of ‘climategate’, the hockey stick and ‘hide the decline’, undermines any confidence in the findings of the IPCC.

    • Dear Martha,

      I agree wholeheartedly with the need to limit the great atmospheric experiment. It seems only prudent. But what if much of the science is wrong? The models reveal nothing but the expectations of modellers.

      http://judithcurry.com/2011/04/25/science-without-method/#comment-64982

      Earth systems are themselves dynamically complex and resist deterministic and correlative methods. The satellites with their God’s eye view of global energy dynamics say something entirely different about the causes of recent warming, and the results are dismissed with minimal rationale, other than when they say the right things about spectral absorption (Harries 2000) or are useful for analysing cloud feedback (Dessler 2010). What is a poor hydrologist and environmental scientist to make of this?

      We know that we have decadal variability in rainfall – we have known since the 1980′s at least. These are linked in large part to patterns of standing waves in the Pacific, which are in turn linked globally in chaotic spatiotemporal Earth systems. There is no one cause of any of this – but the blocking patterns in the atmosphere seem driven in part by ‘top down’ solar UV forcing of Earth systems – especially at the poles. A very new idea but one with considerable potential – http://iopscience.iop.org/1748-9326/5/3/034008.

      The associated decadal sea surface temperature patterns (SST) are very useful because they are persistent. I can say that Australia will be flood prone for another decade or three, the Sahel will continue to green, India’s monsoons will intensify and the Americas will be dry, especially in the west. There will be fewer cyclones in the Gulf of Mexico and more in Australia and Indonesia. These are just simple rules evolved over 100 years of observation. Because La Niña is involved, we are looking at more low level cloud (associated with cooler SST) in the Pacific and global cooling for a decade or three as well. You can surely see for yourself what this will do for the politics of decarbonisation of the global economy – unless the narrative is radically changed and soon. Do you want to continue to rely on a ‘science’ that is certainly wrong on the dynamic of Earth systems and on the energy budget?

      There is an effective approach to decarbonising the global economy that has multiple paths and multiple objectives. Health, education, safe water, sanitation, good corporate governance and free trade to stabilise population. Conservation and ecological restoration. Reducing black carbon and tropospheric ozone. Investing in energy research. There are many opportunities and many reasons to do this – primarily the emergence of humanity into a brighter future. It is happening, is obvious, is inevitable, is relentlessly pragmatic and is probably where we should focus rather than on the nonsense from both sides that is the climate wars.

      Robert
      Chief Hydrologist

      • …where we should focus rather than on the nonsense from both sides that is the climate wars.

        Unfortunately the IPCC’s AGW advocacy is being used as the pretext for politicians to impose ‘economic remedies’ such as carbon taxes and ETS without any proof that these will have any measurable impact on temperatures or climate. Such ‘remedies’ will not only be ineffective but will also be economically damaging. Has anyone compared the economic performance indicators (GDP, inflation, unemployment, national debt, foreign exchange reserves) for European countries with ETS cf. those of countries without ETS? I suspect such comparisons may be odious/toxic to AGW proponents.

      • Yes – concentrating on taxes is counterproductive, ineffective and not likely to be implemented by anyone outside of the benighted west – the latter is a good thing for the world’s suffering poor. Taxes or carbon trading is the thing I didn’t mention. But limiting the great atmospheric experiment is not automatically about taxes – there is a false science/policy nexus here that should be more openly debated.

        I have tried twice now to engage Martha in a discussion – I fear she is more concerned with dropping by and flaming.

    • This post of Martha’s is unhelpful, disconnected, gormless and incoherent. I’d like my time back.
      Cheers,
      Big Dave

    • Latimer Alder

      Am I right in deducing that you don’t like the guy very much?

      Because though you talk about ‘obvious nonsense’, you don’t actually come up with any examples.

      Looks like it’s personal and emotional to me.

      • Well, there is this:

        “However, the uncertainty in Quantum Mechanics which Einstein was uncomfortable with, was about 40 orders of magnitude (i.e. 10^40) smaller than the known errors inherent in modern climate “theory”.

      • Latimer Alder

        ??

        My helpful remarks were addressed to Martha. The relevance of your quote from JC escapes me.

      • The quote is not from JC, it is from Nicols. If I understand Martha’s point, it is Nicols’ “obvious nonsense” she was criticizing.

      • Latimer Alder

        Perhaps you’ll have to explain why it is ‘obvious nonsense’. I think I understand a little about what Einstein meant, remember a bit of Heisenberg and Planck, and Nicols’ remarks don’t seem obviously wrong-headed to me.

        Please elucidate – so that those of us a long time away from our study of QM can be reminded of why Nicols should be so ridiculed by climatologists.

      • Latimer,

        the quote is comparing the minimum uncertainty dictated by quantum mechanics in knowing both the position and momentum of a particular particle under investigation to the error in a computer simulation of a model of macroscopic physics. So the difference in scales makes this comparison ‘trying’, to say the least.

        More than that, the author is clearly picking the minimum uncertainty as though that is some kind of meaningful marker in any real sense of quantum mechanics. In most experiments testing the fundamental implications of quantum mechanics, the error involved is never anywhere near the minimum uncertainty because the wavefunctions involved do not resemble the minimum uncertainty wavepackets and the equipment used in such experiments does not have resolution down to one part in 10^34.

        On top of that, Einstein’s role in the quote is simply to bolster the author’s position and has absolutely no relevance in the context of what is being discussed.

        So, first, it’s nonsense to compare the minimum quantum uncertainty to a macroscopic description of a physics system. Second, it’s nonsense to assume that any real experiment/simulation will attain minimum uncertainty, even for quantum systems. Third, it’s nonsense to bring up that Einstein was or was not comfortable with quantum mechanics when nonsensically comparing quantum mechanics to a climate model.

        Is that a bit clearer?

        If you want a more meaningful comparison, then one should compare the ability of numerical methods (like density functional theory or other ab initio techniques) to calculate and predict the quantum mechanical parameters of atoms, molecules and solids. There are still substantial disagreements between theory and experiment. I guess by the author’s standards, that means quantum mechanics isn’t ‘science’ either…
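To put rough numbers on the scale argument above: the Heisenberg bound constrains the product of position and momentum uncertainties for a single particle, and the scales involved are nowhere near macroscopic. A quick sketch (the constant is the standard CODATA value; the example momentum spreads are my own assumptions, chosen only to show the scales):

```python
# Heisenberg bound: dx * dp >= hbar / 2, a statement about one quantum
# particle, not an error bar on a macroscopic simulation.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s (CODATA 2018)

def min_position_uncertainty(dp):
    """Smallest position spread allowed for a momentum spread dp (kg*m/s)."""
    return HBAR / (2.0 * dp)

# Electron with momentum pinned down to ~1e-24 kg*m/s: dx is atomic-scale.
print(f"electron: {min_position_uncertainty(1e-24):.2e} m")  # ~5.3e-11 m

# A 1 kg mass with momentum known to 1e-6 kg*m/s: dx is absurdly tiny,
# which is why the bound is irrelevant to macroscopic model error.
print(f"1 kg mass: {min_position_uncertainty(1e-6):.2e} m")  # ~5.3e-29 m
```

This is the sense in which comparing the quantum minimum uncertainty to climate-model error is a category mistake: the bound only bites at atomic scales.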

      • Maxwell said it very well, I think.

      • Latimer Alder

        @maxwell

        No need to instruct me on the uncertainties of quantum mechanics. It was the failure of the real world to act in accordance with theory that caused us to chuck the theory and me to decide on a career outside academe.

        But I think you are doing no more than arguing about whether the relevant exponent in Nicols’ essay is 45 or 40 or 35. His rhetorical point is a good one and you are doing no more than reinforcing it by arguing about the detail.

        Climatology gets nowhere near – not even by many orders of magnitude – the level of proof and predictability expected in many truly scientific fields. Whether it is 10^25 times or 10^45 times worse isn’t really the point. It is a stupendously large number.

        And yet climatologists continually overplay their hand with nonsense about ‘the science is settled’, ‘there is no debate’, ‘why should I show you my data – you’ll only try to find something wrong with it’……….

        And each time a lay person, like I once was, decides to take a look for themselves, they see that the claims are vastly overblown, the physical evidence does not match the theories, and that the climatologists are blithely adopting some very dubious positions. More like spivs than scientists.

        I like Nicol’s analogy. If it’ll make you happy I’ll move ten thousand million times in your direction, and agree that the relevant exponent is 30, not 40. You can’t say fairer than that, can you? It’s still a very, very big number.

      • Latimer,

        you’ve demonstrated a clear lack of reading comprehension in this conversation.

        Nowhere do I make the point that the number of orders of magnitude of uncertainty set by the minimum uncertainty value matters in Nicols’ argument.

        I was pointing out that it’s a stupid comparison because one is a quantum system (i.e. very small) and the other a very large macroscopic system. From a purely physical perspective, the two situations are not comparable.

        More than that, in a realistic comparison of the error from numerical calculations of quantum mechanical parameters of material systems to the climate models, we find that the errors are very similar! So not only is it a stupid analogy and comparison, it’s wrong too!

        ‘Climatology gets nowhere near – not even by many orders of magnitude – to the level of proof and predictability expected in many truly scientific fields.’

        Maybe you could give us an example or two of currently researched calculations that have many orders of magnitude less noise or better agreement to experiment than climate models. As someone who routinely attends lectures and seminars on such calculations in the context of the quantum mechanics of molecules, I have a hard time believing that any exist, but am ready to be proven wrong.

        This will be a great test of the scientific process. Latimer’s hypothesis: other currently researched fields have much more accuracy in determining experimental results from theory. Null hypothesis: other currently researched fields have about the same amount of error in such calculations, let’s say within an order of magnitude. Let’s find out if he can sufficiently reject the null.

        But if you don’t have concrete examples of current research (which is what would be comparable to current climate research), then your argument is just as meaningless as Nicols’.

        It would be par for the course though.

      • Latimer Alder

        @maxwell

        Thanks for your helpful remarks about my comprehension ability.

        And thanks also for reinforcing my point yet again that by arguing about the detail of Nicol’s rhetoric you are missing the big picture.

        No point in continuing this sterile discussion any further, apart from noting that in QM it is normal to make predictions and then test them against reality. It is this last step that helps to build credibility in its answers – however bizarre they may seem.

        But in climatology the sequence seems to be ‘write model. Get it to compile. Run it once. Issue press release about dreadful consequences. Ensure that there are no backups then modify it so that the original cannot be rerun. Run version 2. Issue press release (citing version 1 as evidence) that things are even worse than expected…and so on and so on.

        I don’t see a step anywhere in that sequence called ‘check against reality’, which is the bit that you need in order to test the ability (error) of the model. But then… it’s climatology. I don’t expect much. Lots of self-congratulation, but no actual science.

        I seem to remember that even Newtonian mechanics can do things to six-plus significant figures and be shown to be right. How does that compare to your climate model?


      • Latimer,

        this is how uninformed you are about these processes.

        ‘I seem to remember that even Newtonian mechanics can do things to six-plus significant figures and be shown to be right. How does that compare to your climate model?’

        ANY MODEL CAN GIVE ANY NUMBER OF SIG FIGS UP TO 16! The number of significant digits to which a given physical model can calculate a parameter depends on a computer’s ability to represent any given number. Currently, the standard highest precision at which a number can be calculated on a typical processor is about 16 significant digits. That’s the same no matter what kind of mechanics is being used.

        In terms of agreement to a specific amount of significant digits, the bar to be held to is on the experiment side. Classical mechanics is known to such high precision because the experimental equipment is very simple and, therefore, can be known to several significant digits. The same is not true of satellite based measurements on climate.
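The point about printed digits versus accuracy is easy to demonstrate: IEEE-754 double precision carries roughly 15–17 significant decimal digits no matter what is being modelled, and digits beyond that are not information. A Python sketch using only the standard library:

```python
import sys

# Doubles guarantee 15 decimal digits round-trip; machine epsilon ~2.2e-16.
print(sys.float_info.dig)      # -> 15
print(sys.float_info.epsilon)  # ~2.220446049250313e-16

# Any model can *print* 16 significant digits...
print(f"{1 / 3:.16g}")  # -> 0.3333333333333333

# ...but a contribution below machine epsilon is silently lost, so printed
# precision says nothing about the physical accuracy of the model:
assert 1.0 + 1e-17 == 1.0
```

In other words, the number of printed digits is a property of the arithmetic, not of the physics; the meaningful precision is set by the measurements the model is tested against.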

        What’s funnier to me, and to anyone else who actually knows what he/she is talking about, is the fact that you think that climate models are different from Newtonian mechanics.

        CLIMATE MODELS ARE BASED ON NEWTONIAN MECHANICS! Pick up an actual book or actually investigate ANY of the claims you’re making. For someone claiming to know what is and what is not science, you certainly play an informed person quite well.

      • Latimer Alder

        @maxwell

        Sorry Maxwell.. you must be doing ‘post normal science’ or ‘pre-menstrual tension’ or something.

        The bit that you seem to have missed (but then you are a climatologist after all, so not much is really expected of your ability to understand science) is the ‘and be shown to be right’.

        You may not have heard of these things called ‘experiments’. They are where the theory (e.g. models) is compared with actual measurements. And we have a funny old convention that if the theory and the experiment don’t agree, we change the theory!!!

        Not like in climatology where you just say ‘the data is consistent with our theory’ or ignore it altogether a la Hockey Stick. I know you will find this concept difficult to grasp…but then new and exciting things often are.

        As to climate models being based on Newtonian mechanics, well, this doesn’t come as a shock to me. A bolt from the blue would have been if you had said ‘and we regularly evaluate them against reality and our reproducible error is 10% or 20% or 100% or 10^-34%’.

        Just so long as your model gives you the results you wanted before you started… enough to get a pal-reviewed paper, a mention in the citation index and the next grant… that seems to be all the checking that is done.

        And having done 30 years in IT – including writing some early high atmosphere reaction kinetic models – I’m tolerably familiar with the numerical accuracies of computing equipment.

        But just doing the sums ain’t making a good model…it actually has to reflect reality as well. A shocking concept for you ..but you will, with time and a break from academe, come to accept it.

        Perhaps quantum mechanics has more in common with dynamically complex spatio-temporal chaotic Earth systems than imaginable at first glance.

        One is the ontological evolution of a probability density function in the Schrödinger wave equation. The other is predictable only as a probability density function of Earth systems – I refuse any longer to use the term climate, as it has been so debased – inhabiting a finite phase space in the ontological evolution of planetary standing waves.

        Much better than the tomfoolery of reductionist and correlative alchemy – the latter as a description of Earth systems has real meaning and substance.

      • Chief,

        I think that was the point of Tomas’ post from some weeks back. I think he was very much ‘inspired’ by the formalism of quantum mechanics.

        One question: what is the physical relevance of ‘planetary standing waves‘ in terms of the system? Are they ‘real’, so to speak, as in heat waves or ocean waves? Or are they more like a physical abstraction that would be more akin to a wavefunction in QM?

        It’s interesting nonetheless. Maybe ‘climate’ has its own multivariable wave equation. Is it homogeneous or inhomogeneous? Weird.

      • Hi Max

        I was joking about quantum mechanical similitude.

        Standing waves in the coupled ocean/atmosphere system are another thing. These are long standing patterns – the AO, SAM, PNA, PDO, ENSO etc – that are persistent and change state occasionally.

        Cheers

  17. Someone please clarify.

    Is the morality/immorality referred to here general good character, such as the foundations of Judeo-Christian morality? Or is it rather the expectation of advocacy in promoting a political agenda through ‘science’ – such as “gotta save the planet BS”?

    • Yeah. That’s the problem, right there.

      • Depends on who’s talking about it.

        And which part they’re talking about.

        And how little they know about the subject.

        And a whole lot of IFs.

        And a whole lot of assumptions.

        IOW – nobody knows. I’ve been through that mess more than once and I’m not gonna do it again here.

  18. Alexander Harvey

    There does seem to be a low signal-to-noise ratio in one segment of the published literature. Not in the basic science, in which I would include people publishing work in their sphere of expertise, but in the free-for-all areas such as climate sensitivity and its sidekick, attribution. Which I guess is the bulk of what feeds the internut debate.

    The criticism over the blend of real and simulated data is probably justified in many such papers, and it does make the arguments difficult to follow and sometimes leaves one to wonder what, if anything, has been demonstrated.

    On the subject of the models, I do wonder if the scientific community is overly polite about some of the groups and whether this tends to discredit the better groups by association. Put more brutally: I think that, if asked, those who know about these things would rank the models (and hence the groups) and be very rude indeed about those at the bottom of the pile.

    Criticisms that I have heard include that models that don’t demonstrate blocking ought to be ignored for climate research, and that those that don’t provide information on how real-world data has been used to characterise their model should not be allowed into ensembles that are used to assess model skill. Perhaps the oldest complaint, which related to flux adjustments, has finally led to such models being ruled out of order.

    Regarding these weak models, I am not implying that those modellers who say that their models are not heavily characterised by long-term real-world measurement data are being untruthful. We only ever hear from very few of the modelling groups, but that does not mean that all groups behave in the same way. Now maybe I am wrong, but if that is the case perhaps someone would vouch for the other modelling groups. What little I do hear is that some of the models are more or less useless, but even then the models are not named; I guess those in the business would know which ones they would be.

    I do understand that climate modelling is to some degree a matter of national prestige and that, due to the way it is constituted, the IPCC system is in a bit of a bind if it snubs some nations’ efforts. I also know that the CMIP people have been a bit rude about some aspects of model performance, even at quite a basic level, e.g. getting the global mean temperature to within even a few degrees of what is the case (in some of the CMIP3 models we were toast in pre-industrial times). I do not think that the due and valid criticism that the models receive from the modelling community itself is sufficiently communicated outside what is a relatively small group of scientists.

    As far as the scientific method goes, I would welcome a more critical eye being cast on the models but cast in a detailed fashion designed to aid progress and understanding.

    As I have often stated I am very concerned about what the next round of models will have to say about regional impacts, which in some cases may be national impacts. If the scientific method is not addressing itself to the awkward task of separating the modelling wheat from the chaff then it is doing all of us a disservice.

    Alex

  19. While the data sets do not agree on the actual measurements, none of them supports a tropical upper-troposphere warming at over twice the rate of the surface. The idea that there is even some support for the models, and that the models have some reality, is simply wrong. The continued carrying of this corpse simply shows how some will not allow ANYTHING to get in the way of their narrative.

    The scientists, if there are any still allowed to work in the climate arena, need to start over with their modeling using ALL the most recent data on the atmosphere and influences. Until they do, this is a total waste of time and money.

  20. Have you noticed?

    The magnitude of the global cooling rate since 2002 (http://bit.ly/dRdMYP), 0.08 deg C per decade, is greater than that for the period from 1940 to 1970 (http://bit.ly/g2Z3NV), 0.03 deg C per decade!

    Will this trend continue?

    • The rate of cooling was faster over the last decade than from 1940 to 1970.

    • I don’t know, will this trend continue?

      http://www.woodfortrees.org/plot/gistemp/from:2002/trend

      Looks like choosing a different data set provides exactly the opposite conclusion.

      • Latimer Alder

        ‘Looks like choosing a different data set provides exactly the opposite conclusion’

        Well that’ll be a first in climatology :-) . Never happened before.

        Every other piece of data is entirely unequivocal. Even the errors all point in the same direction. Which is what led to the undeniable conclusion that the ‘Science is Settled’. There is no room for doubt. If anything, it’s just that the facts are worse than we thought. Only a few short years ago we had Fifty Days to Save the Planet.

        /sarc

      • I was cherry-picking to show that cherry-picking is wrong; did anybody pick up on that?
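The cherry-picking point can be made concrete: on a series that rises and then dips, the fitted trend’s sign depends on the chosen start year. A sketch with deliberately constructed numbers, not GISTEMP or any other real data set:

```python
# Least-squares slope of y against x.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Constructed series: warms 2000-2005, then cools 2005-2010.
years = list(range(2000, 2011))
temps = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.4, 0.35, 0.3, 0.25]

full = ols_slope(years, temps)            # trend starting in 2000
recent = ols_slope(years[5:], temps[5:])  # trend starting in 2005
print(round(full, 3), round(recent, 3))   # -> 0.025 -0.05
```

Same series, opposite-sign trends, purely from the choice of start year; which is why short windows and handpicked endpoints prove very little either way.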

  21. I’m looking forward to your post on the troposphere hotspot.
    How substantial an issue is it? Frequently I hear the lack of a hotspot dismissed as not important, just one part of the model outcome. Is this the case, or is this a major discrepancy that calls a whole model into question? If a model is an enhanced weather model being run for a longer time frame, can the model be considered reliable for anything, and if so, why?

  22. It is my understanding that the “enhanced” greenhouse effect has to do with positive feedback from increased temperatures due to carbon dioxide warming, particularly the positive feedback of increased water vapor. From AR4:

    The so-called water vapour feedback, caused by an increase in atmospheric water vapour due to a temperature increase, is the most important feedback responsible for the amplification of the temperature increase.

    From Roger Pielke Sr.’s blog today:

    “there are a number of other studies which conclude that the multi-decadal global climate models as reported by the IPCC are incorrectly simulating the water cycle which includes the amount of water vapor in the atmosphere.”

    He was also referring to a post by Marcel Crok where he said:

    “…All we can say at present is that the preliminary NVAP data, according to the Null Hypothesis, cannot disprove a trend in global water vapor either positive or negative.
    In addition, there are good reasons based upon both Sampling / Signal Processing Theory and observed natural fluctuations of water vapor ( ENSO’s, Monsoons, volcanic events, etc. ) to believe that there are no sufficient data sets on hand with a long enough period of record from any source to make a conclusive scientific statement about global water vapor trends.”

    If you have a theory whose key quantities cannot be measured:

    a. Can you input unmeasurable data into a computer model and expect robust results?

    b. Can you in fact claim to have a theory at all if the criteria by which the theory is to be proven cannot even be measured?

    Just asking

    http://jer-skepticscorner.blogspot.com/2011/04/case-of-vapors.html

    • Thank you Jerry. That is the most compelling information I have seen so far!
      Pielke Sr. has truly ‘hit the nail on the head’. No wonder the IAC declined to comment on IPCC’s conclusions.

      • Since he has a hammer
        He hammers in the morning,
        He hammers in the evening
        He hammers all the livelong day.
        Can’t ya hear the whistle blowing
        For a dollar and a dime a day.
        ==============

    • Latimer Alder

      Theories about anything are only useful if they can make reliable predictions about what *will* happen. That they can explain what *did* happen is nice, and may be interesting in its own right. But without being able to tell us something about the future, they are merely curiosities.

      In the real world, outside the hallowed halls of academe (with all its quaint shibboleths and rituals and traditions and deference that make the Court of Queen Victoria look like an informal rumpus room), the people who make their living by being proved right or wrong each day are the professional horse racing tipsters. They lay down their predictions in black and white each day…1st Fast Thoroughbred, 2nd Great Runner, 3rd Pretty Good Horse…and by next evening their success or failure is visible to all. No excuses, nowhere to hide…either the horses came in as predicted or they didn’t. If they did, great; if not, the punters follow a different guy’s wisdom.

      Seems to me that climatology is no more than tipstering writ large and on a bigger scale. But with one crucial difference…its predictions are deliberately couched in such terms as to be non-testable. Or at least with plenty of wiggle room and ‘getoutofitability’. Other fortune tellers do the same. Gypsy Rose Lee predicts meeting a tall dark stranger, unless they should be short and blond. The old Scottish weather forecast ‘if you can see the mountain it’s going to rain…if you can’t it is already raining’ is another.

      Which is all a long way to say that I agree with Jerry’s point B.

      ‘can you in fact claim to have a theory at all if the criteria by which the theory is to be proven can not even be measured?’

      And it leads my nasty suspicious mind to wonder why the climatologists persist in trying to pull this trick…of ensuring that their predictions are never testable. You couldn’t get away with such sleight of hand in the saloon bar of the Dog & Duck for more than ten minutes before even the most befuddled drinker would cotton on to the fact that he was being played. Perhaps they should spend less time in the academic world and more on the racecourse…or even the D&D.

      Or even…though I fear I will never live long enough to see the day…make some testable climate predictions ahead of time and stand by their results. Doing so – and being consistently right – would rebuild their credibility considerably…even with a hardened sceptic like me. And might even lead to a new career in racing journalism :-)

    • The very fact that they cannot prove the basics of the theory, i.e. increased water vapor or the tropical troposphere signature etc., is the very reason that they make such absurd pronouncements as regards other events being “consistent” with their theory.

      This is also why, and obviously so to people who pay attention, the climate science community is riddled with corruption. The very fact that people who claim to be “honest brokers”, such as the esteemed Richard Muller, buy into the theory while simultaneously condemning the methods and persons responsible for the “promotion” of the theory is not only hypocritical, it knowingly continues the promotion of a scientific fraud upon society. If the scientists behind the theory are using unscientific methods to prove it, why would anyone with an ounce of common sense, or integrity for that matter, buy into the theory that these corrupt scientists using corrupt methods are so determined to defend? $$$$$$$$$$$$$$$$

      • Latimer Alder

        That is an interesting point.

        It may be that climatology is not riddled with corruption. It may be that we outsiders are just being misled into thinking that, because it gives every appearance of being largely run by immature and childish individuals acting in a way designed to give rise to those suspicions, the suspicions must be true. It may not be so.

        But Jeez, they don’t go out of their way to demonstrate that their field is whiter than white. Everywhere you turn there are examples of sharp practice…from pal-review to FOI evasion. From hokey hockey sticks to imaginary Chinese data. From rigged ‘independent’ whitewashes to ‘grey literature’ in the IPCC report. The latest scandal from UEA is that they won’t comply with FOI because if they showed how their CRU guys worked, nobody would ever publish their work again (see WUWT and CA today).

        All this stuff just stinks to high heaven. As the man says, if it walks like a bunch of shysters, talks like a bunch of shysters and quacks like a bunch of shysters…………….

  23. There is a so-called greenhouse effect. The ‘enhanced’ refers to adding gases and water vapour. The models ‘calculate’ water vapour on the basis of constant relative humidity. There is no shortage of water vapour, and warm air holds more water vapour than colder air.

    Minschwaner and Dessler (2004) suggest that relative humidity might not be quite constant. ‘Their work verified water vapor is increasing in the atmosphere as the surface warms. They found the increases in water vapor were not as high as many climate-forecasting computer models have assumed. “Our study confirms the existence of a positive water vapor feedback in the atmosphere, but it may be weaker than we expected,” Minschwaner said.’ http://www.nasa.gov/vision/earth/lookingatearth/warmer_humidity.html

    Cherries for everyone.
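    The ‘warm air holds more water vapour’ point has a standard quantitative form: by Clausius-Clapeyron, saturation vapour pressure rises roughly 6-7% per kelvin. A quick sketch using the Magnus approximation (the coefficients are common textbook values; the 15 C baseline is arbitrary and illustrative):

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation to saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

e15 = saturation_vapor_pressure(15.0)
e16 = saturation_vapor_pressure(16.0)
print((e16 / e15 - 1.0) * 100)  # ≈ 6.6 % more vapor-holding capacity per kelvin
```

    The constant-relative-humidity assumption in the models converts this capacity increase directly into extra absolute humidity, which is precisely the assumption under dispute here.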

    • This is the biggest problem for me. The assumption that average water vapour concentration is *only* a function of temperature (which, in turn, is *only* a function of GHG concs). The reality is, of course, quite different, as water vapour concs are dependent on a whole host of other variables, with variability on decadal to centennial to millennial timescales.

      The secondary problem is diagnosing a closed loop system. For my sins as a researcher into remote sensing systems, I have had to debug linear closed loop systems in the past. That is, a correctly designed closed loop system that perhaps has an unknown (e.g. software error) problem causing it to break. Observing the parameters of the system is just not a good way to diagnose these systems; a single software error will cause all parameters to drift together. It is almost always impossible to identify causality in this way in a closed loop system. Debugging is typically achieved by breaking the loop and running the system in an open loop form.

      For the atmosphere, let’s say people do find an increase in water vapour to match recent warming. How do you work out whether the increase in water vapour was caused by the warming, or caused the warming?

      My experience here refers to systems which are linear (by design). Add the complication that the atmosphere is complex and non-linear, including the discussions on this blog on chaos, and diagnosis becomes orders of magnitude more difficult – perhaps even intractable. Yet climate scientists lack any kind of humility when faced with these problems; the problems are just dismissed, and advocates place the models on a pedestal not to be questioned. And they wonder why they are failing to convince so many “Joes with science degrees”.
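      Spence’s closed-loop diagnosis problem can be illustrated with a toy model (the coupling constants and bias values are invented purely for illustration): inject a constant bias, the ‘fault’, into either equation of a mutually coupled pair, and both states drift upward together, so the observed trajectories alone do not tell you which equation carries the fault.

```python
def run_loop(bias_in_x=0.0, bias_in_y=0.0, steps=200):
    """Two mutually coupled linear states; a constant bias (the 'fault')
    injected into either equation makes BOTH states drift together."""
    x = y = 0.0
    xs, ys = [], []
    for _ in range(steps):
        x += 0.1 * (0.5 * y - x) + bias_in_x
        y += 0.1 * (0.5 * x - y) + bias_in_y
        xs.append(x)
        ys.append(y)
    return xs, ys

xa, ya = run_loop(bias_in_x=0.01)  # fault in the x equation
xb, yb = run_loop(bias_in_y=0.01)  # fault in the y equation
# In both runs x and y drift upward together; watching the outputs alone
# does not localize the fault. Debugging requires breaking the loop open.
```

      This is the linear case; the argument above is that a non-linear, chaotic loop is harder still.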

      • Hi Spence

        I am not one to argue against nonlinearity and Dragon Kings. The more people like ourselves who see the farce that is attribution and prediction – the better.

        My point was exactly that water vapour is a complex thing and a view should not be based on one presentation by a journalist. That said – other things being equal – warmer air does hold more water vapour.

        Cheers

      • Hi Chief

        I fully agree with your comments here.

        I would note though, whilst warmer air has the capacity to carry more water vapour, this is not a constraint. (Conversely, it can be a constraint that cold air must carry less). I know you realise this but I am a sucker for stating the obvious :)

        It is also interesting to note that the driest places on earth are found in the coldest and hottest regions – e.g. the deserts of Antarctica at one end, Death Valley at the other. You can’t escape from those nonlinearities.

      • Spence,

        Your comment articulates quite nicely what I have been referring to as ‘diagnosis by exclusion’ i.e “we cannot account for the temperature data unless we invoke large positive feedbacks and high climate sensitivity”. What is going under the radar of course, is that this approach assumes that ALL the relevant variables (including positive and negative feedbacks) have been identified, quantified and accurately represented in the computer model on a spatiotemporal scale which is meaningful when compared to the real world. Given the track record of climate science to date (and to plagiarise their terminology) I consider this ‘extremely unlikely’.

      • Absolutely! There is much reason to be sceptical. :)

    • Chief

      The M&D 2004 report, which you cite, even produced a nice chart showing how water vapor content (specific humidity) increased with temperature in their satellite observations.

      A closer look at this chart shows that the IPCC assumption of “constant RH with warming following Clausius-Clapeyron” gives greatly exaggerated values of specific humidity, and that even the M&D model showed much higher SH increase with warming than the actual observations (I have extended the graph to show the full 1K impact).
      http://farm4.static.flickr.com/3347/3610454667_9ac0b7773f_b.jpg

      Max

  24. Christopher Game

    The use of the IPCC “forcings and feedbacks” formalism as a vehicle of argument continues. It is a method of calculation, not a physical theory. It simply carries the previously well known law of conservation of energy into climatology. No one can object to that.

    But to create a science of climatology, laws of nature specific to climatology are needed. Global-scale laws that say something specifically about the energy transport processes of the earth.

    Why is it that, for the year-round global average, the radiation emitted by the solid earth is twice the atmospheric radiation emitted to space? Kiehl and Trenberth 1997 figures are 390 W m^-2 and 195 W m^-2. Balfour Stewart’s steel shell model has this property too. Is this a meaningless coincidence? Can you prove it to be so by some systematic argument? Pray tell.

    Talk of “feedback” is talk about the mathematical structure of a method of calculation, not talk about laws of physics. The relevant law of physics seems to be that the global water vapour content of the atmosphere tracks its global average temperature rather well. Can you explain why these two variables seem locked together? Can you give a precise physical reason why the tracking is at a relative humidity of about 50%?

    A hard one is to explain, in general physical terms, why cloud cover is at about 50 or 60%. Do clouds also track temperature? Why?

    When would the current interglacial period drift into a glacial period, supposing that there were no man-made greenhouse gas emissions? Why?

    These are questions about physics. In general, to answer a question about the precise magnitude of the effect of a small perturbation to a physical system, one needs to know something specific about the dynamical structure of that system. Christopher Game

  25. “What scientific methodology is in operation here?”

    Maybe it would be easier to say what is not the scientific method when it comes to climate science. From what I have read since beginning to frequent climate blogs, when engaging in climate science:

    You do not have to posit a null hypothesis;

    You do not have to formulate your theory in a way that is falsifiable;

    You do not have to include adverse data in your graphs;

    You do not have to work with statisticians when creating new statistical methods in doing your research;

    You do not have to validate your climate models;

    You do not have to be accurate in your predictions;

    You do not have to understand the forcing effect of water vapor in constructing your model;

    You do not have to make public your data and code after publishing (at least not to those who might disagree with you);

    You do not have to comply with FOIA;

    You do not have to conduct experiments to prove elements of your theory;

    You do not have to properly site your weather stations;

    You do not have to adjust (much) for UHI;

    You do not have to actually measure temperature over vast swaths of the land, oceans and atmosphere to calculate global average temperature (or to publish dramatic graphs thereof);

    You do not have to adjust your theory when your predictions are wrong or other adverse data becomes available;

    You do not have to notice, let alone criticize, dishonesty by your fellow climate scientists (at least not the ones you agree with);

    You do not have to engage in fair, open and reasoned dialogue with those who disagree with you;

    You don’t even have to read material written by those who disagree with you;

    and best of all

    You don’t have to admit you are wrong about anything…ever.

    • Latimer Alder

      @Gary

      Please don’t call it ‘climate science’. Whatever these guys do has few connections with science.

      ‘Climatology’ will do. Like astrology or UFOlogy.

      • Latimer,

        Your comment reminds me of when I read Steve McIntyre’s quoting of Esper, stating that the ability to select samples (i.e., cherrypicking) was an advantage “unique to dendroclimatology”. My immediate thought was that it is not unique to dendroclimatology – it is also used by astrologers, water diviners, and all manner of other pseudosciences and quackery.

        The quote is from Esper et al. 2003.

    • Gary M,
      Ouch.

    • While GaryM’s list may seem to be an attempt at humor, I think what he has highlighted is a growing perception of Climate Science and Climate Scientists among technical people who work in other sciences and fields like engineering, particularly outside of academia. The people Judith once referred to as tier 2, if I remember correctly. I would be worried about this perception if I were in the climate science field – it’s very real and growing.

      • Latimer Alder

        Humour???

        I thought it was the draft syllabus for ‘Climatology 101’ as drawn from the experiences of the ‘leaders’ of the field. And, for once, I thought they’d made a good job of it. Very true to life.

      • Just “an attempt at humor”? …well that hurts…

        But more seriously, except for the last one, I have read comments making each of those arguments in various formulations within the last 6 months. Each of them alone can sound innocuous enough. When you list them all together (and there are others) it does not paint a pretty picture.

      • I just meant that alarmists will probably see it as nothing more than someone trying to be funny, when in fact I believe it is a very real and growing perception – accurate too.

      • Latimer Alder

        Has anyone ever come across an alarmist who demonstrates a sense of humour? I thought that the two were mutually exclusive. Surely they are obliged to see everything in the world through the prism of imminent thermageddon. Far too serious a matter for joking about.

        I imagine the IPCC review meetings are a real bundle of fun and helpless hilarity…….especially the final wrap up with Chucklechops Pachauri laughing all the way to his bank.

      • Q: How many alarmists does it take to screw in a light bulb?
        A: That’s not funny!

      • I’ve been told that alarmists don’t use light bulbs because they’re reducing their carbon footprint. :-)

      • Latimer Alder

        Alarmists don’t do screwing.

        They are too busy saving the world and berating the rest of humanity for its sins and lack of love for gaia to have any spare time for recreation/procreation.

      • Latimer – have you actually ever met an alarmist? They’re really pretty much like normal human beings (which is to say, they vary quite widely in their appetites and character traits).

      • Latimer Alder

        @paul

        Yes. I have met many alarmists. The mildly green-tinted ones are quite tolerable. If they want to worry about their carbon footprint that’s fine with me.

        It is the single-minded committed earthsavers that I lampoon. I have the misfortune to know one very well. In olden times he’d have been a hell and brimstone ‘you are all evil sinners. Change your ways or you will die’ sort of preacher..full of hatred and bile.

        Come to think of it he is a hell and brimstone ‘you are all evil sinners. Change your ways or you will die’ sort of preacher..full of hatred and bile.

        Nothing changes..just the god of their imagination. It is the hatred and bile that is constant through the years……..

      • That is actually funny. We have a saying in my country – ‘the only reason angels can fly is that they take themselves so lightly.’

        I can see both sides. My core position is that changing the composition of the atmosphere is, ipso facto, probably not the most prudent use of human ingenuity. On the other hand, carbon taxes and the like just seem so hideously malformed. The Hunchback of Notre Dame of economics. Ludicrously ringing the bell but succeeding in nothing else.

        Have you read the Hartwell post? http://judithcurry.com/2011/04/10/hartwell-paper-game-changer/ There are approaches that involve multiple paths and multiple objectives – what I call the MDGs-plus approach.

      • Paul –
        On reflection, how did those get into that light bulb to do whatever it was that they were doing? :-)

      • A: One, to stick his finger in the socket and test whether or not the power is on, and two to carry him to the hospital when he finds out it was.

      • I too used to think that warmism and wit could not coexist, until the Richard Curtis splatter video thing – I haven’t yet quite recovered.

    • GaryM

      and best of all

      You don’t have to admit you are wrong about anything…ever.

      They do admit their wrongs and doubts. Unfortunately, they do it in private:

      I believe that the recent warmth was probably matched about 1000 years ago. I do not believe that global mean annual temperatures have simply cooled progressively over thousands of years as Mike appears to and I contend that that there is strong evidence for major changes in climate over the Holocene (not Milankovich) that require explanation and that could represent part of the current or future background variability of our climate.
      http://bit.ly/hviRVE

      The verification period, the biggest “miss” was an apparently very warm year in the late 19th century that we did not get right at all. This makes criticisms of the “antis” difficult to respond to (they have not yet risen to this level of sophistication, but they are “on the scent”).
      http://bit.ly/ggpyM1

    • GaryM
      “You don’t have to admit you are wrong about anything…ever”.

      Have you been speaking to my wife?

      • Latimer Alder

        Your wife must be in secret contact with my g/f.

        Unless you live in London, this could be the only actual evidence for the otherwise bonkers theory of teleconnections.

      • Latimer,
        Hopefully, your g/f either
        1. Doesn’t read these posts
        2. Has a wonderful sense of humor, or
        3. Does read these posts and is not your only g/f :)

        My sweetie and I just had our 41st anniversary so by now she couldn’t care less about what I write :)

      • Latimer Alder

        @Kent

        At least one of the above is true :-)

        Happy Anniversary

  26. The question of constant relative humidity in the tropospheric column is crucial to the CAGW debate and has not, in my opinion, received adequate treatment. Quoting from the abstract of Paltridge et al (NCEP reanalysis data study, 2009):

    The upper-level negative trends in q are inconsistent with climate-model calculations and are largely (but not completely) inconsistent with satellite data. Water vapor feedback in climate models is positive mainly because of their roughly constant relative humidity (i.e., increasing q) in the mid-to-upper troposphere as the planet warms. Negative trends in q as found in the NCEP data would imply that long-term water vapor feedback is negative – that it would reduce rather than amplify the response of the climate system to external forcing such as that from increasing atmospheric CO2.
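    For readers keeping score on what the sign of the water vapour feedback actually does, the standard linear feedback bookkeeping makes it concrete (the numbers below are illustrative textbook-style values, not taken from Paltridge et al.):

```python
# Linear feedback bookkeeping: equilibrium warming = forcing / net restoring
# strength, where the restoring strength is the Planck response minus the
# sum of the feedbacks. All numbers here are illustrative.
F = 3.7       # W m^-2, canonical CO2-doubling forcing
planck = 3.2  # W m^-2 K^-1, no-feedback (Planck) restoring strength

for wv_feedback in (+1.6, 0.0, -0.5):  # positive / zero / negative water vapor
    warming = F / (planck - wv_feedback)
    print(wv_feedback, round(warming, 2))
```

    A positive water vapour term amplifies the warming well above the ~1.2 K no-feedback response; a negative one, as the NCEP trends would imply, damps it below that.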

    • Except this study found that specific humidity is increasing, confirming a positive water vapor feedback, although it also found that relative humidity was decreasing.

      http://www.nasa.gov/vision/earth/lookingatearth/warmer_humidity.html

      “The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity.”

      If relative humidity goes down, then cloud amount goes down, which further reinforces the positive water vapor feedback.

      • Of course there are inconsistencies between the radiosonde measurements and satellites which need clearing up; the following comment from your reference is pertinent:

        Using the UARS data to actually quantify both specific humidity and relative humidity, the researchers found, while water vapor does increase with temperature in the upper troposphere, the feedback effect is not as strong as models have predicted. “The increases in water vapor with warmer temperatures are not large enough to maintain a constant relative humidity,” Minschwaner said. These new findings will be useful for testing and improving global climate models.

        Something worth exploring, no?

      • True: if models assume that relative humidity stays constant as specific humidity rises, and the data show that relative humidity actually decreases, then models could be improved by taking that into consideration when developing new models.

  27. Martha, isn’t ‘fast moving settled climate science’ an oxymoron?

    • Rob – it is certainly a poorly constructed sentence (of which I am frequently guilty) but if it means “dynamic and universally accepted” it need not be an oxymoron. I’ll give Martha the benefit of the doubt, but just this time!

  28. steven mosher

    Another mistake you should flag

    “So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?”

    Sensitivity is not assumed.

    • Steven Mosher

      You are stepping onto a slippery slope with your statement that “sensitivity is not assumed”.

      The IPCC estimate of 2xCO2 climate sensitivity is a result of GCM simulations, which are based on many inputs, which themselves are based on hypothetical deliberations and interpretations of selected paleo-climate data, all of which entails a lot of “assuming” along the way.

      And we all know how to spell ASS-U-ME.

      Max

      • Actually, sensitivity is not “assumed” in global numerical climate models. Sensitivity is a net result of the integration of the model equations. Of course you can “tune” the model parameterizations and end up with a different sensitivity (e.g. change the cloud microphysical parameterizations). My previous post on what we can learn from climate models describes how/why climate models are tuned:
        http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/

      • You’re assuming a complete understanding of the feedback mechanism though, which practically amounts to the same thing.

      • Actually, no. Climate models are a black box when it comes to producing the sensitivity, since it is impossible to trace things through owing to nonlinearities and complexity. Say you tweak the microphysics parameterization: it will probably change the sensitivity, but you can’t predict in advance the sign of the change.

      • batheswithwhales

        And what exactly do you mean by a “black box” in this context?

        And in what way do you think that climate models “produce” sensitivity?

        Please explain.

      • steven mosher

        Let me make it simple: a two-variable model.

        Inputs:
        1. TSI forcing
        2. CO2 ppm

        You spin the model up and run it until you achieve equilibrium.
        temperature = 14C

        Doubling test:

        Inputs:
        1. TSI
        2. 2x CO2

        You run the model until it achieves equilibrium.

        Output: temp = 17C

        What is the equilibrium sensitivity response to doubling CO2?

        17-14 = 3C. 3C for doubling. It’s an output, not an assumption.

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html

        Climate sensitivity is a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration (see Box 10.2). Spread in model climate sensitivity is a major factor contributing to the range in projections of future climate changes (see Chapter 10) along with uncertainties in future emission scenarios and rates of oceanic heat uptake.

        Definition

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html
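        Mosher’s doubling test can be sketched end to end with a zero-dimensional energy balance model (all parameter values below are illustrative, and the heat capacity is folded into the time step): the sensitivity falls out of running the model to equilibrium twice and differencing; it is nowhere supplied as an input.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(forcing=0.0, emissivity=0.612, absorbed_solar=239.4,
                     t0=288.0, dt=0.01, steps=20000):
    """Integrate dT/dt = absorbed solar + forcing - outgoing longwave
    until this toy climate settles at equilibrium."""
    T = t0
    for _ in range(steps):
        T += dt * (absorbed_solar + forcing - emissivity * SIGMA * T**4)
    return T

F_2XCO2 = 5.35 * math.log(2.0)  # ≈ 3.7 W m^-2, canonical CO2 doubling forcing

T_control = equilibrium_temp()                  # spin-up / control run
T_doubled = equilibrium_temp(forcing=F_2XCO2)   # doubling run
sensitivity = T_doubled - T_control             # diagnosed output, ≈ 1.1 C here
```

        With no feedbacks this lands near the familiar ~1.1 C no-feedback value; adding parameterized feedbacks would change the diagnosed number, which is why tuning alters sensitivity only indirectly.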

      • batheswithwhales

        So climate sensitivity is not assumed, because it is a result of the integration of equations?

        Excuse me, but:

        The climate forcing for each forcing variable has to be assumed, as the forcing capabilities and interactions are not known.

        Are you saying that these are known? Or that they can be distilled from the real-life climate using different sorts of equations?

        Please be clear.

      • Forcing is not feedback. Forcing is something that is applied externally in the models (e.g. CO2 concentration, incoming solar flux). The model and its equations respond to these forcings, and the model response to these forcing may be amplified or damped by feedbacks from internal processes in the models.

      • Applied externally to what? This is where the confusion begins. For example, the CO2 is inside the atmosphere. The ocean is external. Chaotic systems oscillate under constant forcing solely due to feedbacks, so no external forcing causes the changes. And so it goes, the distinction does not work.

      • It’s about how you define the system. The sun is external to the earth and its atmosphere, but internal if you define the system as the solar system. CO2 is external to the system if it does not include chemistry; once you start including interactive biogeochemical cycles, then CO2 is not external to the system. Same for aerosols. So the distinction does work provided that you are clear about how you are defining your system.

      • batheswithwhales

        I am aware that forcings are not the same as feedback.

        But your point was that climate sensitivity is not assumed in the models.

        Both forcings and feedback are part of what we call climate sensitivity.

        So as long as the different models and the different model runs have different values for each forcing component and each feedback component, how can you claim that “sensitivity is not assumed” when it is composed of two variables, both assumed?

      • Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing. So if the forcing doubles then the temperature response doubles, but the sensitivity remains the same. So by changing the forcing, you do not change the sensitivity but rather the response.

      • 8.6.2 Interpreting the Range of Climate Sensitivity Estimates Among General Circulation Models

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html

        Despite its limitations, however, the climate sensitivity remains a useful concept because many aspects of a climate model scale well with global average temperature (although not necessarily across models), because the global mean temperature of the Earth is fairly well measured, and because it provides a simple way to quantify and compare the climate response simulated by different models to a specified perturbation. By focusing on the global scale, climate sensitivity can also help separate the climate response from regional variability.

        I guess in scientific jargon an assumption is called a “useful concept” :)

      • batheswithwhales

        To make it simpler:

        How can “climate sensitivity” not be an assumption of the model?

        As long as “climate sensitivity” consists only of two factors:

        1. Climate forcings.

        2. Climate feedback.

        If these two factors are assumptions of the model, how can climate sensitivity not be an assumption?

        Of course you could call it something else than an assumption, but that would only be wordplay, not worthy of the last hours of the Easter holiday.

        The fact is that climate sensitivity is an assumption, since it is a direct product of other assumptions.

      • Nope. If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C. So sensitivity is not an assumption in climate models.

  29. steven mosher

    “So how do our IPCC scientists deal with this? Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs? Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information? Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully? Do they look at possible solar influences instead of carbon dioxide? Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?

    Of course not. That would be silly. For there is a scientific consensus about the matter, and that should be that.”
    ##################
    I’m sorry Judith but this last paragraph contains a lot of silliness.
    Sentence by sentence:

    1. Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?
    Wrongly assumes that sensitivity is an assumption put into models. Further, the history of GCM development shows that the models (the theory) ARE revised to better suit the results. For example, it’s recognized that the models underpredicted the ice melt; nobody looks the other way on these matters.

    2. Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information?
    Yes they do. Didn’t we just lose a satellite on launch that was going to collect aerosol information? Haven’t people been working on creating a harmonized land use dataset (HYDE)?

    3. Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully?
    This question clearly shows a lack of understanding of complex model development. Does going back to basics include rediscovering basic physical laws? Rewriting the orbital mechanics? As for statistical models, the biggest issue is that statistical models are not comprehensive. A statistical model may by chance do a better job of predicting ice melt, but you cannot replace a comprehensive model that predicts temperature, pressure, winds, currents, surface sea salt, ice cover, snow cover, precipitation (the climate) with a narrow statistical model that “explains” a single feature. That is not the scientific method.

    4. Do they look at possible solar influences instead of carbon dioxide?

    False dilemma. It is not either/or; it is not solar INSTEAD OF CO2. The right question is: do they look at possible solar influences IN ADDITION TO CO2? The answer is yes.

    5. Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?

    Yes, they allow the likelihood. It’s a LOW likelihood, primarily because these authors (for the most part) are not really offering ways forward to improve our understanding. Take Scafetta. What he would need to do is provide a physical theory that one could incorporate into a GCM. The inputs to Scafetta’s model are positions; the output is the average temperature of the globe. Problem? The average temperature of the globe isn’t a GCM input.

    There is a problem with ending a piece with questions that you think are rhetorical, when they are not.

    • steven,
      If I am reading you correctly, are you stating that you believe the ‘consensus’ is properly recognizing the sensitivity?

      • steven mosher

        There are three methods, roughly, for estimating sensitivity:
        1. Paleo studies (see Hansen).
        2. Empirical studies (see Schwartz).
        3. OUTPUTS from GCMs.

        Each of those methods gives a different range of values with substantial overlap. The point I am making is that sensitivity is NOT an assumption of the models. In fact, the models are probably the least certain evidence of the range, as Hansen notes. For a cartoon version of how sensitivity is an output of the GCM: you set your forcings (TSI, aerosols, GHGs) to some initial values. You DOUBLE CO2. You let the climate model respond (takes decades of model time) and it reaches an equilibrium at some point.
        You note the temp when you started: X. You note the temp when the system reaches equilibrium: X+2.7C. That’s the sensitivity to doubling CO2. Not an INPUT, not an assumption. It’s an output. ModelE from GISS is 2.7.

        As Hansen notes, the sensitivity output by models is the most uncertain measure, because we know that models don’t get all the processes represented.
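        The “cartoon version” above can be sketched numerically with a toy zero-dimensional energy-balance model (my own illustration, not any GCM's actual code; the feedback parameter and heat capacity are made-up values): the forcing and the feedback parameter are the inputs, and the sensitivity is diagnosed from the run afterwards.

```python
# Toy zero-dimensional energy-balance model (illustrative only, not a GCM).
# Sensitivity is diagnosed from the run, not supplied as an input.

def equilibrium_warming(forcing_wm2, feedback_param, heat_capacity=8.0,
                        dt_years=0.1, tol=1e-6):
    """Integrate dT/dt = (F - lambda*T)/C until near equilibrium; return T."""
    temp_anomaly = 0.0
    while True:
        imbalance = forcing_wm2 - feedback_param * temp_anomaly
        temp_anomaly += imbalance / heat_capacity * dt_years
        if abs(imbalance) < tol:
            return temp_anomaly

# Doubling CO2 adds roughly 3.7 W/m^2 of forcing (standard estimate).
forcing_2xco2 = 3.7
feedback = 1.23  # W/m^2 per K -- a tunable parameter, NOT the sensitivity

# The sensitivity is read off AFTER the run reaches equilibrium.
sensitivity = equilibrium_warming(forcing_2xco2, feedback)
print(round(sensitivity, 2))
```

        Tweaking the feedback parameter changes the diagnosed sensitivity, which is why landing on a particular value like 3C requires many tuning experiments rather than a single assumed input.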

    • “1. Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?
      Wrongly assumes that sensitivity is an assumption put into models.”

      Well, as long as the specific climate sensitivity to different forcing factors is not known, and the interactions of these forcings are not known, how else are these to be incorporated into climate models, if not through assumptions about their forcing capability?

      Are you suggesting that the forcing capability of all climate forcings is already known and therefore treated as a constant in the models, rather than as an assumption?

      • steven mosher

        “Well, as long as the specific climate sensitivity to different forcing factors is not known, and the interactions of these forcings are not known, how else are these to be incorporated into climate models, if not through assumptions about their forcing capability?

        Are you suggesting that the forcing capability of all climate forcings is already known and therefore treated as a constant in the models, rather than as an assumption?”

        This is wrong. Let’s just take a simple forcing: TSI. Total solar irradiance is a forcing. It is expressed in watts. Those watts drive the physics models, which are also in watts. The assumption is not the sensitivity, but rather this:
        1. We know past TSI figures.
        2. The effect of the sun is limited to this.
        3. We can predict future forcing.

        With other forcings, say CO2, the issue is modelling how changes in CO2 change the transmission of radiation. With aerosols, for example, you have a bunch of problems: what were the levels? Do they reflect or absorb? How much? Etc.
        The forcings are not constants; you can see what the forcing inputs look like with a quick web search (GIYF).

        In general, skeptics don’t get that the models are not heavily relied on as the best evidence of sensitivity. But they are useful for executing what-if scenarios. Since the sensitivities of the models overlap with other, non-model-based estimates of sensitivity, their projections can be used as a guide for policy.

        Think of it this way. The best evidence for sensitivity is the paleo record and empirical studies of observation records. Those two give you a wide range. Then you model the climate. You check its output and calculate a sensitivity. The sensitivity of the models is consistent with the other methods; therefore, it’s reasonable to use it in what-if studies.
        If you don’t like models, then use Hansen’s response function: a much cruder tool, since it only outputs temperature.
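        The point that forcings enter the models as time series in W/m², rather than as a built-in sensitivity, can be illustrated with the widely used simplified expression for CO2 forcing, ΔF = 5.35 ln(C/C0), from the Myhre et al. (1998) fit. The sample concentrations below are illustrative round numbers, not an official forcing dataset:

```python
import math

def co2_forcing(co2_ppm, co2_ref_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(co2_ppm / co2_ref_ppm)

# The forcing input is a time series, not a constant (illustrative values):
for year, ppm in [(1850, 285), (1960, 317), (2010, 390)]:
    print(year, round(co2_forcing(ppm), 2))
```

        Note that a doubling (560 ppm against a 280 ppm baseline) gives 5.35 ln 2 ≈ 3.7 W/m², the standard figure quoted for 2xCO2 experiments.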

    • Thanks, well put.

  30. I think the heart of this matter (in the article) is that Big Science often undermines the scientific method (the LHC, NASA, etc.).

    The Wikipedia entry on ‘Big Science’ is interesting.

    here

    Also, mathematicians, systems engineers and computer scientists know that some problems just have too many variables to solve accurately. Just look at weather prediction: a 30% chance of rain is a statistic, not a prediction; but the weather scientist who says the temperature will reach 15 degrees C the next day, when it only reaches 5 degrees, is common in my area, and that is a prediction.

  31. Judith Curry

    I liked your comments on the “Science without Method” article from the Australian Quadrant.

    The discussion of “real” science versus the “strange mix of empirical and virtual reality reasoning that characterises contemporary climate change research” is interesting.

    But the quote below by Bohm on “post-modern” science bothers me.

    If science is carried out with an amoral attitude, the world will ultimately respond to science in a destructive way. Postmodern science must therefore overcome the separation between truth and virtue, value and fact, ethics and practical necessity

    This introduces a “moral” component to science. Morality is, by definition, subjective rather than absolute. Depending on whom you ask, it can be based on religious dogma, philosophical deliberations, socio-political ideology or what-have-you.

    “Medical science” is cited as an example where the principles of “post-modern” science could apply.

    This is based on an intrinsic fallacy. Medicine, as practiced, is not a “science”, it is an “art”. It starts with the Hippocratic oath, which is itself a moralistic credo (to protect human life above all). It uses “science” (as do many non-scientific fields of endeavor, such as civil engineering, politics, sociology, economics, environmentalism, etc.), but it is not a science per se.

    IMO we do not need a bunch of philosophers, sociologists, moralists or anyone else redefining the very straightforward scientific method. Science should be kept pure (as William Newman stated on the “Polyclimate” thread).

    Worst of all, we certainly do not need self-appointed moral guardians warning us that it is sinful to emit greenhouse gases since their climate models tell them this could lead to the destruction of humanity and our environment, and even if their models might be wrong, we should still stop emitting GHGs for moral reasons “just in case” because the result could be so horribly disastrous (if their models happened to be right).

    The article’s historical treatise on the development of physics was nice, but I’ll certainly not be foolhardy enough to argue with you about the significance of “Newton’s second law, the ideal gas law, the first and second laws of thermodynamics, Planck’s law, etc.” all of which “provide the foundation for reasoning about the Earth’s energy budget and the transport of heat therein”.

    As to the differentiation between the “natural” and the “enhanced” greenhouse effect, I would agree with you that the author got that wrong. (Since the “enhanced” GHE is what everyone is talking about, I think he can probably be excused.)

    And the sentence:

    The one modern, definitive experiment, the search for the signature of the green house effect has failed totally.

    Is incorrect (as you stated) and should probably have been reworded to say:

    the definitive empirical evidence to support the notion of alarming greenhouse warming resulting from human GH emissions has not been found to date.

    OK. The “missing hotspot” argument is a valid observation even if it may not be conclusive as far as the validity of the GH theory versus the actual physical observations is concerned. (I would put this into the “unexplained” category, like the observation that the tropospheric warming rate as measured by satellites has been slower than the warming as measured at the surface, where the GH theory tells us the opposite should have been true.)

    On the point about the “human signal” or GH warming effect in the GMTA record either being “lost in the noise” or being “very likely”, I’d agree with you that this is a wash in actual fact (i.e. one statement is as questionable as the other).

    However, I’d say that there is one major difference here: the “very likely” claim comes from the global “gold-standard” authority on climate science today, while the other comes from an op-ed in an Australian journal.

    Just saying that they are both probably wrong does not help the credibility of climate science as embodied by IPCC. It simply confirms (what you and others have already stated) that the IPCC “very likely” claim is based on false overconfidence.

    But, all in all, I think the Quadrant op-ed was interesting and your comments were spot on.

    Thanks for sharing this with us.

    Max

    • “This introduces a “moral” component to science. Morality is, by definition, subjective rather than absolute.”

      The second sentence should read “Morality is, by ONE definition, subjective rather than absolute.”

      Modern society attempts to define morality as each person’s subjective judgment. That concept is what gives rise to multi-culturalism, legal realism and other progressive dogma. Which is where many of our current societal problems come from.

      The better definition of morality is: “a doctrine or system of moral conduct.” Whether originating from a divine source, or a code developing over time through the experience of trial and error, an objective system of moral conduct is essential to any human undertaking. See for instance: “We hold these truths to be self evident, that all men are created equal.”

      “IMO we do not need a bunch of philosophers, sociologists, moralists or anyone else redefining the very straightforward scientific method.” No one I know of claims morality “redefines the straightforward scientific method.” What objective morality defines is what may and may not be done within such a method. At the extremes, Joseph Mengele may not use humans as guinea pigs in his experiments, and the U.S. government may not let syphilis go untreated in black males so they can gather data on the course of the disease.

      At a more mundane (but also important) level, what good is science of any kind if it is not done in accord with the objective moral requirement of honesty? How about humility? What good is science if you can’t trust the results? Imagine how today’s debate might be different if these archaic notions of morality had actually been practiced throughout the scientific community. Whenever anyone says morality is irrelevant, what they really mean is that other people’s concept of morality is irrelevant.

      Science and morality are not only not antithetical, they are symbiotic. It might do some well to remember that the first Western universities, the University of Bologna and the University of Paris, were founded by the Catholic Church. Many others, including Oxford, grew up around monastic communities. There are apparently currently 244 universities and colleges in the U.S. founded by the Catholic Church (a few of them are even still Catholic), and 219 Christian colleges or universities. There are something like 7000 Catholic elementary and high schools in the U.S. as well. In many cities, they are the best chance some children have of being competently educated in anything, let alone building the foundations of a study of science.

      The attempts to divorce science from morality, and vice versa in some religions, are nothing new. But it is a fool’s errand. Religion uninformed by science is naive and ignorant of the real world. Science uninformed by morality is at best unreliable, and at worst tragic.

      • Gary

        You brought up an example of “morality” at work in “science” with Mengele.

        The “morality” was defined by the political movement of the time in the country of question.

        Morality is not absolute, unfortunately, Gary. It is human, by definition. It is very subjective. And it can become very political.

        Would “bending” the scientific results a bit “for a good cause” be more “morally” acceptable than doing so “for a selfish motive”? If so, who would define what “a good cause” would be?

        Honesty is paramount in science. But this is nothing new. It is a given in the scientific method and does not need to be introduced by way of “morality”.

        Sorry, Gary. No sale.

        Max

        Or is it better to stick with the “scientific method”

      • As usual, we seem to be getting tangled up in semantics. I agree that talking about moral science (or more specifically talking disparagingly about amoral science) is dangerous territory. But I think some people are hearing “ethics” when they read “morality”. Ethics is something else. We need ethics. Morality is too slippery, and as you say can lead to “end justifies the means” that results in abandonment of ethics. I submit that that’s one of the primary causes of the climate science train wreck.

        Science needs to be performed with ethics. But the pursuit of the facts has no moral dimension. The facts are what they are.

      • Now this is pure semantics. Ethics and morality are not somehow different concepts.

        ethic, noun, plural but sing or plural in constr : the discipline dealing with what is good and bad and with moral duty and obligation
        http://www.merriam-webster.com/dictionary/ethics

        ethics: –plural noun 1. (used with a singular or plural verb) a system of moral principles
        http://dictionary.reference.com/browse/ethics

        I know it is so uncool to believe in “morality,” with its embarrassing past associations with religion and God, but let’s leave the language as it is please.

        “Right and wrong” is not a scientific concept. The concept urged by Manacker and ChE here is actually what is referred to as situational ethics. And it has not served mankind well.

        situational ethics: a system of ethics in which moral judgments are thought to depend on the context in which they are to be made, rather than on general moral principles

        I can assure you Michael Mann does not consider his hockey stick to have been dishonest. He defines “honesty” as portraying the data in a way most likely to lead its viewers to reach a correct result. Like Dan Rather, who assured his viewers that his reports based on falsified documents were accurate even though those documents were admittedly forged. Or the ABC reporter who attached pyrotechnics to the bottom of an automobile to show how dangerous its gas tank was.

        Those who reject the very notion of an objective morality have no rational reason to judge the behaviors of others as “wrong.” Yet they all do. Which is why in my original comment, I wrote “Whenever anyone says morality is irrelevant, what they really mean is that other people’s concept of morality is irrelevant.”

      • “Honesty is paramount in science… It is a given in the scientific method.” Why? Sez who? If there is no objective morality, each scientist can decide for himself whether to include adverse data or not in his graphs. Who are you to judge? I can rationally criticize “hiding the decline” as being objectively dishonest. But I have made a value (i.e., moral) judgment in doing so. Yet I have not read you as defending, or being neutral on, dishonesty in science. How do you explain it?

        Mengele did indeed represent the morality “defined by the political movement of the time in the country of question.” But those who practiced that politically defined morality and were caught were tried as war criminals and hanged. It seems the rest of us did not agree.

      • Gary

        You are still missing the point here.

        No one is for “war criminals” who misuse science.

        No one is for “immoral” science.

        No one is for “dishonest” science.

        Scientists “fudging” the data (for whatever justification) is “dishonest” (and could, therefore, be described as “immoral”).

        Scientists withholding data from FoI requests is illegal (and possibly “immoral”, as well).

        But the “scientific method” itself does not have any room for dishonesty or morality. It is absolute.

        Introducing “morality” opens the door for introducing “religion” or “politics” or (in the worst case) introducing the warped morality of a Mengele.

        Keep it out.

        Max

      • “Morality is not absolute, unfortunately, Gary. It is human, by definition. It is very subjective.”

        If morality is not absolute, ie. objective, then you have no rational basis for judging the behavior of others.

        You are simply wrong when you claim that “No one is for ‘war criminals’ who misuse science. No one is for ‘immoral’ science.
        No one is for ‘dishonest’ science.” My examples were of those who were “for” just such science. If you really believe there is no such thing as objective morality, then “immoral science” is an oxymoron, and that part of your comment a non sequitur.

        How can you say Mengele’s morality is “warped” if there is no objective standard against which to judge it? Mengele is a classic example of keeping objective morality out of “science.”

        And this is not a semantic point. I can assure you that your belief that morality and ethics have no place in science is shared by many of the very people you criticize for “dishonesty.”

        To be precise, you make an objective moral judgment when you write that “the ‘scientific method’ itself does not have any room for dishonesty or morality. It is absolute.”

        You still haven’t answered my question. Who says the scientific method does not have room for dishonesty? Where does this objective standard come from, let alone such an “absolute” standard?

      • Gary M

        You ask:

        Who says the scientific method does not have room for dishonesty? Where does this objective standard come from, let alone such an “absolute” standard?

        We are drifting dangerously into abstract philosophy.

        That there can be “dishonesty” in the practical application of “science” became painfully obvious with the Climategate revelations, as well as the many exposés of IPCC exaggerations and outright fabrications in a (so-called) “scientific” summary report.

        That this is deplorable and should be discouraged is also obvious to me.

        Inasmuch as the guilty parties were misusing public funding or violating FoI requests, there should be some corrective action taken (if nothing more, simply cut off this funding).

        Does that make “climate science” per se “immoral”?

        No.

        According to Webster’s New Collegiate Dictionary, the definition of science is

        knowledge attained through study or practice

        knowledge covering general truths of the operation of general laws, esp. as obtained and tested through scientific method [and] concerned with the physical world

        The steps of the “scientific method” have been listed as follows:

        - Observation/Research
        - Hypothesis
        - Prediction
        - Experimentation
        - Conclusion

        IMO there is no place for “morality” as such in this process. Doing it honestly without “bending” the results to “fit” the hypothesis must be a given or “condicio sine qua non”.

        It is like counting ballots.

        Either they are counted honestly or there is voter fraud. Skewing the count in order to favor the “better qualified” or “nobler” candidate is just as fraudulent as skewing them for the other candidate.

        The concept of “post-modern” science introduces “morality” into the scientific process itself, as a separate consideration. IOW, it allows for the “bending” of the scientific principles if the basis for doing so is “morally correct”.

        This would make it perfectly OK for climate scientists to “bend” the results of their studies in order to stop humanity from self-destruction through GHG emissions.

        I think this would be a dangerous bastardization and politicization of science and the scientific method.

        But let’s wait until Judith gets a separate thread started on this to get some other viewpoints.

        Max

        I have not said science is immoral. You have claimed that it is amoral, while still maintaining that “Doing it honestly without ‘bending’ the results to ‘fit’ the hypothesis must be a given or ‘condicio sine qua non’.” My point from the start was that the scientific process is not moral or immoral in and of itself, but must be practiced informed by morality.

        Yet again you do not address WHY honesty is a sine qua non of science, if science is amoral. You claim that it is obvious that the dishonesty involved in climategate “is deplorable and should be discouraged.” A sentiment with which I agree, and which confirms my point.

        My original comment was not that “science must be moral,” it was as follows: “No one I know of claims morality ‘redefines the straightforward scientific method.’ What objective morality defines is what may and may not be done within such a method.” How do you “deplore” anything without making a moral judgment?

        Your description of post modern science reflects your desire to view morality as merely subjective. The “bending” you fear is an aspect of progressivism’s “the end justifies the means” situational ethic. It is not a product of objective morality. What you fear is in fact immoral behavior conducted under the guise of morality. But that is no logical basis for claiming that science should be done without regard for morality.

        We aren’t “drifting” into abstract philosophy, we plunged right in when you wrote “Morality is, by definition, subjective rather than absolute.” My criticism of that still stands, and the rest of my comments have just been meant to show that in practice you hold scientists to an objective moral standard, honesty, yourself. You, not I, read the moral imperative of honesty into the scientific method itself, which seems a fairly objective standard to me.

      • Gary M

        We are going around in circles.

        Following the “scientific method” rigorously and honestly in science is a “condicio sine qua non”, which should not even need to be mentioned.

        Fabricating, eliminating, “cherry-picking” or ignoring data points would all be violations of the “rules of the game”, as would hiding data sources, stealing others’ work without citing references, etc.

        If you call this a “moral” constraint on how the process is physically applied, so be it. I would just call it the established “rules of the game”.

        I just do not believe that “morality” should be injected into the process itself (as is being suggested for “post-modern” science) for the reasons I pointed out above.

        But we have, indeed, beaten this dog to death and should wait until Judith opens a specific thread on “morality as a part of the scientific process”.

        Max

      • Ah, I see you’re an “I need to have the last word” kinda guy. Very well, I will let your “established rules for the game” stand in place of an “objective morality,” though I am at a loss to see a difference. Oops, sorry, shoulda waited til the next thread to write that, as you have twice suggested (at the end of rather long comments)….

  32. I don’t know how anyone can think genuine science is conducted when it is impossible to audit or replicate any work. Steve Mc is being stonewalled again. Not a surprise. The surprise is that some who pretend to be scientists aren’t bothered by the stonewall. Genuine scientists would be appalled.

    • In reality, Stanley, real scientists are appalled by the popularity of amateur, industry-driven drivel from Stephen McIntyre. :-(

      • Wishful thinking, Martha :)

      • Latimer Alder

        Does it ever occur to ‘real scientists’ to turn their supposed Giant Brains to wonder why Steve McI’s work is so popular?

        Could it be because he lays out his working for all to see? Is pretty polite but never a patsy? That he likes to really shake a problem until he understands it? That he doesn’t give up? Or just that he gets right up the noses of Real Climatologists? And frightens the s… out of them that he will find out all their little shortcuts and dodgy dealings..and then publish them for all to see.

        Maybe it’s because he threatens their insulated, closed little self-congratulatory world of pal-review and patronage and of deference and nepotism?

        Or maybe they just don’t think that anyone who has grubbied their hands outside academe to make a living should be allowed to touch the hems of their robes……..’He was in trade my dears’ ……said the appalled Vice Chancellor before fainting dead away into the vintage port. Only a swift application of another round of caviar could revive him before the reading of the sacred IPCC texts……..

      • batheswithwhales

        Stephen McIntyre is probably the most all-round respected climate scientist in the world today.

        He is not a climate scientist, you say?

        Why bother to go through the process of gaining a PhD, when he knows he already outperforms anyone who could ever give him such a degree?

        Don’t get hung up on degrees. They are only an “ok” from someone.

        It is fully possible to learn as much as any PhD on your own, and more, without any “ok” from anyone, or from any institution.

        Even though they would have you think otherwise….

      • Latimer Alder

        Which rather begs the question of what qualifications does a Real 100% You Bet Your Ass Yes Sirree Bob Accept No Substitute Qualified Climate Scientist have? And how does she get them?

        Medical doctors are easy: they have degrees in medicine. Engineers are easy: they have Chartered Engineer status and the like.

        But Climatologists are harder. No university anywhere seems to offer a degree in it. There isn’t enough real content for it to be anything more than a sideshow…

        So if I had the great misfortune to be accosted in the street by somebody claiming to be a Climate Scientist, how would I know whether he was the real McCoy? Should I slip him a quid and send him on his way with a cheery admonition not to spend it all on hockey sticks? Or cut the impostor dead with the withering phrase: ‘you are not a real climatologist because you do not…xxxxxxxx…’

        What is xxxxxxxxx in this context?

  33. Speaking of science without method, I just got this ad for easy & simple climate simulation: “Real-time Climate simulation Made Easy” including free Webinars.
    http://hosted.verticalresponse.com/715701/bf6b7ba42e/TEST/TEST/

    Perhaps this is the science behind international climate policy.

    • Thanks for the interesting link, David.

      This suggests that the AGW crowd is now getting seriously worried.

      If the Climategate iceberg fully melts – and exposes the decades of misappropriated public funds beneath – the consequences for leaders of the Western political and scientific communities may be greater than the recent Japanese tsunami.

  34. Harold Pierce Jr

    RE: Preliminary Analysis of Temperature Data from China Lake, CA Falsifies the Enhanced AGW Hypothesis.

    Here are the results of the analysis of temperature data for June 21 for the sample range of 1950-2009:

    Mean Tmax +/- AD = 38 +/- 1 deg C
    Mean Tmin +/- AD = 20 +/- 0 deg C

    where AD = average deviation and resolution of thermometer = 1 deg C

    Temperature Data Source: The Weather Underground.

    For this analysis sunlight is constant over the sample interval of 1 day. Specific humidity is generally low and the sky is mostly cloud free. We do not know the actual change in concentration of CO2 over the sample range at this site. We can estimate that it will increase by ca 25% based on data from Mauna Loa, which is only valid for _highly-purified, bone-dry air_.

    I concluded from this preliminary analysis of the temperature data that increasing concentration of atmospheric CO2 causes no “warming” at this site.

    Note that N = 2 for this test of the enhanced AGW hypothesis. To complete this preliminary analysis I shall do this analysis for March 21, June 21 and December 21. A complete analysis would use every day of the temperature record for the sample range of 1950-2009.

    A major criticism of this method of analysis is that the resolution of the thermometer must be finer (e.g., 0.1 deg C) to detect any “warming” of the air at this site. However, such data are not usually available from US weather stations, especially for the early part of the record.
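    For what it’s worth, the “mean ± AD” statistic described above is straightforward to reproduce for any station series; here is a minimal sketch, using made-up Tmax values since the actual China Lake record is not reproduced in the comment:

```python
def mean_and_avg_deviation(temps):
    """Return (mean, average absolute deviation) of a temperature series."""
    n = len(temps)
    mean = sum(temps) / n
    avg_dev = sum(abs(t - mean) for t in temps) / n
    return mean, avg_dev

# Hypothetical June-21 Tmax values (deg C), one per year -- NOT real data.
tmax = [37, 39, 38, 38, 37, 39, 38]
mean, ad = mean_and_avg_deviation(tmax)
print(f"Mean Tmax +/- AD = {round(mean)} +/- {round(ad)} deg C")
# prints: Mean Tmax +/- AD = 38 +/- 1 deg C
```

    Rounding both figures to whole degrees, as the thermometer resolution forces, is exactly what limits the method: any trend smaller than about 1 deg C over the sample range disappears into the rounding.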

    Comments? Take your best shots at “Old Weird Harold”!

    BTW Does anybody remember “Old Weird Harold”?

  35. Dr. C – it seems like this subtopic of morality, immorality, amorality, and ethics in science stirred up a lot of discussion. Perhaps this could be rolled into another thread in the future. Some excerpts from Feynman’s famous “cargo cult” lecture [ http://www.lhup.edu/~DSIMANEK/cargocul.htm ] would tie in nicely. He used the term “integrity”, but in this context, it’s fairly close to ethics, particularly this bit:

    I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all. That’s not giving scientific advice.

    • ChE

      I would second your suggestion that a separate thread on ethics in science would be a good thing.

      It would be important to me that one separate out the concept of introducing “morality” into science as suggested in “post-modern” science (to me, a dangerous aberration of the scientific method) as opposed to following the established scientific method rigorously and honestly (which, to my way of thinking should be a “given”).

      Max

      • I agree this would be a good topic. I will put it on my list and keep on the lookout for some good material.

      • The concept of introducing “morality” into science as suggested in “post-modern” science is like introducing morality into algebra so that a mother could divide two loaves of bread among her three hungry children, one each (making 2 divided by 3 equal to 1).

        In short, post-modern science is a corruption of science. Science deals only with objective truth.

        Here is what Feynman said about science:

        The principle of science, the definition, almost is the following: The test of knowledge is experiment. Experiment is the sole judge of scientific truth.

        From Six Easy Pieces by Richard P. Feynman

  36. batheswithwhales

    Judith, you said:

    “Nope. If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C. So sensitivity is not an assumption in climate models.”

    Ok. GCMs are supposed to be good representations of the global climate system.

    To represent this system in a model, one would have to assume different values for forcings and feedbacks, because these are not known. There might even be forcings and feedbacks which we are not aware of at the present moment.

    But you seem to treat sensitivity as an outcome of the model, as if there are inputs available beyond forcings and feedbacks, which would allow the model produce something called “sensitivity”.

    So where would this sensitivity come from, unless it is a direct product of the forcings and feedbacks already fed into the model?

    Which other factors are important enough, besides forcings and feedbacks, to produce something called “sensitivity”, far removed from any petty assumptions?

    A mathematical equation? Hm…..

    • Bathes – if you are trying to get someone to defend the current state of GCMs, it will be very difficult to find anyone who regards them as currently reliable for more than VERY short periods. Certainly Judith does not seem to see them as more than works in progress at this time.

      • batheswithwhales

        Of course.

        I am only trying to establish one thing:

        Climate models rely on inputs, and for the most part these inputs are assumptions of the behavior of the global climate system.

        These assumptions can be classified as forcings and feedbacks. Forcings being outside influences (sun, cosmic rays, orbital shift, etc), and feedbacks being sulphates, cloud cover, etc.

        But to my surprise, I discover that there is a belief that somehow, mysteriously, global climate models can add to this understanding through mathematics.

        Now mathematics always does what it is told. It never adds anything where it is not told to add.

        So how can it be claimed that climate sensitivity is not an assumption of the model, as long as all inputs are assumptions?

        In effect: what knowledge of the climate system does the model add, which was not already put in?

      • Bathes

        You wrote: “But to my surprise, I discover that there is a belief that somehow, mysteriously, global climate models can add to this understanding through mathematics.”
        I am not sure who has such a belief, but there is no “special mathematics” for climate models. As with other computer modeling, it is a case of understanding the processes of the variables that impact the climate and the weights to apply to each of those variables under different circumstances. In regard to GCMs, it appears we may not yet understand all the variables or the impact of each. We are a long way from reliable climate models at a regional level for more than short-term use.

      • batheswithwhales

        Exactly.

        As you say: it is a case of understanding the processes of the variables that impact the climate.

        This process involves the input of assumptions such as climate forcings and feedback. Since none of these are known, quantitatively or qualitatively, one has to make assumptions as to how each of these impact the climate system on their own, and through interaction with other forcings and feedbacks.

        These are the assumptions of a climate model, but these are also all the components of what we normally call climate sensitivity.

        So you feed all these assumptions into a model, and out comes what? Climate sensitivity?

        But that was exactly what was fed in!

        No. Climate sensitivity is what is fed in. This is the assumption. Not the outcome.

      • Are climate models invertible functions?

      • Ole Humlum has a section on climate models here – http://www.climate4you.com/

        Well worth several visits.

        The models are trained to match the observed temperatures, but each results in a different climate sensitivity for a doubling of CO2. A paradox? The individual models are themselves chaotic, and what we get is a single run from amongst many plausible formulations that is arbitrarily selected as being about right and graphed together by the IPCC as an ‘ensemble’.

        That this is almost universally accepted to be the case – Fred is a sorrowful exception – and that this exercise is still promoted as a meaningful exercise says something about something. Fair enough – explore the physics but don’t try to sell me the Sydney Harbour Bridge.

      • steven mosher

        feedbacks are not inputs either.

        You would do well to spend some time looking through GCM code.
        I will suggest modelE because the source is online and Gavin has done some nice work in the past few years cleaning it up for public viewing.
        It’s not that much code, but give yourself a few months. It will help if you know Fortran. Or you can look at the MIT GCM; they have nice documentation.

        A simple way to think about it is this. Imagine you build a very simple model: land, sea, ice. Each of those has a different albedo and a different area. You have one input, TSI in watts. The ice is represented by a model that takes in watts and calculates the change in ice area. Area changes the albedo due to ice; albedo changes the watts in. The “feedback” – more watts in = less ice = lower albedo due to ice area = more watts in – is not an input or assumption. The feedback is observed as a consequence of the physical models.
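
That thought experiment can be sketched in a few lines. All constants below are illustrative choices, not values from any real GCM; the point is that the amplification is nowhere written in as an input, it emerges from iterating the simple physical rules.

```python
# Toy ice-albedo loop: one input (TSI in watts), a rule converting absorbed
# watts to ice area, and an albedo that depends on the ice area. The
# feedback emerges from the loop; it is never entered as a parameter.
# All numbers are illustrative only.

TSI = 340.0          # the single "input", watts per square metre
ALBEDO_ICE = 0.6     # reflectivity of the ice-covered fraction
ALBEDO_OCEAN = 0.1   # reflectivity of the open-ocean fraction

def ice_fraction(absorbed_watts):
    # Hypothetical physical rule: more absorbed energy -> less ice.
    return min(1.0, max(0.0, 1.2 - absorbed_watts / 300.0))

ice = 0.5  # initial ice-covered fraction of the surface
for _ in range(50):
    albedo = ice * ALBEDO_ICE + (1.0 - ice) * ALBEDO_OCEAN
    absorbed = TSI * (1.0 - albedo)
    ice = ice_fraction(absorbed)   # the feedback closes the loop here

print(round(ice, 3), round(absorbed, 1))  # settles at an equilibrium
```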

        The assumptions in the model:
        1. the historical forcings are correct
        2. the physics models capture all relevant processes.

        Sanity checks for the model:
        1. Does it hindcast OK?
        2. Does its output for sensitivity match other, more reliable estimates?

        Yes and yes.

  37. >If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C.

    Wait a minute. Tamino told me the models aren’t curve-fitting, they represent everything we know about the physical world.

  38. If you want to force a global climate model to give you a climate sensitivity of, say, 3C, you will need to run many experiments tweaking a number of parameters until you land on 3C.

    “Tweaking a number of parameters” seems to me to come pretty close to “entering a number of assumptions”, but maybe I am missing something (not being a climate modeler).

    Calling a computer model run an “experiment” in the scientific sense of “experiment” also seems a bit of a stretch.

    To me a computer is nothing more than a very expensive and very fast version of the old slide-rule. What comes out depends entirely on what went in.

    So IPCC’s model-derived 2xCO2 climate sensitivity of 3C (on average) is the result of many “assumptions” and is, thus, itself an “assumption” rather than the result of a scientific “experiment”.

    Max

    • BTW, I have had this discussion with Gavin, who disagrees completely with me that the 2xCO2 CS of 3C (on average) is a model-derived “assumption”; he prefers the word “result”.

      But he agrees with the remark (I think from Judy) that it may take a few “tweaks” to get there.

      Max

    • The point is that you cannot prima facie make an assumption of 3C and enter that into the model. You can get that particular result from the model by varying a number of parameters, so this is not an assumption that goes into the model. But the model can be tuned to give that result. Further, note that very few models give exactly 3C: the 3C is more or less the average of a number of different models.

      • Latimer Alder

        I think you are being rather naive here. Anybody who works with any sort of model for long enough will get to know quite a bit about how it will react to input data. Especially the guys who wrote it.

        And – if it is something that is supposed to enhance our understanding of climate rather than just an unknowable magical black box – they will have played around with it a great deal. It wouldn’t take an Einstein to quickly come up with the various settings of your input data that produce the value for sensitivity that you want.

        ‘No!’ you may cry ‘That would be cooking the books’ you may assert ‘and no climatologist worth his salt would ever do such a thing. The professional reputation of anybody involved would be in tatters at even the merest hint of impropriety and they would never work again’

        And I might point to the flying pigs passing overhead and the pope’s public conversion to Hinduism.

        So yes – explicitly, sensitivity is not entered directly as a parameter. But implicitly is another matter. I guess that the models can be tweaked to give the right answer. As long as it’s two or more. Otherwise we all just yawn and go home. Bad for the career…

      • Sorry, in a complex nonlinear model, the model does not have simple linear responses to tweaking parameters. The issue here is to understand how complex nonlinear models work. If you didn’t catch my earlier posts the first time around, for starters see
        http://judithcurry.com/2010/10/03/what-can-we-learn-from-climate-models/
        http://judithcurry.com/2011/02/03/nonlinearities-feedbacks-and-critical-thresholds/

      • Dear Dr. Curry,

        Sorry, but nobody, not even the scientists in the climate field, understands how complex non-linear models work in real time, especially in an area like climate, with its high complexity. Hence my question is a fundamental one: what are climate scientists trying to do in dictating so-called settled science, and even asking for policies to be made on it, when they have no practical clue what they are doing in the first place?

      • Judy – The links you cite are very much worth revisiting for a better understanding of model construction.

        Some of the commentary above yours echoes one of the enduring Internet myths about global climate models – that they are “retuned” to make their output conform to observed trends after initial runs show a divergence. An alternative version of the same type of misconception is that they are tuned to yield a specified climate sensitivity, such as 3 C. Whether, as a combined effort, they could be made both to fit observations and also exhibit a climate sensitivity of 3 C is an interesting thought, since it would tell us something about the validity of the 3 C figure, but modellers will explain that no matter how much they would like to do that, they can’t. The best modellers can do is to create new models with parameters that cause the models to remain faithful to the observed starting climate (before applying a forcing such as rising CO2), and which incorporate data that appear to better simulate reality. An example would be attempts to more realistically model anthropogenic aerosols, based, for example, on the observational data on aerosol “dimming” reported by Martin Wild and others for the 1950s through 1980s. Whether the models have succeeded in getting aerosols right is still contentious, but they are constrained by the available data and cannot insert arbitrary values in order to yield desired trend projections.

        Because so many misconceptions persist, I wonder whether it wouldn’t be worthwhile to invite a modeller to post here for the narrow purpose of answering questions about global climate models. Gavin Schmidt would be one candidate, but you and he have not been on particularly cordial terms. Another possibility would be Andy Lacis, who has participated here in the past, and who is qualified to discuss model construction, and the virtues and deficiencies of current models.

      • To add to my earlier comment, there is a difference between the use of GCMs for hindcasting or forecasting projections based on a starting climate within recent decades and their use with paleoclimatologic data for estimating climate sensitivity. In the former case, once models are parametrized to a starting climate and then forced with an agent of interest (e.g., increasing CO2), the results must stand; the modeller does not reparametrize the model and discard the projections because they diverge from observations, but must accept them as indicative of model skill – there is no curve fitting involved. If additional data later yield a better match, they must be reported separately, with an explanation as to the greater accuracy of the new data.

        In contrast, paleoclimatologic applications often involve the testing of multiple different parametrizations to determine which best fit the recorded paleoclimatologic evidence. The climate sensitivity of that model is then used as one estimate of the true value of climate sensitivity, with a range of values ultimately derived from the application of this principle to multiple different models and paleoclimatologic datasets. Examples are found in AR4, Chapter 9. Other climate sensitivity estimates are based on more direct calculations rather than attempts to match sensitivity estimates to observed data, and are based on estimated forcings, feedbacks, and temperature change constrained by observational data (see e.g., AR4 Chapter 8).

        For aerosol forcing (see Steven’s cited link), some of the studies have resembled the paleoclimatologic applications in testing a multitude of different aerosol forcings to determine which created the best match with recorded temperature trends (but constrained by available aerosol data). However, they differ from the paleoclimatologic applications in that what was tested were not the model parameters but rather the forcings that were applied to an unchanged model.

      • Section 3.4 discusses briefly the use of the inverse approach.

      • steven mosher

        I’ll suggest Dr. Held.

        he has a blog and may be open to it.

        The myths of tuning

        Funny – at AGU the guys who ran models noted the two biggest knobs: slab ocean on/off and sulfur cycle on/off. Also, some AR4 models ran without volcanic effects.

        The difficulty IMO is 1) semantics and 2) the implication or inference that varying assumptions to force the models to better match observations is done to produce a desired output. The latter is curve fitting, though “curve fitting”, while often meant as a pejorative, isn’t necessarily nefarious.

        I support other posts in disagreeing (with hesitation…) with Dr. Curry about the “assumed climate sensitivity.” She says it is not assumed, which is literally true. But it is a semantic no-op since almost all of the physics and mathematics that produce the sensitivity in the models are themselves assumptive to some degree or another. For example the non-feedback forcing from CO2 concentration, which feeds the sensitivity, is based on an assumption of projecting what seems to be the past observable forcing (which has varied somewhat over the years as we get more observations) into future environments. Is the forcing from doubling CO2 the same at the 1000ppm to 2000ppm level – which has never been observed – as at the 100ppm to 200ppm level? Such projection may be quite reasonable and based on learned and considered physics, but it remains an assumption nonetheless. Then they have to add their best guess (another learned assumption piled on top) for a number of feedbacks, like clouds, aerosols, etc.

        Is the sensitivity output a result from assumptions? No doubt. Does that make sensitivity an assumption? For all practical purposes, yes; though strictly probably not. Is it purposely made misleading to match a belief? No. Though the more they get questioned the more the modelers and physicists dig their heels in.

      • Correct. Instead of a big knob, there’s a row of trim pots. Distinction without a whole lot of difference.

      • steven mosher

        The problem is that you simply don’t have enough time to adjust the various trim pots. That’s because model runs take an enormous amount of time. You have a parameter space that consists of over 39 variables, so exploring the parameter space (tuning the knobs) is practically impossible. Some advances are being made in statistically emulating a GCM, that is, running a few points in the parameter space and then filling in the responses with a statistical model. But tuning? Not practical. Except with aerosols, which are not known that well. Turning that knob is standard what-if analysis.
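
The statistical emulation mentioned above can be illustrated in miniature. The “expensive model” below is a stand-in function of one parameter (a real GCM run would take weeks of machine time); the emulator is a cheap surrogate fitted to a handful of runs. Everything here is a hypothetical sketch, not any group’s actual emulator.

```python
# Emulating an expensive model: run it at a few design points, then fit a
# cheap surrogate and interpolate the response elsewhere in parameter space.

def expensive_model(p):
    # Stand-in for a full model run; pretend each call takes days of CPU time.
    return 2.0 + 1.5 * p + 0.3 * p * p

design = [0.0, 1.0, 2.0]                    # the few affordable runs
runs = [expensive_model(p) for p in design]

def emulator(p):
    # Quadratic surrogate via Lagrange interpolation through the three runs.
    (x0, x1, x2), (y0, y1, y2) = design, runs
    return (y0 * (p - x1) * (p - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (p - x0) * (p - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (p - x0) * (p - x1) / ((x2 - x0) * (x2 - x1)))

# Query the surrogate at a point where the full model was never run.
print(emulator(1.5), expensive_model(1.5))
```

Here the surrogate reproduces the stand-in model exactly only because both happen to be quadratics; a real emulator of a 39-dimensional parameter space can only approximate, which is the point about how impractical exhaustive tuning would be.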

        On the sensitivity issue it works like this.
        From empirical studies of the response to volcanic forcing, the sensitivity can be constrained to 1.5 to 6, for example.

        The models currently output sensitivities of 2.1 to 4.4.
        So, the idea is that the models produce sensitivities that comport with other estimates. See the paper I linked for all the various ranges that are estimated independent of the models.

      • Jeffrey T. Kiehl, 2007. Twentieth century climate model response and climate sensitivity. GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L22710, doi:10.1029/2007GL031383, 2007

        Tuning happens on big loops and small loops. Small loops are people twiddling dials and trim pots. I might accept there is limited tuning on this scale. Big loops include stuff like publication bias. That’s one hell of a big loop, which is still a tuning loop. People don’t always see it, but it’s there. Kiehl’s results pretty much confirm it.

      • Rod – Dr. Curry is correct that no assumptions, either explicit or implicit, are involved in the climate sensitivity that emerges from current GCMs. There is no curve fitting, and no retuning of a model to bring its climate sensitivity closer to the modeller’s wishes, nor does the modeller know what sensitivity value will emerge from the model as constructed.

        It is also untrue that the core of the models involves “assumptions” that can be conveniently chosen in hopes of making climate sensitivity “come out right”. The core is based on irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. These are not assumptions. Added input includes physical parameters such as those involved in radiative transfer (e.g., spectroscopic absorption coefficients), as well as observationally derived data on wind, temperature, and so on. For processes occurring on scales too small to be modeled within the large grid cells, parametrizations based on known principles and observations are used to produce the most accurate approximation possible, but these allow little leeway for adjustments that significantly alter sensitivity. Other variables are addressed in the links cited by Dr. Curry, but the point is that assumptions needed to create a particular range of climate sensitivity outputs are not available to the models.

        The concept of forcing has been misconstrued by a number of commenters above. One can apply different forcings to a model, either based on observational data or as part of an attempt to determine forcing values that best represent a particular climate. However, changing the forcing does not change the climate sensitivity but merely the temperature response. Therefore, accuracy of forcings is irrelevant to a model’s climate sensitivity. Perhaps a good analogy would be a sales tax – say a 5% tax on the purchase price of a product. The 5% is the climate sensitivity, the product price is the forcing, and the tax owed is the temperature response. If the sales price is raised from $100 to $200, the tax (the temperature change) rises from $5 to $10, but the tax rate of 5% (the climate sensitivity) hasn’t changed.
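
The sales-tax analogy reduces to one line of arithmetic. In linear-feedback terms it is ΔT = λΔF: doubling the forcing doubles the response while the sensitivity stays put. The numbers below are simply the ones from the analogy.

```python
# The sales-tax analogy: sensitivity is the rate, forcing is the price,
# and the temperature response is the tax owed.

def response(sensitivity, forcing):
    return sensitivity * forcing  # delta-T = lambda * delta-F

rate = 0.05  # the "tax rate", standing in for climate sensitivity
print(round(response(rate, 100), 2))  # -> 5.0   ($5 on a $100 purchase)
print(round(response(rate, 200), 2))  # -> 10.0  (forcing doubles, rate doesn't)
```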

        Finally, just as climate sensitivity is not an assumed input to models, neither are feedbacks (clouds, water vapor, ice/albedo, etc.). These too are emergent properties from the dynamics described above. It is true that they are affected by parametrizations, but the latter are not assumed either – they are based on the best combination available of principles and observations, and they are not readjusted in the models so as to make the climate sensitivity conform to a desired value – in fact, their ability to influence model climate sensitivity is marginal.

        If assumptions and retuning were tools available to modellers, the models would match observations better than they do, and match each other’s climate sensitivity far better than they do.
        The claim that climate sensitivity is in even the most indirect sense based on assumptions is unequivocally wrong. Again, the links cited earlier by Dr. Curry should help provide a more accurate picture of how models are constructed and used.

      • Christopher Game

        Above here I asked: “A hard one is to explain, in general physical terms, why the clouds are at about 50 or 60%. Do they also track temperature? Why?” No one answered.

        The so-called ‘climate sensitivity’ depends most importantly on the cloud response. It is a “parametrized” element of the AOGCMs. “Parametrized”, though it may refer to something constructed from “known principles and observations… to produce the most accurate approximation possible”, actually means guessed: no more and no less. The ‘climate sensitivity’ is no more and no less than a guess. Christopher Game

      • steven mosher

        Thanks Fred, all well put.

      • However, changing the forcing does not change the climate sensitivity but merely the temperature response.

        That’s an assumption: namely, that the feedback response is linear, i.e. the gain is constant with respect to temperature. Hansen’s “tipping points” imply that that’s not true. To get a tipping point, you have to have a gain that increases with temperature. Otherwise, you just get a fixed multiplier.

        I suspect you’re right and Hansen’s wrong, but you can’t both be right.
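
The distinction can be made concrete with a small sketch (my own illustration, with made-up numbers). With a constant gain g, the standard feedback amplification is ΔT = ΔF/(1 − g), a fixed multiplier; if g grows with temperature, iterating the same relation can push g past unity, which is one way to caricature a tipping point.

```python
# Constant gain vs. temperature-dependent gain. Numbers are illustrative.

def equilibrium_warming(forcing, gain, steps=1000):
    """Iterate dT = forcing / (1 - g(dT)); diverges if the gain reaches 1."""
    dT = 0.0
    for _ in range(steps):
        g = gain(dT)
        if g >= 1.0:
            return float('inf')   # runaway: the caricature "tipping point"
        dT = forcing / (1.0 - g)  # standard feedback amplification
    return dT

print(equilibrium_warming(1.0, lambda dT: 0.5))           # fixed multiplier: 2.0
print(equilibrium_warming(1.0, lambda dT: 0.3 + 0.4*dT))  # gain grows -> runaway
```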

      • ChE – To the best of my knowledge, climate sensitivity is fixed in each of the current GCMs, so that applying a different forcing will alter temperature but not sensitivity. In this sense, the models don’t anticipate tipping points, but there is no reason a model could not be designed to do that. For example, it might anticipate a large increase in feedback from a massive methane release induced by rising temperatures. Given the challenges models face in simulating even “untipped” climate change, this will probably not be the next item on a modeller’s agenda, but I don’t think modellers would state that there is an inconsistency between designing models with the goal of simulating a smoothly changing climate and the theoretical possibility that a future climate could exhibit an abrupt change in response to a small perturbation.

        The potential tipping points that Hansen and others have mentioned generally involve large shifts in the same direction as a smoothly changing climate. A possible exception might be a dramatic slowing of the meridional overturning circulation, but this is considered highly unlikely for any foreseeable scenario over the rest of this century.

      • It might help to define a GCM as there seem to be many misconceptions floating about.

        ‘Global climate models (GCMs) are comprised of fundamental concepts (laws) and parameterisations of physical, biological, and chemical components of the climate system. These concepts and parameterisations are expressed as mathematical equations, averaged over time and grid volumes. The equations describe the evolution of many variables (e.g. temperature, wind speed, humidity and pressure) and together define the state of the atmosphere. These equations are then converted to a programming language, defining among other things their possible interacting with other formulations, so that they can be solved on a computer and integrated forward in discrete time steps.’
        Professor Ole Humlum

        Climate and models are initial value problems and respond non-linearly to small changes in initial conditions. This is quite simply understood in that the models use the same partial differential equations of fluid motion that Edward Lorenz used in his early convection model – to discover chaos theory. Models also suffer from something called structural instability. This refers to the range of plausible values for aerosols, ozone or natural forcing factors that can tip the model into a different phase space. Together these create the potential for chaotic bifurcation in models – that is, points at which the solution space shifts abruptly as a result of non-linear interactions of component parts.
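
The sensitive dependence described above can be demonstrated with the Lorenz convection equations themselves. The sketch below uses a crude forward-Euler integration with the standard textbook constants (my choice of step size): two runs differing by one part in a million in the initial state end up on entirely different trajectories.

```python
# Lorenz '63 system: two nearly identical initial states diverge completely.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)  # perturbed by one part in a million
max_div = 0.0
for _ in range(3000):        # roughly 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_div = max(max_div, abs(a[0] - b[0]))

print(max_div)  # grows to the width of the attractor, not 1e-6
```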

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ (McWilliams 2007) The extent of instability in climate models is yet to be systematically explored.

        Ideally with models there is sufficient data on which to calibrate the model – and calibration is a necessity with all models – and to validate it. With climate models this is typically the temperature data to 2000 for calibration and the data since then for validation, although a problem may be emerging in the latter.

        ‘Over the past decade one such test is our ability to simulate the global anomaly in surface air temperature for the 20th century… Climate model simulations of the 20th century can be compared in terms of their ability to reproduce this temperature record.’ (Kiehl 2007)

        Sensitivity is determined by the models – although it remains a problematical concept. The models are run to determine a delta T for a doubling of CO2 and this by definition is the climate sensitivity. There is a subjectivity involved in this – a potential solution from an almost infinite solution space of plausible formulations is sent to the IPCC where it is uncritically graphed alongside other similar ‘solutions’.

        ‘Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record. This is now almost an established test for global climate models. One curious aspect of this result is that it is also well known that the same models that agree in simulating the 20th century temperature record differ significantly in their climate sensitivity. The question therefore remains: If climate models differ in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

        The answer to this question is discussed by Kiehl (2007). While there exist established data sets for the 20th century evolution of well-mixed greenhouse gases, this is not the case for ozone, aerosols or different natural forcing factors. The only way that the different models (with respect to their sensitivity to changes in greenhouse gasses) all can reproduce the 20th century temperature record is by assuming different 20th century data series for the unknown factors. In essence, the unknown factors in the 20th century used to drive the IPCC climate simulations were chosen to fit the observed temperature trend. This is a classical example of curve fitting or tuning.

        It has long been known that it will always be possible to fit a model containing 5 or more adjustable parameters to any known data set. But even when a good fit has been obtained, this does not guarantee that the model will perform well when forecasting just one year ahead into the future. This disappointing fact has been demonstrated many times by economical and other types of numerical models (Pilkey and Pilkey-Jarvis 2007). ‘
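
The five-parameter warning in the quoted passage is easy to demonstrate with a toy calculation (my own, on hypothetical data). A quartic, with its five free parameters, can be threaded exactly through five observations hovering around 1.0; the hindcast is then perfect and the one-step-ahead “forecast” is not.

```python
# Overfitting in miniature: a 5-parameter curve fits any 5 points exactly,
# yet its one-step-ahead forecast can be far off. Data are hypothetical.

observations = [(0, 1.0), (1, 0.9), (2, 1.1), (3, 0.95), (4, 1.05)]

def quartic_fit(x):
    # Lagrange interpolation: the unique quartic through all five points.
    total = 0.0
    for i, (xi, yi) in enumerate(observations):
        term = yi
        for j, (xj, _) in enumerate(observations):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

print([round(quartic_fit(x), 2) for x, _ in observations])  # perfect hindcast
print(round(quartic_fit(5), 2))  # forecast far above the ~1.0 level of the data
```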

        ‘The world’s perhaps most cited climatologist, Reid Bryson, stated as early as in 1993 that a model is “nothing more than a formal statement of how the modeller believes that the part of the world of his concern actually works”. Global climate models are often defended by stating that they are based on well established laws of physics. There is, however, much more to the models than just the laws of physics. Otherwise they would all produce the same output for the future climate, which they do not. Climate models are, in effect, nothing more than mathematical ways for experts to express their best opinion about how the real world functions.’

        The failure to understand the appropriate uses for models is one of the key problems in climate science. ‘Using climate models in an experimental manner to improve our understanding of how the climate system works is a highly valuable research application. More often, however, climate models are used to predict the future state of the global climate system.’ The limitations of models in ‘predicting’ climate are well known and this is very poorly communicated in the community more generally. The models may not be fraudulent but the uses of models by activists most certainly are – either deliberately or naively.

      • Fred Moolten said:

        The core is based on irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. These are not assumptions.

        I think this is yet another example of over-simplification that represents some of the difficulties associated with open and free discussions of the issues. In my opinion the statement is almost at the same level of those that proclaim that all glaciers everywhere are melting and that is concrete proof that the global-average, near-surface temperature of the Earth has increased.

        I think a more nearly accurate description is as follows.

        At the continuous-equation level, the core is constructed with models that are approximations to irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. Assumptions leading to simplifications of the fundamental principles are invoked in order to obtain a tractable problem.

        This is not to say that the resulting model equations are necessarily less than useful. I think it is simply a more nearly accurate description.

        Additionally, the statement completely neglects to mention that the numbers produced by the models are not solutions for the continuous equations. They are instead approximate solutions to the discrete approximations of the continuous equations. It is well known that convergence of the approximate solutions of the discrete approximations to the solutions of the continuous equations has not yet been demonstrated for any GCM. Papers appear in the peer-reviewed climate science literature even to this day in which the dependency of the approximate solutions on the size of the discrete spatial and temporal increments is the subject.
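The grid-refinement point above can be illustrated with a toy convergence study. This is only a sketch on a deliberately trivial problem (forward Euler on dy/dt = -y); the scheme, equation, and step counts are chosen for illustration and are not taken from any GCM:

```python
import math

def euler_solve(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(y) from t = 0 to t_end with forward Euler."""
    y, dt = y0, t_end / n_steps
    for _ in range(n_steps):
        y += dt * f(y)
    return y

exact = math.exp(-1.0)  # exact y(1) for dy/dt = -y, y(0) = 1
errors = [abs(euler_solve(lambda y: -y, 1.0, 1.0, n) - exact)
          for n in (10, 20, 40, 80)]

# Observed order of accuracy from successive refinements: for a
# first-order scheme, halving dt should roughly halve the error.
orders = [math.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
print([round(p, 2) for p in orders])  # values approach 1.0
```

Demonstrating this kind of behaviour for the discrete solution is what a convergence study means; the comment's point is that no comparable demonstration exists for a full GCM.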

        Actually, it is also well established that converged numerical solutions of even the Lorenz equations (the model-of-the-models for temporally chaotic ODEs) have not yet been attained, and very likely will remain out of reach for quite some time. Convergence has been demonstrated beyond a few tens of Lorenz time units only when extremely high-order discrete approximations and extremely large finite-digit representations of real numbers are both employed at the same time.
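The claim about the Lorenz equations is easy to reproduce. The sketch below (standard Lorenz-63 parameters; step sizes chosen for illustration) integrates the same system from the same initial condition with a step size and half that step size; after a few tens of Lorenz time units the two numerical "solutions" have completely parted company:

```python
def lorenz_rk4(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One classical RK4 step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    return tuple(si + dt / 6.0 * (p + 2 * q + 2 * r + s)
                 for si, p, q, r, s in zip(state, k1, k2, k3, k4))

# Same equations, same initial condition, two step sizes.
a = b = (1.0, 1.0, 1.0)
t, dt = 0.0, 0.01
while t < 40.0:                                     # ~40 Lorenz time units
    a = lorenz_rk4(a, dt)
    b = lorenz_rk4(lorenz_rk4(b, dt / 2), dt / 2)   # half the step size
    t += dt
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # the two runs have long since decorrelated
```

Both runs stay on the attractor (bounded), yet neither can be called "the" solution at this time horizon, which is the sense in which convergence has not been attained.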

        Finally, in process models of complex physical phenomena (the GCMs are process models; they are not computational-physics representations) the parameterizations are the absolutely critical link in obtaining a somewhat correct response from the models. The large macro-scale aspects of atmospheric circulation are set by the rotation of the planet, its almost spherical shape, and the period of its rotation. Macro-scale currents in the oceans are a strong function of the same process, with influences set by the stationary continents. The approximate continuous equations will reproduce these motions. The phenomena and processes occurring within these macro-scale motions are the key to getting a somewhat realistic representation. And these critically important processes and phenomena occur at scales smaller than the spatial, and temporal, resolutions used in the GCMs.

        The parameterizations are seldom, if ever, mentioned when statements like the one at the top of this comment are made. The parameterizations are where the focus of such statements should be; the parameterizations and the lack of demonstrated convergence. Then, maybe, discussions might move forward with the focus on what is important. The parameterizations are somewhat heuristic, and some even ad hoc. Descriptions of the irrefutable physics principles of mass, momentum, and energy are again the objective, but the parameterizations are further removed from those principles than are the approximate models of the macro-scale motions.

        Again, all this does not say that the GCMs are less than useful. Fruitful discussions require an accurate specification of the topic.

      • Dan Hughes – I don’t disagree with you, but my comments had to be summaries rather than long explications. The issue of discretization and numerical solutions to continuous differential equations is addressed in the links Judith Curry provided. Model designers are of course well aware of these challenges and understand that these approximations make it impossible to simulate reality with 100 percent accuracy even in theory, much less in current practice. They are also cognizant of the value of designing imperfect models that are “useful”. I think readers should visit the linked sites for a detailed description of these issues.

      • Fred, this is false. I have worked with some climate models. MIT’s earlier EPPA model had, for each run, explicit input values for clouds, aerosols, and ocean sensitivity. Properly set, you could get the climate sensitivity to 1C of warming by 2100.

      • steven mosher

        “But it is a semantic no-op since almost all of the physics and mathematics that produce the sensitivity in the models are themselves assumptive to some degree or another. ”

        All physics is assumptive. The task you have in front of you is to find the physics where the assumptions are WEAKEST.

        but the IPCC already tells you that: clouds and aerosols and small scale sub grid processes.

        The notion that assumptions are avoidable is misguided. The real question is what the sensitivity of the final answer is TO our assumptions. We always, for example, assume that the same physical laws that hold today will hold tomorrow. Big assumption, because if it turns out wrong, then nothing you think today will be right. Yet we don’t question that assumption, for good reason.

      • steven mosher

        Sorry Latimer you are mistaken

        I had the joy of sitting through some presentations about GCMs at AGU. The parameter space is so large and the time to compute is so huge that the prospect of tuning knobs is practically impossible.

    • steven mosher

      You should be aware that there are several ways to estimate the sensitivity. There is no way to calculate it directly (much like the situation with finding “gains” that work in flight control systems).
      I’ve listed the various ways in several comments. Using models to estimate the metric is only one way. In reality the models are accepted in part BECAUSE they line up with other estimates. So, one can estimate the sensitivity by looking at the observation record or the paleo record. When the output of the models matches those other estimates, that is +1 for the models. If they did not match the other methods, that would indicate they are not usable. But they match, and are useful.

      Before you talk about sensitivity I’ll suggest this

      http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf

      its an easy read with nice charts.

      • steven mosher | April 27, 2011 at 4:36 pm
        …. with nice charts.
        … with very comic nice charts !

      • steven mosher

        devastating critique. Look, there are real issues in the question of sensitivity. Those issues need to be addressed in an informed manner, not an uninformed manner. The best way to inform yourself about the state of the debate and the state of the science is to read what has been written.

      • Steven

        That is a big scare mongering article.

        The globe is cooling at the rate of 0.8 deg C per century as shown in the following graph!

        http://bit.ly/jo1AH4

        The climate sensitivity should be about 0.75, not from 2 to 4.5 deg C.

      • steven mosher

        remind me never to take stock advice from you.

        And you can’t calculate sensitivity that way from observation.
        Read up on how it’s done. Start with the references in the Knutti paper.

      • steve mosher

        “How it’s done” is a silly concept, as is “you can’t calculate sensitivity that way from observation”.

        Sensitivity, shmensitivity…

        Girma has simply shown us physically observed data (for what they’re worth), which demonstrate that our planet’s globally and annually averaged land and sea surface temperature has cooled over the past decade.

        Another set of physically observed data (for what these are worth) tell us that the average annual atmospheric CO2 content at Mauna Loa, Hawaii has increased over this same period.

        The logical conclusion to be drawn from these two sets of data (ALL OTHER THINGS BEING EQUAL – which they obviously are NOT) is
        a) globally and annually averaged land and sea surface temperature is not a representative indicator of our planet’s energy balance and should therefore be replaced with a more representative metric, or
        b) increased atmospheric CO2 levels lead to lower global average temperatures, or
        c) atmospheric CO2 is not the principal driver of our planet’s energy balance as measured by global average surface temperature, which would imply that
        d) something else, more important than CO2 in determining our planet’s energy balance and/or the average global surface temperature, is at play.

        There are many who would agree with conclusion a) (although IPCC has not been one of these).

        Hardly anyone would agree to conclusion b).

        Conclusions c) and d) go hand in hand. These may be the most logical conclusions to be drawn from Girma’s data, but IPCC also does not agree with them either.

        So we are left with a dilemma.

        But there has been no shortage on this thread of rationalizations, hypothetical explanations, beautifully-worded side-tracks, etc. on why Girma’s data do not support the IPCC “mainstream consensus” suggestion that CO2 “should be” driving surface temperature and that we “should have” seen warming of 0.2C over the first decade of the 21st century as a result.

        But the fact of the matter is, it did NOT happen.

        And that, Steve, is Girma’s point in a nutshell.

        Max

      • I don’t need to read theories full of assumptions.

        Observation is the final arbiter.
        http://bit.ly/jo1AH4

        The observation says the global mean temperature is not just at a plateau, but is actually cooling, as shown in my previous post.

        Phil said FIVE YEARS AGO the following:

        The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant.
        http://bit.ly/6qYf9a

        Here is the comparison of the global temperature trend when Phil made the above statement in 2005 and now. The graph clearly shows that the global warming rate has dropped further from 0.06 deg C per decade in 2005 to zero now.

        Why is the “scientific community” coming down on Phil and us for telling the truth?

      • Here is the comparison of the global temperature trend when Phil made the above statement in 2005 and now. The graph clearly shows that the global warming rate has dropped further from 0.06 deg C per decade in 2005 to zero now.

        http://bit.ly/l3qZq7

      • steve,
        If the entire concept of sensitivity, as used currently, does not seem to match observations, how can that be dismissed?

      • steven mosher

        Girma and hunter. what you dont realize is that there are two system characteristics that matter.

        I slam my accelerator from 0 to full.

        I measure the response a NANO second later and 5 minutes later.

        You see that the response to the same forcing will differ.

        sensitivity is defined as the RESPONSE at EQUILIBRIUM,
        not a nanosecond after the forcing, not 10 years, not a century: AFTER the full force of the forcing is felt by the system. Inertia.

        So, you cannot simply look at a few years of data to calculate the EQUILIBRIUM RESPONSE. CANNOT.

        That would be the TRANSIENT response.

        get it?
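The accelerator analogy above can be sketched with a toy first-order lag model. The numbers here (a 3C equilibrium response and a 30-year time constant) are purely illustrative placeholders, not climate estimates:

```python
import math

def response(t, equilibrium=3.0, tau=30.0):
    """Toy first-order step response: temperature change (C) at time t
    (years) after a step forcing. Both equilibrium and tau are
    illustrative numbers, not climate estimates."""
    return equilibrium * (1.0 - math.exp(-t / tau))

# The same step forcing looks very different depending on when you
# measure: the early (transient) response is a small fraction of the
# equilibrium response.
for t in (1, 10, 30, 100, 1000):
    print(t, round(response(t), 2))
```

The point of the sketch: reading the response at year 1 or year 10 and calling it "the sensitivity" badly underestimates the equilibrium value that the definition refers to.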

      • steve,
        But you do not, in fact, have any way to know how the system will respond after 100 years, or whether it will respond the same.
        But we do, after ~150 years of steadily increasing CO2, know that the response to date is not distinguishable from a low-CO2 environment.
        So, to put your automobile on a dyno test: we know that we have been allegedly pushing the accelerator, but we are now noticing that the results on the dyno are not changing.
        Skeptics are pointing this out, alarmists say the engine is going to blow, and you are saying something is happening, but not much.
        At most, not much is happening.
        Certainly not a dramatic change in the dyno reading.

      • Here’s your problem, though. You knew ahead of time that 5 minutes was more than enough time for the engine to speed up. If you didn’t know that ahead of time, the 5 minute number wouldn’t be meaningful. You can infer the asymptote if you know what the dynamics (first order, second order, etc.) are in advance, by backing out the time constant(s) from the response, but you need to have a very good idea ahead of time what the order(s) is.

        You have too many variables, and with nonlinearity, no way to be sure you’ve pinned down the correct ones, no matter how much data you collect.
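The point about backing out the asymptote can be made concrete. For a system assumed to be first order, two early samples determine the asymptote in closed form; the recovery below works only because the synthetic data really is first order, which is exactly the caveat being raised here:

```python
import math

def infer_asymptote(y1, y2):
    """Given samples y(t) and y(2t) of an assumed first-order response
    S * (1 - exp(-t/tau)), recover the asymptote S in closed form.
    This works ONLY if the first-order assumption is correct."""
    r = y2 / y1 - 1.0          # algebra gives r = exp(-t/tau)
    return y1 / (1.0 - r)

# Synthetic first-order data: S = 3.0, tau = 30, sampled at t = 10, 20.
S, tau = 3.0, 30.0
y1 = S * (1 - math.exp(-10 / tau))
y2 = S * (1 - math.exp(-20 / tau))
print(round(infer_asymptote(y1, y2), 6))  # recovers S = 3.0 from early data
```

With an unknown or wrong dynamical order, the same two samples would be fit to the wrong functional form and the inferred asymptote would be meaningless, which is the objection being made.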

      • steven mosher

        ChE.
        With the paleo data (see Hansen) you have millions of years to estimate the system response. That estimate is roughly 3C.
        With observational estimates you get other boundaries (see Knutti). It’s easy to bound on the low end; one thing to look at for the lower bound is the snowball earth stuff. The high end is where you get a long thick tail. With a model it’s rather easy to tell that you are at an asymptote; moreover you don’t have to run until you get there. If you run the model for 1000 years and the system response is still increasing, all you’re saying is that within our decision horizon the system response is 3C… maybe higher after that, but it’s certainly not lower, and if you think it might be lower it’s really beside the point of the decision in front of you.
        The lack of certainty about a system response is not a bar to action. If you slam the pedal down and after 1 minute your speed is 30 mph, after 2 minutes it’s 60, 3 minutes 70, 4 minutes 75, you have information to make a decision.

        the speed limit is 15. It’s a school zone. A cop is sitting 2 minutes up the road. Do you slam the accelerator? Or argue that the simulation of your car doesn’t run to equilibrium? Do you complain that it doesn’t model drag exactly right, or the changing air pressure of heating tires, or wind gusts?
        OR do you say, “given our limited knowledge of how my car will respond to full accelerator forcing, I better not do that”?

        We always have less than certain knowledge. and we are always acting.

      • With your car’s acceleration, will it be 200 mph in x time, or will it top out and just use more fuel to get less and less response on the dyno?
        The cop and school zone raise the question: what cop and what school zone?

      • So you just bought Pascal’s Wager. What if Mr. Bean is right?

      • steven mosher

        Actually I am not using pascal’s wager.

        Let’s take the car experiment. I slam the accelerator; eventually I run out of gas and it drops to zero. You have a variety of metrics to characterize the system: the instantaneous response, the transient response, the steady state response, and the lifetime response. Since it’s a resource-limited system, the lifetime response of course is 0; the steady state response is 200 mph. Operationally, we can say:
        I’m interested in the response assuming an infinite amount of gas, and then the lifetime response is the same as the steady state.
        I can also say, I’m interested in the response of the earth within 100 years; what’s that?

        None of this is a problem for the definition of sensitivity. The only issue is looking at too short of a time frame.

        Hunter: the cop is simple. The cop is the time you choose to observe. Simply put, you might know that the steady state response will take 1000 years, with the last 900 years providing a very small delta over the first 100. The school zone? Also simple. Think about it.

        In short, ChE tried to argue that we never know about the asymptote. Not really a strong objection, because if we can show that some portion of the response is dangerous then the full response is not important. So, it’s 3C after 100 years, 3.5C after 1000, and 3.65C after 10000. Not really a devastating point, to observe that 10K years might not be the end of the warming. AND there’s no evidence that it turns down after 100 years, AND even IF there were evidence of that it would be beside the point, the practical point, that one has to get through the 3C knot hole.

        A MORE effective argument for you to learn is what we can and should do, if anything, about 3C. That is a way smarter argument than the one you are stumbling to make. So you don’t have to lose this argument; just make the better argument from now on.

      • John Carpenter

        “A MORE effective argument for you to learn is what can and should we do, if anything, about 3C. ”

        Learn to adapt… b/c the likelihood is we will see 2xCO2.

      • steve,
        It is not clear that we are pushing ‘the’ accelerator. In fact, as I keep pointing out, it is pretty clear that the CO2 lever is not much of an accelerator, and is certainly not ‘the’ accelerator.
        The cop implies breaking something, so I did not take your reference as a timed observation.
        But if we take the timed observation, what do we observe?
        Not much at all.
        Nothing out of the historical level of climate change, and nothing out of the normal for extreme or dangerous weather.
        The school zone is a zone of safety that should not be violated.
        So my question stands:
        Are we in a ‘zone’, are we doing something that will violate that ‘zone’, and is CO2 the lever to manipulate to comply with that ‘zone’?
        More and more, the evidence suggests we are in a climate system with many forcings and feedbacks, all moving in complex relationships of direct influence, with feedbacks positive and negative, and some seemingly randomly contrary or deceptively related.
        The discovery of an Indian Ocean-sourced current moving into the Atlantic, which was poorly accounted for, comes to mind as a nice example of complexity and of the climate science community’s non-comprehensive grasp of all significant factors.

      • Steve – Would you agree that the concept of “equilibrium” when it comes to earth’s climate is relatively silly?

        Regarding CO2 sensitivity, it does seem fair to say that models indicating the earth’s temperature would rise by 3C for a doubling of CO2 appear incorrect. It now appears that there are other variables that impact temperature changes and mitigate the response to CO2 changes to a greater extent than previously believed.

      • steven mosher

        No, it’s not silly. Of course it’s an idealization. In the modelling world the models are spun up for thousands of years with inputs held constant until the drift is less than 2%. So technically it’s not at equilibrium, but PRACTICALLY, given the time horizon of our decision process, it is. Like I said above, if you pulse a system and see it respond by increasing temps 2C within 100 years, 3C within a thousand years, and 5C within 10000 years, you have enough information to make recommendations about the advisability of forcing the system that way.

        As for the other factors that may mitigate CO2: that is BESIDE the point of the metric in question. If I push my accelerator to the floor I can project that I will go 60 mph in 5 seconds… all other things being equal. If you want to know what happens if you tap the brakes while doing this, that is also an answerable question. BUT that doesn’t change the sensitivity to pushing the gas pedal. It’s a metric of the response due to the ISOLATED impact of that forcing. Want to know what happens if you combine forcings? Well, run that experiment: double CO2 and halve the sun. Not an interesting experiment. And you would be measuring the sensitivity to doubling CO2. Understand that’s just a metric,
        a diagnostic metric, one we use all the time. Nothing strange or objectionable in it. What’s the response to full aft stick in an F-16?
        All things being equal it’s X degrees per second. But what if there is a massive downburst? Well, that’s a different question. It doesn’t change the answer to the first, because it’s a different question. What if the plane is carrying bombs? Different question; doesn’t change the first question or its answer.

        Sensitivity isn’t a mystery. It’s an answer to a question: what happens when you double CO2 and hold everything else constant? The answer to this question can be bounded. It’s just that the range is fairly wide. The best argument for skeptics is to look at the evidence at the lower end of the spectrum. Attacking the notion of sensitivity is just uninformed and tedious. So, as a friendly suggestion, I’d say go focus on the arguments of Spencer and Lindzen. That way you can join the debate. And there IS a debate over the estimate of sensitivity; there’s no informed debate about the meaning or the usefulness of the definition.
        In short, in debate there are generally a few tactics. First and WEAKEST is a definitional attack. Better and stronger is the tactic where you work with the definition and attack on other grounds. Otherwise you get disinvited to the conversation.

      • steven,
        That is well and good, but it implies you know what the accelerator is and how it moves.
        After ~150 years of steadily increasing what is considered to be the accelerator, all we have are failed predictions of the accelerator’s influence on the system.

      • Steve- When you write-
        “In the modelling world the models are spun up for thousands of years with inputs held constant until the drift is less than 2%. So technically it’s not at equilibrium, but PRACTICALLY, given the time horizon of our decision process, it is.”
        That statement is only as meaningful as the parameters being used to determine the validity of the model (if we are using the same terms). To write that there are climate models that can hindcast over 1k years with only a 2% error rate from observed actuals would be inaccurate.

        Your example of pushing the accelerator on the car is interesting. When related to climate it assumes a strong/direct cause and effect, and assumes that other mitigating influences would not normally be expected to offset the effect of pushing the gas (CO2) in the system (climate). It will be interesting to see over the next decade or so what the data show. The last decade certainly did not seem to meet modelers’ expectations.
        I’ve got to catch a plane to East Asia.

        The basic non-feedback forcing equation is 5.35 ln (C/C0), though in most of the 90s it was 6.3 ln (C/C0). 5.35 is assumed to be the current best factor, based on historical atmospheric curve fitting over concentration variances between about 250ppm and 380ppm, with most concentration measurements made (assumed??) from limited proxies. Some tightly constrained lab tests supplemented by a conflation of various physics inputs played a supporting role. Part of the assessment involved trying various assumptions to see what the result would be. Fred accuses me of implying that this is tantamount to fixing the model to get the desired results, despite my clear statement that I do not believe that. This process is simply another way to try to scientifically assess their assumption — that’s spelled a-s-s-u-m-p-t-i-o-n!! Fred’s words that “[there is] no retuning of a model to bring its climate sensitivity closer to modeler’s wishes…” are true IMO, but his other words in the same paragraph, that “…no assumptions, either explicit or implicit, are involved in the climate sensitivity that emerges from current GCMs”, are, well, pure hogwash.
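The quoted forcing formula is simple to evaluate. A minimal sketch (5.35 W/m^2 per natural-log unit is the commonly cited Myhre et al. coefficient; 6.3 is the older factor mentioned above):

```python
import math

def co2_forcing(c, c0, alpha=5.35):
    """Simplified non-feedback radiative forcing (W/m^2) for CO2:
    alpha * ln(C/C0). alpha = 5.35 is the commonly cited fit; the
    older 1990s value mentioned above was 6.3."""
    return alpha * math.log(c / c0)

print(round(co2_forcing(560, 280), 2))       # doubling: ~3.71 W/m^2
print(round(co2_forcing(560, 280, 6.3), 2))  # older factor: ~4.37 W/m^2
```

Note the logarithmic form itself is a fit over a limited concentration range, which is the commenter's point about extrapolating it to 1000-2000ppm.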

        Steven also suggests the modelers don’t have the time to twiddle the knobs. In fact the modelers do do some twiddling (ala Monte Carlo) but, as I said above, not to nefariously force a result but to simply (and properly) test the model(s). Steven also missed my contention about the marginal forcing, which is that while we can get fairly close to estimating forcing factors in today’s 250-350ppm range we have no assurances that this holds accurately for the 1000-2000ppm range, other than some decent clues in a limited lab environment. (As a personal sidebar, I got beat up rather thoroughly at RealClimate over this point.)

        Steven’s point that all physics is assumptive is true but adds nothing to the equation. The concern that the biggest uncertainties stem from the weakest (I would use “loosest”) assumptions is also true and also valid. Though blindly letting the IPCC tell us which are and are not strikes me as very risky.

        Fred also says,

        “The core is based on irrefutable physics principles including those involving fluid flow, and the conservation of mass, energy, and momentum. These are not assumptions.”

        Some of those are pretty ironclad “irrefutable” physics principles, but some ain’t and require a lot of filling in the blanks with conjecture, aka assumptions. If you get underneath the assumptions, there is a pile of uncertainties in the details of atmospheric radiative transfer — hundreds of texts notwithstanding (in fact most textbooks agree). Heisenberg said that even God didn’t know all about your irrefutable fluid flow.

        Fred also says, “…changing the forcing does not change the climate sensitivity but merely the temperature response…” HUH?!? “Climate sensitivity” is temperature response, albeit on a per unit basis, isn’t it??

        Steven’s link is a good one; thanks. A cherry-picked intro statement, “The quest to determine climate sensitivity has now been going on for decades, with disturbingly little progress in narrowing the large uncertainty range.” might have some insight. On the other side of the coin it also said, “However, in the process, fascinating new insights into the climate system and into policy aspects regarding mitigation have been gained.” Narrowing down the assumptions? ;-)

        Sorry if my embedded formatting bombed.

      • Rod – You are free to believe whatever you wish, but as previously stated, changing a forcing applied to a GCM does not change the climate sensitivity of that GCM. That is not an assumption, but a verifiable characteristic of the models. Your other claims about assumptions have also been addressed previously. There are now good resources available for you to learn about this topic. Some have been cited above, and you might want to visit them.

        Sensitivity is defined as the change in temperature with a doubling of CO2 equivalent gases in the atmosphere. If you change the rate of ‘forcing’ – the delta T is achieved sooner or later but the delta T (sensitivity) may not be the same. This derives from the intrinsic non-linear dynamical complexity of the models. This can be easily shown by reference to the convection model of Edward Lorenz in the 1960s. There – a small change in inputs surprisingly resulted in the well known ‘butterfly’ topology of the solution space.

        Your response is the trivial notional solution based on an inadequate linear conceptualisation of the problem.

      • I believe you may have misunderstood the definition of forcing. Delta T can be achieved for a specified forcing, but as long as there is a “rate of forcing” (i.e., a changing forcing), delta T is unlikely ever to be achieved because it has no defined value.

      • ‘Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing. It is usually expressed as the temperature change associated with a doubling of the concentration of carbon dioxide in Earth’s atmosphere.’ http://en.wikipedia.org/wiki/Climate_sensitivity

        The forcing is specified – i.e. a doubling. The sensitivity is calculated by GCM as a result of the doubling of forcing. If the rate of forcing is changed – that is the doubling occurs sooner or later than otherwise – the calculated sensitivity is not necessarily the same due to the intrinsic chaotic nature of the models.

        You know very well that I have not misunderstood the simple concept of radiative forcing. What I am saying is that your linear conceptualisation is not adequate to the task of understanding how climate models work.

      • Robert – I think you still misunderstand. Climate sensitivity is specifically defined in terms of an instantaneous CO2 doubling, not a doubling reached over time.

      • Fred,
        The assumption regarding instantaneous sort of makes the rest of the claim for accuracy impossible.

      • First time I’ve ever heard that. Since it’s an equilibrium concept, time shouldn’t matter.

        One can calculate a temperature response for a change in radiative forcing over time – assuming all else stays the same. This does not imply instantaneous doubling. The GCMs use data for well mixed greenhouse gases that evolve over time – and this is how sensitivity is defined in AR4. It was the models we were discussing after all – not linear and simplified equations.

        This is a little like clutching at straws – and evasion of the main point. The problem was one involving the intrinsic instability of GCM that you have failed to grasp.

        I have quoted McWilliams (2007) before.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’

        The paper itself is very hard going – but provides a very useful summary of the problems and potential solutions. I recommend you read it before making further comment on climate models.

      • steven mosher

        Che.

        Time does not matter.
        In experiment #1 you hold CO2 constant. You see that the temp goes to 14C. It takes 1000 years to reach equilibrium.

        In experiment #2, you INSTANTANEOUSLY double CO2. Wham. The system responds slowly, decades, centuries… and finally the system reaches equilibrium. The temp is 17C.

        Sensitivity is a METRIC of system response. 3C per doubling.

        If the system reacted quickly and reached 17C… sensitivity is STILL 3C.

        Its the overall system response to the forcing. It includes both slow and fast feedbacks.

        It’s just a metric, a system characteristic: not an input, not an assumption. It RELIES ON assumptions: assumptions that the physics works, that all processes are captured, that forcings are accurate. But it’s just a metric.
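Taken literally, the metric described in this comment is just a difference of two equilibrium temperatures; a trivial sketch using the hypothetical 14C and 17C figures from the two experiments above:

```python
def sensitivity(t_control, t_doubled):
    """Equilibrium climate sensitivity as a diagnostic metric: the
    difference between the equilibrium temperature of a control run
    and that of a doubled-CO2 run. How fast equilibrium was reached
    does not enter the metric at all."""
    return t_doubled - t_control

# Hypothetical equilibrium temperatures from the comment above:
print(sensitivity(14.0, 17.0))  # 3.0 (C per doubling)
```

The design point is that the number is a diagnostic output of the two runs, not a knob fed into them.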

      • The models require time to integrate all the processes in play – clouds, water vapour, etc. The link below from the TAR uses a 1%/year increase in CO2 in the atmosphere. Stabilisation is shown at 2 and 4 times CO2.

        http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/345.htm

        AR4 states that other methods of sensitivity estimation are unable to constrain sensitivity to less than the range of 1.5 to 4.5 degrees C obtained from models.

        The essential problems of models remain – that of solution instability within the range of plausible formulations.

      • Let’s call a spade a spade:

        The “2xCO2 climate sensitivity of 3C” is a model result, based on assumptions fed into the model (how many models or how many parameters or assumptions is not important).

        It is NOT a value that has been determined by repeated reproducible experimentation or from empirical data derived from actual physical observations.

        All the verbiage and rationalization in the world is not going to change this basic fact, folks.

        Max

      • ChE – Time doesn’t matter for the final temperature change, which is approached asymptotically over many centuries. It does, however, affect the extent of change observed at early intervals, which is why equilibrium climate sensitivity is defined with reference to an instantaneous doubling, whereas other metrics are used for progressively changing CO2 concentrations.

      • ChE,
        But CO2 increases in a dynamic multi-dimensional environment. IOW, getting CO2 to double is not occurring without other things changing along with it.
        The biosphere, the oceans, the geology and atmospheric dynamics are all moving along with the CO2.
        Simply doubling it instantly makes me wonder how the dynamic responses to CO2 can possibly be properly accounted for.
        This is not some nice cracking unit under tight control.

It is a little difficult to come to terms with even the simplest of concepts when they are constantly being reinterpreted. My original statement was in the context of the models’ environment, which is non-linear. I was told I didn’t understand forcing for some reason and, subsequently, that the CO2 increase in the model sensitivity runs was instantaneous. Now – well, time is immaterial because it takes centuries to approach temperature equilibrium.

        But – I note that Fred is continuing to insist on instantaneous increase for some reason – despite the link provided to UNEP which specifies a 1%/year increase in CO2 and showing time to doubling of about 70 years. What is wrong with this picture?

        It is far from the main game however. The ‘irreducible imprecision’ of James McWilliams is an emergent property due to ‘sensitive dependence’ and ‘structural instability’ in the models. They are fundamental properties of the models in accordance with theoretical physics of dynamically complex systems.

        Until that fact is grasped – we are talking in different languages.

      • Rod – The exchanges about model climate sensitivity and assumptions have grown overlong, perhaps because some of the commenters feel impelled in new comments to defend their earlier ones. It might be best for me to summarize here my perspective – one I believe to be shared by Dr. Curry, Steve Mosher, and others – and then accommodate what I believe is a legitimate point you have made. I would state that:

        1. Climate sensitivity and feedbacks are not assumed values fed into model design but emergent properties of the model.

        2. It would be extremely difficult to tweak multiple model parameters to yield a desired sensitivity value. The modellers don’t attempt this. Changing multiple parameters by itself would change model climate sensitivity rather little, because that value is also inherent in model structure.

        3. The data from which model climate sensitivity is computed are not assumptions, except in the broad sense that the data are assumed to be accurate. They involve basic principles of physics and observed physical properties of the climate system (although as Dan Hughes points out, the differential equations can’t be solved exactly and so approximate solutions are required).

        4. Each GCM has a specified climate sensitivity emerging from its design. If a forcing is applied to that model, the magnitude of the forcing will affect the temperature response, but not the climate sensitivity.

Item 4, however, is a point where I believe you and I have been talking past each other to some extent – I realized this when you cited the logarithmic relationship. Current models typically (perhaps universally) utilize the forcing equation from the 1998 paper by Myhre et al that estimates a no-feedback response to CO2 doubling of 3.7 W/m^2. That estimate is not, I would argue, an “assumption” in the usual sense, because it is derived from very solid data – specifically the Hitran-based spectroscopic properties of CO2 at the appropriate wavelengths, the Schwarzschild radiative transfer equations, measured CO2 mixing ratios, and observationally constrained (but still approximated) data for lapse rates, for clear sky/all sky ratios, and for latitudinal and seasonal variations. It is not a measured quantity per se, of course, and therefore subject to error, but appears to be a good estimate. Where I would agree with you is that if CO2 doubling does not generate a forcing of 3.7 W/m^2 but some other forcing, then the temperature response to the doubling (i.e., the climate sensitivity) would differ from the values in the models. If that is the forcing you had in mind – part of model design rather than an input fed into existing models – then we have no disagreement.

        My original intent in jumping into the discussion was to reinforce Dr. Curry’s point about climate sensitivity as an emergent property of the models rather than an assumption fed into model design. Some of the participants here may still misunderstand this, and I think they should visit the references in the links Dr. Curry cited.

      • ’1. Climate sensitivity and feedbacks are not assumed values fed into model design but emergent properties of the model.’

        This at least is correct and would that he had stopped there.

        ’2. It would be extremely difficult to tweak multiple model parameters to yield a desired sensitivity value. The modellers don’t attempt this. Changing multiple parameters by itself would change model climate sensitivity rather little, because that value is also inherent in model structure.’

        Models are calibrated to observed temperature using a range of values for ozone, aerosols and for natural forcing. The range of sensitivities obtained is a result – at one level – of the range of values assumed for these relatively unknown factors.

        ’3. The data from which model climate sensitivity is computed are not assumptions, except in the broad sense that the data are assumed to be accurate. They involve basic principles of physics and observed physical properties of the climate system (although as Dan Hughes points out, the differential equations can’t be solved exactly and so approximate solutions are required).’

The numerical solutions of differential equations can be made arbitrarily accurate to the degree consistent with the accuracy of inputs. Part of my Honours thesis involved comparison of numerical solutions of a differential equation with a special case where the solution was analytically solvable. High-order solutions do not faze a supercomputer; my XT clone, however, was a different story.

        The major sources of instability in the models relates to sensitive dependence and structural instability. Plausible changes in initial or boundary conditions can cause the solution to bifurcate into a different phase space. These instabilities have not been exhaustively explored in systematically designed model suites.

        ’4. Each GCM has a specified climate sensitivity emerging from its design. If a forcing is applied to that model, the magnitude of the forcing will affect the temperature response, but not the climate sensitivity.’

Sensitivity typically relates to a doubling of carbon dioxide equivalent in the atmosphere. The rest of the statement conflates divergent issues. It should be noted that the sensitivity calculated is not a unique solution within the scope of plausible model formulations. This emerges from dynamical complexity intrinsic to the models.

‘Item 4 … Current models typically (perhaps universally) utilize the forcing equation from the 1998 paper by Myhre et al that estimates a no-feedback response to CO2 doubling of 3.7 W/m^2….’

Mostly irrelevant – to paraphrase the Hitchhiker’s Guide.

      • Fred, et al

Exactly how does a GCM take as an input CO2 concentration (for example), spin it round and round, and come out with a forcing or, in turn, produce the new temperature à la climate sensitivity? If the GCM doesn’t need our input and inherently knows how to do it, why do we waste our time studying radiation transfer physics? Why not just ask HAL? ;-)

        To quickly address your points in #65436:

        #1 — see above
        #2 — I have never said parameters are tweaked to obtain desired results.
#3 & 4 — we evidently just differ over the robustness and precision of climate/atmospheric physics. All of those parameters you mention, including a significant part of HITRAN, are derived using simplifying assumptions of some degree or another. As Pierrehumbert says, “Sometimes, a simple scheme which is easy to understand is better than an accurate scheme which defies comprehension.” (Not an exact analogy, but insightful.)

Standard CO2 forcing is 5.35 ln(C/C0); 3.7 W/m^2 = 5.35 × ln 2. You’re right, we’re talking past each other. I just can’t get my arms around the relevant distinction between “climate sensitivity” (as commonly defined and described by Chief H) and “temperature increase” from added CO2.

        Maybe a quote from the Myhre reference can help: “…This work presents new calculations of radiative forcing… using a consistent set of models and assumptions.” (emphasis mine)
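For what it’s worth, the simplified expression quoted above can be checked directly; plugging in a doubling recovers the familiar 3.7 W/m^2 figure. This is just the Myhre et al. curve fit, not a radiative transfer calculation:

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 forcing in W/m^2: the 5.35*ln(C/C0) fit from Myhre et al."""
    return 5.35 * math.log(c / c0)

print(round(co2_forcing(560.0, 280.0), 2))   # doubling from 280 ppm: ~3.71 W/m^2
print(round(co2_forcing(1000.0, 500.0), 2))  # same ratio, same forcing: ~3.71
```

Note that under this formula only the concentration ratio matters, which is exactly why the 500–1000 ppm question comes down to whether the logarithmic form still holds there.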

      • Rod – Maybe I misunderstand you, but I didn’t find anything new in your latest comment. GCMs utilize concentration/forcing relationships such as the one from Myhre et al to convert CO2 concentration to forcing. They utilize the climate sensitivity parameters they have generated to convert forcing to temperature change. The point about assumptions has been beaten to death by now. As Steve Mosher pointed out, even the smallest measurement involves assumptions (e.g., your thermometer is recording actual temperature), and many climate computations require some degree of approximation (including numerical solutions to differential equations), which assumes that the approximation is reasonably accurate. The original claim (far above) was that climate sensitivity values were assumptions fed into GCMs, or were based on arbitrarily assumed values for their calculation that were fed into the GCMs. That is incorrect, but the notion that assumptions in the broad sense are part of every calculation that involves areas of uncertainty is of course correct. It is not the same as claiming that a particular value is an “assumption”, and if that distinction is understood, I have no problem with invoking the word assumption.

Incidentally, who at RC insisted that the Myhre-based logarithmic formula breaks down at CO2 concentrations of 1000 ppm? It may be true, but I haven’t seen that claim, and I would not necessarily be convinced unless I saw the data or knew that someone with accurate knowledge of the area (e.g., Raypierre) was the one who said it.

      • Fred,

        the ‘logarithmic formula’ comes from the Beer-Lambert law for linear absorption. The only way it ‘breaks down’ is from a non-negligible contribution from nonlinear absorption of light. That means the higher order material response from the molecule begins to become accessed, ie two-photon absorption, etc.

        This is an intensity dependent process as the induced polarization in a gas of molecules can be expanded out in orders of the incident electric field strength. For higher incident field intensities, more of the higher order contributions of said expansion contribute to the overall macroscopic polarization of the gas.

        At the field intensities, gas pressures and temperatures normal in the atmosphere, however, no such processes should be occurring to the level that makes accounting for them necessary.

My guess is that whoever claimed that the Beer-Lambert law ‘breaks down’ was doing so on the grounds that at higher concentrations there is a smaller change in the light absorbed for any change in the concentration than at lower concentrations. As far as I understand such a statement, it’s basically rephrasing the definition of ‘logarithmic dependence’.
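The diminishing-returns behaviour described here can be illustrated with a toy Beer-Lambert calculation: the absorbed fraction 1 − exp(−kc) changes less and less as the absorber amount grows. The coefficient below is purely illustrative, not a real CO2 value:

```python
import math

def absorbed_fraction(c, k=0.01):
    # Beer-Lambert linear absorption: k*c is the optical depth along the path
    # (k is a hypothetical absorption coefficient, chosen for illustration).
    return 1.0 - math.exp(-k * c)

# The same-sized increase in absorber matters far less once absorption is strong:
low = absorbed_fraction(200) - absorbed_fraction(100)
high = absorbed_fraction(600) - absorbed_fraction(500)
print(round(low, 3), round(high, 3))  # ~0.233 vs ~0.004
```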

      • steven mosher

Fred, I may be wrong, but I think that GCMs would actually use a band model for the RTE. You can see some of the intercomparison work done between GCMs and LBL codes to check that the answers given in the GCMs comport with those given by LBL.

I have no clue what Rod is mumbling about with the 1000-2000 ppm numbers. Even if what he said were true, it would have no bearing on a discussion of doubling. Same with the silly comments about the accuracy of a formula (the log formula) at zero, where it doesn’t get used.

        tedious distractions from the main debate.

The most annoying thing to me in this conversation is that there IS a debate: a real debate over the range of sensitivity. Skeptics could join that debate and have the debate they want. Instead, they confuse the terms of the debate, deny the usefulness of the best tools we have, and then wonder why nobody wants to talk to them. And they make silly arguments about the meaning and importance of assumptions.

      • Steven – Yes, I believe the GCMs generally use band rather than LBL models because the latter are computationally impractical for extended intervals. Myhre et al also compared LBL with two band models and found adequate agreement.

Fred, it sounds like we’ve almost gotten past the semantics. Climate sensitivity is whatever the GCMs put out after massaging a whole series of inputs. My point was that (many of) the inputs are based on assumptions, so therefore is the output. I think you agree, though you put a different character on “assumptions.” It seems you would assess each assumption or approximation at each level, one on top of another, decide they were all very accurate, and continue the judgment that the science (model) is still irrefutable. I would agree that most are very good, but a few are based on very good conjecture; the overall result may rise to “very good” but is nowhere near irrefutable.

        My 1st simple example was the use of 5.35ln(C/C_0) to determine marginal forcing going from (say) 500-1000ppm: does the log function hold? The 5.35 factor? It’s a good bet that they do (at least reasonably close) but nobody knows for sure. Only some highly constrained lab experiments have looked at this. No one has ever observed it within our climate system.

        I have more to add (hold the applause ;-) ) but am on a borrowed computer and have to vacate.

        Later.

      • Dr. Curry,
How can time not matter in determining sensitivity to anything in a dynamic system that has many variables?
        The assumption that it does not fails to pass the smell test.

Hunter – technically, time does not matter. What matters is that the variables do constantly change over time. The earth may have had a sensitivity of 3 C ten years ago and only 0.3 C today (only as an example).

      • Rob,
        Technically why?

      • steven mosher

Hunter, you are misunderstanding the meaning of the word sensitive.

        I will make it simple.

        Replace the word sensitive with the word DELTA.

If you double CO2 and ONLY change that, what is the DELTA temp?

If you slam your accelerator to 0, what is the instantaneous response? 0 – your tires spin. Then your car starts to speed up, accelerating. These are TRANSIENT responses. When your car hits a top speed, say 130 mph, and stays there, you say the FULL RESPONSE to that forcing is 130 mph.

Now, it does not matter if it takes you 1 minute or 1 hour to get to 130 mph. That’s the full response to the forcing.
Before that time you have time-sensitive responses.

Finally, will the time it takes you to get the FULL RESPONSE matter? Why, yes it will. It will prevent you from doing stupid things like looking at short time windows and concluding anything about the full response.
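The car analogy maps onto the usual zero-dimensional energy-balance picture: a step forcing applied to a system with heat capacity approaches its equilibrium (“full”) response exponentially, so short windows mostly show the transient. All parameter values below are illustrative, not taken from any model:

```python
import math

# Zero-dimensional sketch: C * dT/dt = F - lam * T, so a step forcing F is
# approached as T(t) = (F/lam) * (1 - exp(-t/tau)), with tau = C/lam.
F = 3.7     # step forcing in W/m^2 (roughly a CO2 doubling)
lam = 1.23  # feedback parameter in W/m^2 per K (chosen so T_eq is ~3 K)
tau = 30.0  # response time constant in years (hypothetical)

def temperature(t_years):
    return (F / lam) * (1.0 - math.exp(-t_years / tau))

# Early on you see only part of the response; eventually the full ~3 K:
print(round(temperature(10), 2), round(temperature(300), 2))  # ~0.85 then ~3.01
```

The equilibrium value F/lam does not depend on tau at all, which is the sense in which “time doesn’t matter” for the final temperature change.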

      • steven mosher

Oops – I meant slam to full, not slam to 0, Hunter. Edit appropriately.

Steve, if we embrace your car analogy and apply it to 20th century climate, is it not the case that the “accelerator” has been depressed at all times during that period (anthro-CO2 was emitted in increasing quantities), yet the car’s speed has risen, then fallen slightly, then risen again, and is now (with the “accelerator” further to the boards than ever before) falling once more? Doesn’t this cast doubt on the precise identity and function of the pedal you are pressing? To extend your analogy, if I took a car out for a test and it behaved in this way, I would confidently say that the pedal I had been pressing, while it may have controlled the addition of some performance-enhancing additive to the fuel mix, was definitely NOT the primary means of increasing the speed of the car.

      • steven,
        Actually I do understand the term.
        The more I read the explanations of it by those who defend it, the more useless I think it is.
        To keep abusing the automobile metaphor- you made a huge assumption regarding the dyno test: that the transmission is automatic.
        Let us revisit the concept using a seven speed manual transmission with say a split-tail three speed differential and manually selected four wheel drive.
The accelerator would be only one part of getting to a given speed. And the speed/torque question would be massively more complex.
        I think this is a better way to look at how climate would work.
Punching the gas begs the question of what gear, what rear-end is chosen, and whether you are in 4×4 or 2-wheel mode.

      • Hell yes – let’s try it out on a 700 horsepower, turbo charged, 8 litre V12.

      • Chief,
        It is disappointing that so few people see what I am saying.

      • steven mosher

        “Steven also suggests the modelers don’t have the time to twiddle the knobs. In fact the modelers do do some twiddling (ala Monte Carlo) but, as I said above, not to nefariously force a result but to simply (and properly) test the model(s). Steven also missed my contention about the marginal forcing, which is that while we can get fairly close to estimating forcing factors in today’s 250-350ppm range we have no assurances that this holds accurately for the 1000-2000ppm range, other than some decent clues in a limited lab environment. (As a personal sidebar, I got beat up rather thoroughly at RealClimate over this point.)”

Monte Carlo? You are mistaken. In AR4 there were over 20 models run for a grand total of slightly over 50 runs. A good number of models were run only ONCE; some, like ModelE, were run 5 times. So there is no Monte Carlo in the sense that I know. At the cutting edge now some people are looking at stochastic physics for parameterizations. You can see the Newton Institute’s seminars on these types of approaches.

        With regard to marginal forcing it was appropriate for RC to beat you up. I won’t pile on.

The postdicting in figure 4 is amazingly accurate except for the 1930 to 1940 period. Why are all the red line squiggles missing after 2000?

  39. Surely the purpose (if any) of reasoning depends on the species of Reason.

    Predicate logic seems well-suited to resolving immediate problems, mainly predictive. It also provides an economic advantage of efficiency due to its reliable reproducibility. “Fruit out of reach. Climbing tree hard work. Stick is here. Stick is long. Hit fruit with stick; make fruit fall; collect fruit from ground. Eat.” Logic appears to be all about stomachs.

    The Principle of Contagion appears to be about avoiding injury: Fire hot. Rocks that touch fire get hot. Don’t touch rocks that touch fire.

    Motion at a Distance appears to have evolutionary roots in threat avoidance: Wave arms and jump up and down here, lioness jump out of tall grass there. Don’t wave arms and jump up and down near trees.

    The Homeopathic Principle may have its roots in early social reasoning, as it has primitive connections to concepts of family and ownership, territory and inheritance.

    The Law of Similarity.. No clue where that one’s from. Maybe it’s a primitive to other forms of reasoning, or purely organic.

However, as a group, the forms of Reason have several things in common: they are pleasurable; they are universal to some degree in all human cultures, independent of education or upbringing; they are obsessive pastimes often pursued for no apparent purpose or gain; they are closely tied to language; and perceived feats of reasoning impress onlookers and bring about respect, so Reasoning is important in human hierarchical systems of power. Odd, then, that so many politicians can’t reason their way out of a paper bag.

    It doesn’t take being right to be considered wise. Argument long ago ceased to literally bear fruit.

    If it had a purpose, it’s largely lost it, replaced by the good it can do the participants in the argument for the social power it imparts, or the pleasure they take in flapping their lips. So long as some few retain the ability to make productive argument and use Reason appropriately often enough for the imitators to be able to mimic them, argument will continue to have in large measure the purpose of enriching the speaker.

  40. Dr. Strangelove

    Dr. Curry,

    You said:
    “Separating natural (forced and unforced) and human caused climate variations is not at all straightforward in a system characterized by spatiotemporal chaos. While I have often stated that the IPCC’s “very likely” attribution statement reflects overconfidence, skeptics that dismiss a human impact are guilty of equal overconfidence.”

    I think in the absence of strong evidence for or against AGW, the skeptics have the advantage over the warmists. Why? Because in science, the burden of proof falls on those who make a positive claim. That humans are warming the planet is a rather extraordinary claim. That nature is warming (or cooling) the planet is a perfectly ordinary claim since human civilization is only 5,000 yrs old but the climate has been changing for billions of yrs. It can only be due to nature.

As Carl Sagan said, extraordinary claims require extraordinary evidence. You cannot say there must be life on Mars because nobody has ever proven there was none. The scientific method says it is perfectly normal to assume there is no life on Mars unless you find extraordinary evidence. I say the same is true of AGW.

    • Honestly – we are engaged in a great atmospheric experiment for which we have not the wit to determine the outcome. Seriously – the last time CO2 levels were this high was 10 to 15 million years ago and this is the result of anthropogenic emissions. Ipso facto – this is probably not our finest accomplishment. Why the heck is it an extraordinary claim that this is potentially a problem and at the very least we should do practical things to limit the experiment?

      It is extraordinary that you have the complacency to think that this is somehow in the ordinary course of events. It is OK for people to change the atmosphere and it needs to be proven that this is a bad thing?

      I think the world is mad.

      • Dr. Strangelove

        For most of earth’s 600 million yrs of geologic history, atmospheric CO2 was above 600 ppm without the help of man. Now CO2 is 390 ppm and you’re certain it’s due to humans. That’s an extraordinary claim. The complacency is that you accept it without extraordinary evidence.

      • For most of the Quaternary the levels have been 280 to 300 ppm – and we are rapidly heading for 600ppm. That the rise commenced post industrial revolution is an extraordinary coincidence. That there are isotope analyses that suggest a fossil fuel origin seems reasonable.

        No reasonable person disputes that the source is anthropogenic or that greenhouse gases are – well – greenhouse gases. It is a nonsense to suggest otherwise – although I am aware that there are some fringe views on both counts. I consider such views to be certifiably insane.

      • Dr. Strangelove

In the last 600,000 yrs, atmospheric CO2 fluctuated from 200 to 300 ppm. The last spike from 200 to 300 ppm began 1,000 yrs ago, long before the industrial revolution. Man could have contributed above 300 ppm but that is not a certainty.

The isotope analyses that suggest anthropogenic CO2 need critical re-analysis. Roy Spencer provided that in this link.

        http://wattsupwiththat.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/

        Even granting CO2 is man-made, it doesn’t follow that global warming is man-made. There are other possible natural causes. No reasonable person views natural causes as certifiably insane. That is a sure sign of fanaticism not rationalism.

      • Chief,
        You keep on that point about the great experiment, but you have a few assumptions that may be worth checking.
        - were things in the environment dangerous to life in the days of 600 ppm CO2?
        - is there any evidence that the climate was more dangerous in a 600 ppm atmosphere than the present era’s?
        I would love to see us using power sources that emit less CO2, all things being equal.
        But so far, not one policy at all promoted by the AGW community has done anything to lower CO2 increases.
        But the policies and procedures the AGW community pushes have the net effect of increasing money to the AGW community and increasing costs to tax payers and consumers for many things including food and power.
I can understand the application of the Hippocratic Oath implicit in your concern (first, do no harm), but I would suggest that if one looks objectively, it is the AGW community that is violating the principle at this time.

      • Hunter,

It is worse than useless – AGW has been a monumental distraction for a generation when we could have stepped up useful actions – the multiple paths and multiple goals of the recent Hartwell thread.

        I have 2 arms to my argument. The first is that we are changing the atmosphere. That seems abundantly evident despite the quibblings of Spencer and Dr Strangelove – or how I learnt to stop worrying and love 9 billion metric tonnes/year of CO2 emissions.

As for global warming – if he had been at all interested he might have realised that I think it quite likely that the planet is cooling for a decade or 3 more at least. C’est la vie. This indeed involves the 2nd arm of my argument – that we have not the wit to understand the outcome. Are you suggesting that more CO2 necessarily equals a warmer planet? Tsk tsk – I expect better of you. That is linear thinking in a non-linear world.

        Cheers

      • Chief,
        If, IF, all things were unchanged, and only CO2 was increasing, then the chances are there would be warming.
        But all things are not unchanged.
        The biosphere increases with CO2, the ocean buffers CO2, etc.
        If the CO2 is produced by something sooty, then carbon black settles out all over the place, including ice, increasing melting rates.
        and so forth.
        Now the lovely Mrs. hunter needs my attention, so I will get back to this repast later.
        But think of how trivial 9 gt per year really is in a system the size of Earth.

One of the simpler solutions involves reducing black carbon and tropospheric ozone – with tremendous health, environmental and agricultural benefits.

I am not overly concerned with warming, but sensitive dependence on small initial changes in dynamically complex Earth systems implies that there is an appreciable risk from greenhouse gases – one that I can’t quantify. The risks theoretically include the potential for tens of degrees of cooling in places within months. Where do you live?

I don’t want to overplay the risk but rather the opportunities, of which I have spoken ad nauseam. They include biological conservation and restoration; improvements in agricultural soils by adding carbon; and ensuring that people have access to health, education, safe water and sanitation, and that they have opportunities for development through free trade and good corporate governance. The latter would do more than anything else to stabilise population. Simple and relentlessly pragmatic environmental and humanitarian objectives.

        There are lots of people with another agenda and these need to be ruthlessly crushed. As I say a monumental distraction by people who think government can solve things through legislation – but don’t tell Bart I said that. All I am advocating is common risk avoidance involving humanitarian and environmental progress.

        What can be wrong with that?

      • Chief,
        Actually you and I are very much on the same page.
        Too bad the climatocracy and their followers are not.

  41. Dr.Pielke Sr. made some interesting statements on climate sensitivity

    ” The use of the terminology “climate sensitivity” indicates an importance of the climate system to this temperature range that does not exist. The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and its interpretation is even unclear…”

    ” This view of a surface temperature anomaly expressed by “climate sensitivity” is grossly misleading the public and policymakers as to what are the actual climate metrics that matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant for any climatic feature of importance.”

All climate models are made on assumptions fed into them. So they can’t be called “experiments” and their output proclaimed as “evidence”.

    Issues like water vapour feedback and cloud behaviour are very poorly understood.

The post below by Dr. Pielke Sr., on a paper about water vapour feedback behaviour, is worth reading to see how poor the understanding of this aspect is:

    http://pielkeclimatesci.wordpress.com/2011/04/25/water-vapor-feedback-still-uncertain-by-marcel-crok/

    • Venter

      Having read Pielke but not being a climate scientist or computer modeler, I can only comment as an observer, rather than as an “expert”.

      But I’d say you’ve described it pretty well.

      I think one could summarize (possibly in an oversimplified way):

      The GMTA as reported by HadCRUT, etc. is not a representative indicator of our planet’s net energy balance to start off with.

      The climate sensitivity range cited by IPCC is essentially an estimate based on model simulations with many assumed inputs, and the process by which it was derived is quite tortuous.

      But the important point is that it is not based on empirical data based on physical observations or reproducible experimentation, but rather on theoretical deliberations backed by some interpretations of selected paleo-climate data from long ago.

      And Steve McIntyre’s challenge holds: until we have an “engineering quality” confirmation of the IPCC claim that the 2xCO2 climate sensitivity is 3C (on average), we do not know what we are talking about (and certainly should not base any “policy decisions” on this IPCC claim).

      All the talk about “post-normal”, or “post-modern” science, “precautionary principle”, etc. is just diversionary smoke to avoid the main uncertainty here. Let’s settle the basics first.

      Max

Max – I believe that you will find that there are currently no climate models that accurately forecast the last 10 years. The fact that CO2 has risen at very close to the predicted rate over that period, but temperatures did not, would seem to indicate that there are variables not yet understood. As I understand it, the models of 10+ years ago did “hindcasting” pretty well, but were terrible at actually predicting the future.

      • steven mosher

Max – I believe that you will find that there are currently no climate models that accurately forecast the last 10 years.
        #######
That is actually a testable claim, and it’s false. There were models that had trends which fit the observed trends. The way to test this is to pick a start date that matches the date when the runs started, then look at all the runs of all the models, then compare the trends in the runs versus the actual earth trends.

        http://rankexploits.com/musings/wp-content/uploads/2011/04/Histogram.jpg

As you can see, MOST model runs were high, some were low and some were very close. You also need to define ACCURATE. That’s not a purely scientific question, as science doesn’t have an answer to how much confidence one needs: 90%? 95%? 99%? 99.99%? That is a choice based on tradition, not logic. What we see is that on average “models” overestimate the trend by a little bit. Not wrong.

        “The fact that CO2 has risen at very close to the predicted rate over that period, but temperatures did not, would seem to indicate that there are variables not yet understood. As I understand it, the models of 10+ years ago did ‘hindcasting’ pretty well, but were terrible at actually predicting the future.”

        Huh? There are a couple of things.
        1. The CO2 response is lagged. You slam the pedal down and it takes decades to see the response.
        2. The models used a solar forcing (TSI) that ended up being high.
        At the time the models ran, most of them used a straight-line forecast for the future based on the 2000 figure. Well, TSI went down.
        I’m also pretty sure they missed the methane forecast and were high on the CO2 forecast.

        Again, unless you define what you mean by forecast skill, words like accurate and terrible are meaningless.

    • steven mosher

      Venter.

      “All climate models are made on assumptions fed into them. So they can’t be called ‘experiments’ and their output proclaimed as ‘evidence’.”

      This is largely a misunderstanding. Let’s take a simple engineering example.
      This is largely a misunderstanding. Let’s take a simple engineering example. Before I build a plane I have to model its flight control system. To do that I rely on a vast array of computer models. These models represent laws of physics, sometimes in a simple way, sometimes in a more complex way. These models never, ever match a real-world plane. EVER.

      Anyway, so I build a model of the flight control system. It’s a model of actuators and surfaces. For the surfaces I have models of forces: deflect the flap 10 degrees and the model predicts a certain amount of force. It’s never, ever right. There are things I cannot model, nasty stuff like vortices. I can model them in gross ways, but not in super fine detail. Still, we model how an ideal system would respond. We do computer experiments. We feed in assumptions. We play around until we find a range of numbers for the gains in the system. It’s bunches of assumptions: assume the surface models are correct, assume a standard-day atmosphere, assume the engine responds as it should, tons of assumptions. It’s assumptions all the way down to the laws of physics, which are assumptions themselves.
      And there is uncertainty. But these experiments give us the best understanding we have of how a REAL plane will respond. Then we build the thing we modelled and we fly it, and we see how good/bad our modelling is. The modelling is EVIDENCE. It’s not physical evidence. It represents our best understanding of how the system will respond. No one who works with models would suggest that it’s physical evidence. No one would fly a test plane if the model said it would crash. No one would put passengers in the plane without testing it first, in the real world.

      But with climate we have the following. We have models. Limited, but the best we have today. Those models tell us that IF we double CO2, the plane will crash. Your answer is to question the model and load the passengers.
      Not good engineering. Of course it’s even more complicated because of the cost of not loading passengers. But that is a different question.

      • Sorry to just jump in here, but I’ve been following this for a couple of days and I have some questions if anyone would be willing to answer.

        Would we all agree that climate sensitivity as represented in the models is just an estimate?

      • steven mosher

        Yes, they are just estimates. Planck’s constant is also an estimate, and it has a standard uncertainty. 2+2=4 is not an estimate.

        There is no epistemic difference between estimates of sensitivity and estimates of other science claims, EXCEPT the range of uncertainty. When that range is really tiny we call it a fact, but it’s not really. So it’s rather silly to make a huge issue of the fact that sensitivity is an estimate. All science is estimates. Some estimates are super narrow.

        The issue is this: at one end of the uncertainty range we have a manageable problem; at the other end we have a disaster.

        Averting the disaster is costly. Mitigating the manageable problem is less disruptive.

        Arguing about whether sensitivity is an estimate, when ALL science is an estimate, is silly, especially when the important question is “how good is the estimate?”

        That is the real question, the important question, the question where skeptics could actually join the debate. Pointing out that sensitivity is an estimate is trivially true; all science is. Now move on to the real question: what is the uncertainty, and how was it calculated?

      • Steven

        Sensitive, aren’t we? You are the one making a big issue about estimates, it seems; I just asked a question. If you want to answer my next question, fine; if not, fine.

        What is the range of the estimates used for climate sensitivity in models?

      • Jerry – Climate Sensitivity values are estimates, but with a confidence level attached to them. Typically, the estimated range is 2 – 4.5 C per CO2 doubling as the 90 or 95 percent confidence interval, signifying that based on available data from a very large number of studies, a conclusion that the true value is between 2 and 4.5 C is 90 or 95 percent likely to be correct. We don’t know the true value, and the confidence interval may change with new data, but even if the true value is outside that interval, we have not been “wrong” because the interval includes that possibility at the 5 to 10 percent level.

        For a reasonable discussion of climate sensitivity, see the 2008 review by Knutti and Hegerl, as well as AR4 WG1 chapters 8 and 9 plus references. More recent reports have appeared since then, including those by Clement et al, Lindzen/Choi, Spencer/Braswell, Dessler, and I believe by Lear et al. Do they expand, contract, or shift the 2 – 4.5 C range? I’ve read them, and I don’t see any reason to believe that they do – in fact, some are rather irrelevant to multidecadal temperature responses to persistent CO2 forcing, because they address very short term transient changes due mainly to ENSO variations (Dessler, Lindzen/Choi, Spencer/Braswell).

        We clearly would like to estimate climate sensitivity more precisely, but this can’t be achieved in the blogosphere by selecting (“cherry-picking”) references that support a particular value. That has not been common in this blog but has been a standard feature in some others. In the meantime, it is reasonable to proceed based on the canonical 2 – 4.5 C range, with any policy decisions derived from both the available scientific evidence and value judgments outside the realm of science.

      • Fred

        Thanks for your answer. I would point out that we do not elect scientists to make policy decisions, though they seem to have taken that role upon themselves, which is not only arrogant but rather frightening, and is obviously shading their objectivity.

        Would I be wrong in saying that climate models are basically an experiment? Or is there a better term that a believer would use to describe climate modelling?

      • The guys who wrote and ran the models are real confident that they handled the inputs and algorithms accurately — or damn close. Is that how it works?

      • RodB –
        The guys who wrote and ran the models are real confident that they handled the inputs and algorithms accurately — or damn close. Is that how it works?

        According to a previous discussion on this forum, that’s how it happens in climate science. But then they’re “professionals”, whatever that’s supposed to mean.

        I’d have been fired and escorted out the door by an armed guard if I’d tried to sell that stuff where I worked. And rightfully so.

      • Excellent! How good is the estimate of sensitivity?

        If you consider that 1999 to present may be a legitimate trend (yes, I know it is not yet a trend, but when making predictions one might need to assume that what appears to be a trend is a climate-shift-induced trend, and that what is obviously lower than expected by the models could be a trend), that would indicate less than 3 C sensitivity. Based on the paper you linked earlier, the hindcast was good except for the period 1930 to 1945. That may possibly indicate an unexpected unforced variation. Following the alleged climate shift circa 1999, that model also seems to overestimate sensitivity, due to the not-yet-a-trend trend?

        I believe that less than 10% is a common model estimate of unforced variability. If that weird not-yet-a-trend turns into a “real” trend, how good is the estimate?

      • Dallas,
        The AGW community claims weather events as *proof* of their claims.
        The AGW promoters had no trouble claiming the 1980–1998 period as *proof* of CO2 doom.
        We should have no compunction at all in claiming a >10 year trend as a… trend.
        The fun thing for years now has been to observe the AGW believers’ weather watch, claiming an amazing litany of warm, cool, cold, hot, wet, dry, calm, stormy, snowy, warming, cooling as *proof* of their great enlightenment.

      • Hunter,

        Yeah, it is pretty funny. Then they scream for action, but only seem to shoot down what is reasonable under the circumstances. Lots of calls for action but few realistic solutions offered.

      • hunter –
        Some years ago (~2003?) I did some calculations based on a 5-year trend, which is what RC was using. But when I went into RC to check some numbers, I found that they’d changed to an 8-year trend. Since then, of course, the requirement has been increased to 10 and then 15 years. And now they don’t want to talk about anything less than 20 or 30 year trends. Isn’t it amazing how standards change when the data doesn’t fit one’s preconceptions?

      • Jim Owen,
        Exactly.
        Both camps are moving, but the believer camp moves away from reality while the skeptic camp moves towards it.
        5-8-15-20-infinity and beyond for the believers.
        Skeptics embrace CO2 as a GHG and warming over time, yet point out that AGW policies are failures and that the predictions are not working out so well from an accuracy point of view.

      • Fortune telling has always been a “chancy” business. And the practitioners have rarely, if ever, gotten the respect they think they deserve.

      • steven mosher

        A ten-year trend is a ten-year trend. It has a confidence bound that is large. There is not very much you can tell from that, except in extreme conditions.

      • No argument there, but it wasn’t me that started with a 5 year trend when it was convenient for their argument and then moved the goalposts when it wasn’t.

      • steven mosher

        Yes, Jim, there is much silliness about short trends on all sides. I try to stay in the middle and avoid making silly comments about trends. Plus, they are not really talking about trends; they are talking about least-squares fits, which are not exactly the same thing as a trend. But that’s an ugly can of worms we need not open.
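        The point that a short trend carries a wide confidence bound can be made concrete with a least-squares fit and the standard error of its slope. This is a toy sketch with invented numbers: it uses a normal approximation (z = 1.96) for the interval and assumes uncorrelated residuals, whereas real monthly anomalies are autocorrelated, which makes the true interval wider still.

```python
import math
import random

def trend_ci(series, per_year=12, z=1.96):
    """Least-squares trend per decade with an approximate 95% interval.

    Normal approximation for the slope standard error; residuals assumed
    uncorrelated, so this understates the width for real climate data."""
    n = len(series)
    t = [i / per_year for i in range(n)]
    tbar, ybar = sum(t) / n, sum(series) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, series)) / sxx
    intercept = ybar - slope * tbar
    rss = sum((yi - (intercept + slope * ti)) ** 2 for ti, yi in zip(t, series))
    se = math.sqrt(rss / (n - 2) / sxx)                      # slope standard error
    return slope * 10, (slope - z * se) * 10, (slope + z * se) * 10

random.seed(1)
# Ten years of monthly anomalies: a 0.15 C/decade trend buried in 0.1 C noise.
series = [0.015 * (i / 12) + random.gauss(0, 0.1) for i in range(120)]
trend, lo, hi = trend_ci(series)
print(f"trend {trend:+.2f} C/decade, approx. 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

        Even under these generous assumptions, the interval on a ten-year fit is wide relative to the trend itself, which is the sense in which short trends tell you little.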

      • Steven,

        I have referenced McWilliams (2007) before – frequently. The models are non-linear. They have ‘irreducible imprecision’ due to ‘sensitive dependence’ and ‘structural instability’. But unless you comprehend the underlying theoretical physics these are just words without a scientific meaning.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’

        Irreducible imprecision in atmospheric and oceanic simulations
        James C. McWilliams*
        Department of Atmospheric and Oceanic Sciences and Institute of Geophysics and Planetary Physics, University of California
        http://www.pnas.org/content/104/21/8709.full.pdf

        The answer to your question is that we do not know what the uncertainty is. We have opportunistic ensembles but lack the methodically designed model suites needed to systematically explore the extent of irreducible imprecision.

        Tim Palmer – head of the European climate computing centre – is of the opinion that these types of estimates are not theoretically justifiable at all. The best we can hope for are estimates as probability density functions.

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)
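        Palmer’s ρ(X,t)dV formulation has a simple empirical counterpart: given an ensemble of forecasts, the fraction of members falling inside an interval estimates the probability that the true value of X lies there. A minimal sketch, with a purely hypothetical ensemble (the 14.8 and 0.4 are invented numbers, not any real forecast):

```python
import random

random.seed(2)
# Hypothetical 1000-member ensemble of some climatic variable X at time t,
# centred on 14.8 with a 0.4 spread (illustrative numbers only).
ensemble = [random.gauss(14.8, 0.4) for _ in range(1000)]

def prob_in_interval(samples, lo, hi):
    """Empirical estimate of P(lo <= X < hi) -- the integral of rho(X,t) over dV."""
    return sum(lo <= x < hi for x in samples) / len(samples)

print(prob_in_interval(ensemble, 14.5, 15.0))
```

        The point of the probabilistic framing is that the forecast is the whole density, not any single member of the ensemble.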

        In fact I would define dynamically complex systems – examples being weather, climate and models – as being epistemologically different from the linear systems we are comfortable in thinking about.

        Indeed – in a dynamically complex environment – sensitivity is one of the concepts that has no essential justification.

      • Chief

        I need your brain here.

        Randomly doodling about with that horribly addictive WoodforTrees thing, I plotted detrended CO2 levels by parts based on ENSO half-years, and found a correlation between temperature and CO2 levels.

        Temperature trends led CO2 trends ever so slightly and with fair correlation when CO2 was detrended to obtain best fit to the temperature line.

        Please disabuse me of the illusion that I’ve stumbled on anything other than an optical illusion and confirmation bias.

        Or at least assure me what I’m saying is meaningless mumbo jumbo.

        Thanks

      • I preferred you with a moustache and a cigar. ‘A child of five would understand this. Send someone to fetch a child of five.’

        Anthropogenic carbon fluxes are what – 3% of the total? Natural fluxes are primarily biological in origin, and these increase with higher temperature. There is a biological law with a name and a formula and everything. I just forget what it is right now. It was in the great thick book I had to buy for ecol901 – which I actually read, and it stuck somewhat. Amazing. I’ve still got the book at home.

        So I’m not surprised that temperature led CO2 – if what you say is true. I would hesitate to conclude that 9 billion metric tonnes of anthropogenic CO2 wasn’t in the mix somewhere.

        Hope that sets your mind at rest. Take 2 aspirin and see me in the morning.

        Cheers

      • steven mosher

        Tim Palmer – head of the European climate computing centre – is of the opinion that these types of estimates are not theoretically justifiable at all. The best we can hope for are estimates as probability density functions.

        ######
        I would agree with Dr. P. He is also a pleasant fellow. The difficulty is that to make sense of the system metric for a lay audience you almost always have to resort to a linear-type system, just to show the basics: that sensitivity isn’t assumed, that you can estimate it, that it’s different from the instantaneous response, the transient response, etc.

        So folks who want to discuss the Palmer issue first have this struggle with people who don’t even get the basics. Those are usually definitional fights, or fights over “we know nothing”, rather than the REAL debate, which is where Palmer lives.

        I’ve told people before: if you want good arguments against models, read or watch Palmer. But these arguments are not anti-model nonsense; they are real, practical, state-of-the-science debates. Basically, the best arguments about the models are WITHIN the science, not outside it.

        In short, the ‘models suck’ meme is a cheap, shabby, weak attack.

        I note that you don’t make that argument.

      • It is not that the models suck; it is that the models are not able to predict the future, something the people who believe in them just cannot seem to grasp. It is tremendously sad really, but not nearly as sad as the harm they are inflicting on mankind.

      • I also quoted Professor Ole Humlum on the appropriate use of models as tools to explore the physics of the system, and their inappropriate use as predictions of future climate. At the latter, they do suck. Or at least, there can be no confidence placed in the outcomes, for the reasons I cited earlier.

        Can I assume by your approval of Tim Palmer that you have some grounding in dynamical complexity in climate? It is implied in the use of the term phase state of a finite volume dV – and of course elsewhere in Palmer’s work.

        As McWilliams says – it places limits on what is theoretically knowable and these limits are well understood on this site. Apart from by Fred I might note.

        The essential policy problem of anthropogenic carbon emissions is otherwise. As is known here – my native prudence suggests limiting the great atmospheric experiment for which we have not the wit to know the outcome. On the other hand, the planet is likely cooling for the next decade or 3 at least – as keeps popping up in the peer reviewed literature.

        There is a quandary here – and a solution. I highly recommend having a quick look at the multiple paths and multiple objectives in the recent Hartwell thread.

        Cheers

      • If I completely overlook the obvious, I know I’ll always be able to turn to Chief.

        The model of treating ENSO like a reset on an overclocked system is visually compelling but so overwhelmingly complex with the tools I have at hand – it appears to say that with each La Nina or El Nino, rising CO2 levels remain a driver of sea surface temperature rise, but slightly less so with each ENSO, while SST remains a driver of CO2 levels at a pretty persistent level.

        While you +/-AGW-types expect some logarithmic relationship between CO2 rise and SST (that’s how I stumbled on this, trying to detrend by parts to simulate the log trend, but the ENSO’s were such powerful attractors it seemed worth pursuing), this appears to be a more than logarithmic trend, so may reflect feedbacks or other negative ENSO-tied step trends.

        Do successive ENSOs increase particulates more? Decrease humidity? Does storm intensity rise? Probably that last one.

        If we have another longish ENSO-less period, will SST’s shoot up again?

        Can we still have ENSO-less periods?

        Then again, too imprecise a tool, not worth chasing down. This is likely all only artifact of the graph.

      • Steven, you left out that little step called the wind tunnel – real-world validation of where the model was screwed up – something we can’t do with climate.

      • steven mosher

        The wind tunnel won’t help you very much in flight control design, especially with handling qualities and a whole host of other interesting things like high-angle-of-attack handling, etc. But then you knew that. The point is what I said: models provide evidence. It’s not physical evidence. And just so you know, a wind tunnel IS A MODEL. So you basically missed the point again. A wind tunnel and the model in it give you an idealized glimpse of what a real plane will do; it’s not the real thing. So, to your point: you argued that models don’t give evidence.
        In fact they do. Computer models do; wind tunnel models do.
        That evidence is used to guide decisions. It’s not the best evidence, but it’s evidence. Here: I have a stupid model that says a human can walk 4 miles an hour, on average. Let’s build a model of how far a human can walk: D = 4*t. Really simple model. Note it doesn’t have terrain in it, so it would give the wrong answer for uphill or downhill walking. Now, using that model, can you answer the question: can I walk across the United States in an hour? Of course; the model, crude as it is, can give you evidence about a claim. Is that evidence physical?
        Nope. Do you trust what the model says? Do you think terrain could make a difference? Simply, you made an imprecise statement about what models can and cannot do. Sharpen your language and move on.
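        The walking model above is simple enough to write down directly, and doing so makes the point concrete: even a crude, terrain-free model can supply evidence against a claim. The 2,800-mile coast-to-coast width is a rough assumption for illustration, not a precise figure.

```python
# The crude walking model from the comment: 4 mph on average, terrain ignored.
def distance_miles(hours, speed_mph=4):
    """Distance covered in `hours` at a constant assumed walking speed."""
    return speed_mph * hours

# Claim to test: "I can walk across the United States in an hour."
# Rough assumption: the US is about 2,800 miles wide coast to coast.
US_WIDTH_MILES = 2800
print(distance_miles(1))                       # 4 miles in one hour
print(distance_miles(1) >= US_WIDTH_MILES)     # the model says the claim is false
```

        The model’s output is not a physical measurement, yet it settles the claim, which is exactly the sense of “evidence” being argued for.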

      • Now you’re groping. There’s a world of difference between a computer model and a wind tunnel. The wind tunnel can be made as accurate as one wishes through the use of dynamic similarity. The computer model is still just an approximation of unknown accuracy, depending on exactly what you’re trying to get out of it. As Lucia said, the computer model does a pretty good job on an airplane flying forward and level; it does a much worse job on an airplane flying sideways. The wind tunnel doesn’t care. It’s a scalable analog, if you know how to do the scaling.

      • steven mosher, I agree with the other posts that your equating physical lab models with computer simulation models is at best defensive semantic hand-waving and at worst flat wrong and misleading. Try to convince test pilots of that. Designers still breadboard electronic circuits despite SPICE runs, which have physics and mathematics accuracy and certainty magnitudes better than GCMs.

        Nonetheless, I do agree with the thrust of your post. GCMs are indeed helpful evidence, and I didn’t mean to imply otherwise. What they are not is ironclad, irrefutable, near-iconic evidence, as is pushed by many in climate science circles.

        I’ve deduced that “steven mosher” and “steven” are different folk; is this correct?

      • Mosher,

        The fundamental assumption you make is wrong. The model of a plane and its flight characteristics is an engineering model which has undergone engineering audit and extremely precise and repeatable levels of testing. And it has withstood practical tests, from wind tunnel tests to actual flight tests, and performs as predicted. The technology and models constantly evolve and develop, based on one rigid criterion: sound, testable hypotheses backed by empirical performance. And the airline and aircraft industries are willing to admit mistakes, admit when they went wrong, take corrective steps, improve their product and deliver results.

        Please don’t even compare climate models with aircraft industry models,
        as the comparison is a joke. The climate models are not even kids’ toys compared to aircraft industry models, engineering-level audits and performance.

        Aircraft industry know what they are doing. They are honest and follow the scientific method.

        An aircraft built with the science displayed to the equivalent of your models wouldn’t even take off.

  42. What I get from this argument so far is:

    The climate sensitivity in models is determined by a series of equations based on laws, theories and hypotheses. This leads to the equation:
    Law x Theory x Hypothesis = Hypothesis

    The models are not adjusted to change the climate sensitivity; rather, they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations. The typical inputs adjusted to explain these differences are things not well understood, such as aerosols, since these allow a wide range of adjustments while still remaining within their wide range of derived values.
    These adjustments make perfect sense if the climate sensitivity as derived by the model is accurate. If that value is not accurate, these adjustments make an inaccurate value appear accurate. Am I keeping up so far?

    • “The models are not adjusted to change the climate sensitivity, rather they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations.”

      Steven – I think that’s correct to an extent, but within limits, with aerosol forcing the best example. Models generate their own individual climate sensitivity values. Before being forced with a changing condition (e.g., rising CO2), they must be calibrated to the starting climate, which involves some tuning of model parameters so that the model will reproduce the existing climate. The model is then run with the imposed forcing, and whatever the result, the modeller must live with it – he or she cannot go back, change the parameters, rerun the model, and then publish only the “corrected” results. If modellers could do that, their data would match real world observations better than they do.

      On the other hand, temperature responses to forcing require knowing both the climate sensitivity parameter (an unchanged attribute of individual models) and the forcing that the parameter translates into a temperature change. Some forcings are known to reasonable accuracy (e.g., the 3.7 W/m^2 response to doubled CO2 – estimates of this value have changed over the years but as the comparisons between observations and the estimates have multiplied, it seems unlikely future revisions will result in dramatic further changes). Others, however, are less well known, and so disparities between model outputs and observations legitimately raise questions as to whether the forcings utilized in the models were accurate. Aerosol cooling has been one of the more vexing challenges in this regard. If the forcing attributed to it is wrong, the model output will be wrong, and so modellers have tested a variety of forcings to determine which yields the most accurate model output, and have reported both the better and poorer matches. What they have also done, however, is attempted to use observational data on aerosol optical depth, as well as model-based calculations of indirect aerosol effects on cloud formation and persistence, to ensure that their applied forcings are not wildly inconsistent with reality. We know from extensive data on solar transmittance to the surface during the 1950s through 1980s that aerosols exert substantial cooling effects, and so a particular choice of forcings does not represent the invention of a fictional cooling effect to make things “come out right” but rather an effort to assign a quantitative value to a real cooling. Considerable uncertainty is involved. On the other hand, the magnitude of the observed “dimming” appears sufficient to exclude a null or warming role for aerosols. 
      This in turn will tend to exclude very low values for climate sensitivity to CO2, although it leaves a wide range of values as remaining possibilities, including the 2 – 4.5 C canonical range estimated from a large multiplicity of studies – a range that has changed little in recent years despite occasional papers reporting values outside it in either direction.

      If we can acquire a more accurate picture of aerosol forcing, the range of compatible climate sensitivities will be reduced. Unfortunately, the recent loss of the Glory satellite will delay that objective.

      • Oops – did I make an italics error?

      • Fred, I don’t really see where you are disagreeing with me regarding aerosols. The fact is the best we know leaves a lot of room for adjustments. It was a shame about Glory.

      • steven mosher

        Yes; in fact, if you read my other comments to people concerned about tuning, I refer them to chapter 9 of AR4 (the supplemental information, if I recall; it’s been a couple of years) to look at how aerosol tuning is done. The aerosol issue is one of the reasons lukewarmers exist.

    • steven mosher

      Law x Theory x Hypothesis = Hypothesis

      ############
      No. All laws are themselves contingent hypotheses. Every last bit of science is contingent: not true by definition, not mathematically true. Contingent. It all could be wrong. 2+2=4 is a mathematical truth; you cannot imagine it being otherwise. F=MA is not a mathematical truth. It is taken as true in order to DO THINGS. It is useful for some things and less useful for others. I can imagine a world where F does not equal MA. To be sure, we are always taking certain parts of science and holding them to be more certain than others. That is, we have certain statements in science which we choose to accept in order to see what follows from them. If I choose to accept that F=MA, then I can make a prediction: take a mass, accelerate it, and the force should be MA, ±e. After a while, people stop thinking of F=MA as a contingent truth. They call it a law. That means one thing: they won’t waste time trying to disprove it. But it’s still contingent, and someday somebody might suggest that it’s not really true and revamp a bunch of mechanics.

      “The models are not adjusted to change the climate sensitivity, rather they are adjusted to explain/eliminate differences between the hypothetical values derived and the observations. The typical input adjusted to explain these differences are things not well understood such as aerosols since these allow a wide range of adjustments while still remaining in their wide range of derived values.”

      read chapter 9 AR4 for a better explanation of tuning aerosols for attribution studies.

      “These adjustments make perfect sense if the climate sensitivity as derived by the model is accurate. If that value is not accurate these adjustments make an inaccurate value appear accurate. Am I keeping up so far?”

      Not really. The climate sensitivity exhibited by the models runs from 2.1 to 4.4 C.
      Other estimates, from observations and paleo data, are much broader, say from 1.5 to 6 or 10 C. One could argue (Hansen does) that the models are conservative.

      Here’s an example: Hansen’s ModelE, his GCM, estimates the sensitivity at 2.7 C; Hansen’s paleo work estimates it at 3 C. Hansen thinks the paleo work is more solid than the model estimate.

      Put another way: if you criticize the models for having too high a sensitivity, you still have the problem that the other, stronger forms of evidence suggest higher values. You’re attacking the weakest evidence. Bad debating approach.

        The determination of law vs theory vs hypothesis designates a level of confidence in their output. In this way they are definitely different.

        I posted a very good reference on aerosol modeling earlier. I think if you examine it you will find it more complete than AR4.

        It depends on what the purpose of the debate is. If it is to show that the best that can be done is only as good as the weakest link then the weakest link is what should be pointed out. If there is some other purpose to the debate then it would depend on the purpose as to what the best debating technique is.

        As far as climate sensitivity goes, I see no reason to assume it is higher than current measurements indicate, as per “why hasn’t earth warmed as much as expected”. Of course, I didn’t mention climate sensitivity in my question, so this was an answer to a question left unasked, but I am perfectly willing to share my views on this topic also.

      • I should say I didn’t mention climate sensitivity in terms of declaring it too high or too low as modeled.

      • steven mosher

        The problem is that you cannot use current observations to estimate sensitivity. Why? It’s very simple.

        Sensitivity is defined as delta T AFTER sufficient time has passed for the system to respond to the forcing.

        Again, with my car experiment: you want to know the DELTA mph of applying full throttle. You stomp on the accelerator. INITIALLY, for a nanosecond, the car doesn’t move; you don’t measure delta mph then.
        The car speeds up. You can measure, but you don’t see the FULL EFFECT in a short time period. Then the car hits top speed, 100 mph,
        and it stays there. That’s your steady-state response.

        So current observations dont really help you that much, you have to let the full effect play out. MORE IMPORTANTLY, in current observations OTHER VARIABLES are not constant.

        BY DEFINITION sensitivity to doubling means everything else stays the same. So, that is why you cannot SIMPLY use observations. You need lots of assumptions (uncertainty) to reduce the observations to a form that can be used for sensitivity questions.
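The transient-vs-steady-state distinction above can be sketched with a toy zero-dimensional energy-balance model. This is not any published model; the feedback parameter, heat capacity and forcing below are illustrative assumptions chosen so the equilibrium response works out near 3 K:

```python
# Toy zero-dimensional energy-balance model: dT/dt = (F - lam*T) / C.
# A constant forcing F is switched on at t = 0 and the temperature
# anomaly T is stepped forward with a daily forward-Euler step.

F = 3.7       # W/m^2: forcing from doubled CO2 (standard estimate)
lam = 1.23    # W/m^2/K: feedback parameter (assumed; gives ~3 K equilibrium)
C = 4.2e8     # J/m^2/K: heat capacity of a ~100 m ocean mixed layer (assumed)
dt = 86400.0  # time step: one day, in seconds

def warming_after(years):
    """Temperature anomaly (K) after `years` of constant forcing F."""
    T = 0.0
    for _ in range(int(years * 365)):
        T += dt * (F - lam * T) / C
    return T

# A decade of observations captures only part of the response;
# the full (equilibrium) response F / lam emerges much later.
print(warming_after(10))    # transient: well below equilibrium
print(warming_after(500))   # essentially the equilibrium F / lam ~ 3 K
```

With these assumed values the ten-year run shows only about 60% of the eventual warming, which is the point of the car analogy: measure too early and you see the transient, not the sensitivity.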

      • steven,
        With all due respect that is an example of circular reasoning.

  43. No model, however sophisticated, can predict climate change, for a number of very simple reasons:
    -CO2 is only a very minor factor of forcing.
    -Solar activity oscillations leave an imprint but not commensurate with major changes such as the MWP, LIA, 1950-80 decline, 1980-2000 rise.
    - Natural forcing is not supported by cyclical regularity; AMO, PDO, ENSO etc. have no reliable periodicity.
    We just have to accept the fact that long-term climate changes are not predictable!

    • So, uh, you wouldn’t expect another glacial period in tune with the Milankovic cycles?

      • No, that is not what I meant. There are certain natural causes affecting climate to which science is unable to assign a ‘timeline of future occurrence’, hence no viable predictability.
        Milankovic cycles, like the other well-known orbital parameters, are forward calculable.

    • steven mosher

      They certainly are predictable. The question is HOW ACCURATE and how useful the prediction is.

      I predict that by 2100 the average temp will be between 0C and 28C.
      Right now it’s 14C. I just made a prediction, therefore it is possible to make one.
      The question is how accurate and useful that prediction is.

      That question is not answered or addressed by anything you have ever said.

      • Mr. Mosher
        -Since it happens to be a science discussion, I referred to the possibility of a prediction based on scientifically recognised facts (not the crystal-ball kind), which is likely to materialise.

        That question is not answered or addressed by anything you have ever said.

        -I am flattered that you keep a record and review of ‘everything I ever said’, but of course you are entitled to your views and opinions, however wrong they may be.

        Crude methods of indoctrination work with many, subtle ones with more, but a free-thinking mind is impermeable to either.

  44. I wonder how the models of 10 years ago did on estimates of climate sensitivity.

    We have a lot of data on actual forcings over the last 10 years.

    The amount of black carbon.
    The amount of TSI.
    The amount of methane.
    The increase of CO2.
    etc.

    Can we take all the measured forcings over the last 10 years and plug them into the 22 climate models (as they existed 10 years ago – not as of today), let them run out 10 years (and to equilibrium), and see what the sensitivity is?

    I know CO2 has increased by only a fraction of a doubling in the last 10 years – but it seems we should be able to evaluate the models of 10 years ago this way – is that correct? Or has it already been done?

    Would that provide any useful information or check on the models? Would this provide any insight into the projected climate sensitivity?

    • steven mosher

      10 years would not get you a steady-state response. Putting in actual forcings for the past 10 years would not give you the answer to the question.

      The question is: if you double CO2 and hold all other variables at their present values, what is the DELTA TEMPERATURE you see when enough time passes that the model reaches stability again?

      Run the model for a thousand years till T doesn’t change. Then double CO2, and only CO2. Run the model for another 1000 years. Observe DELTA T.
      The change in T due to the doubling of CO2 is the “sensitivity”.

      Put in 10 years of real observations of forcings, some going up, some going down, and run the model for 10 years? You get an answer, but that’s a TRANSIENT state.

      • steven,
        But neither CO2 nor any other variable exists in a non-dynamic state.
        That renders the results of very little, if any, value.

      • steven mosher

        On the contrary. Of course CO2 is “dynamic”. In a controlled experiment, I tell you that flooring your car will lead to a top speed of 100 mph.

        You complain that real accelerators vibrate and it will vary between 99% and 100% floored. You complain that sometimes the wind is in your face, that the road might be bumpy. All true.

        The question is this. Is it wise to floor the car when the speed limit is 25? well, no. You can’t count on unknown forces to slow you down.

        So, you cannot say the result CATEGORICALLY has little value without first understanding the kind of decision being made.

        This type of diagnostic exercise is absolutely science at its best.

      • Yes – that’s the sensitivity to CO2 – only.

        But it’s not the “climate sensitivity” which is the quantity in question. And for that, one needs more than just variations in CO2. You know the list – ALL those forcings and feedbacks that have been ignored, neglected or just blown off as insignificant. But some of them may be proving to be not as insignificant as has been previously assumed.

        Do you remember the breakdown on that word? ASS-U-ME. I learned that while designing nuclear power plants – it was reinforced by the Green Machine – and then hammered home by a group of atmospheric scientists and a really hardcore spacecraft systems engineer. Some of today’s “climatologists” would do well to learn it.

      • I’ll apologize for the bold. It wasn’t supposed to be that way.

      • steven mosher

        But it’s not the “climate sensitivity” which is the quantity in question.
        #####
        That is ALL we have been talking about since my first post, which was a correction to the huge error in the article which claimed that scientists INPUT a sensitivity. They DON’T. Any sensitivity is an output METRIC.

        You should ALSO NOTE that I said the real uncertainty was two things:

        1. FORCINGS ASSUMED
        2. MODEL COMPLETENESS.

        To repeat: DELTA T (sensitivity) is not assumed. It is not an input to the models; it is an output METRIC. It is dependent, as ALL OUTPUT IS, on certain assumptions. All physics, all experiments, all observation is RIFE with assumptions. Assumptions come at every level and in many forms, from assumptions about physical quantities to assumptions about laws of physics holding. But DELTA temperature is just that: temperature 1 – temperature 2. Not an input; it’s an output metric. Not assumed, but rather the result of assumptions.

        Let me put it a different way. If Hansen ran his GCM and the result was 14C, and then he doubled CO2 and the result was 15C, very few people on this thread would have ANY issue saying the sensitivity was 1C per doubling. But they think that since he says it’s 3C, there must be something wrong with this simple concept.

        Put ANOTHER WAY: Roy Spencer has calculated rather low sensitivities. You do not see skeptics attacking the notion of sensitivity when he uses it. Why? Because they like the magnitude of the answer, plain and simple.

      • steven,
        No, I like Spencer’s estimate because it does not invoke apocalypse and calamity, which means it is much more likely to be true.
        It is the likelihood of it being useful that is attractive.
        The question you raise is why so many believers glom onto the historically rare apocalypse?

      • hunter,

        ‘…I like Spencer’s estimate because it does not invoke apocalypse and calamity, which means it is much more likely to be true.’

        That’s a pretty amazing assumption about the relationship between those two ideas. Are there any physical reasons why we should believe that the level of ‘calamity’ involved in a particular person’s interpretation of any situation correlates with the likelihood of that interpretation being true? It would seem like a rather novel finding, to say the least.

        Or is that just a convenient conclusion to draw in this context?

        Pielke the Elder produced a paper a few years ago tying La Nina events to family outbreaks of tornadoes, while many others have made more conservative connections between those events. I guess Roger’s conclusion is more likely to be wrong then, huh?

      • maxwell,
        To answer your question, “Are there any physical reasons why we should believe that the level of ‘calamity’ involved in a particular person’s interpretation of any situation correlates with the likelihood of that interpretation being true?” The answer would be “Yes.”
        The Earth is big, robust and has been around a long, long time. If it were prone to apocalyptic calamities, we would not be here now. This is the dog not barking.
        Additionally, history is full of apocalyptic predictions of global destruction due to man’s wickedness.
        The track record for them is 100% wrong.
        These tornadoes, as tragic as they are locally, are not apocalyptic in scope. They are examples of how historically dangerous weather actually is.
        It is the paranoid/religious/magical-thinking perspective that sees events like this as some sort of proof of punishment for some evil, whether it be sin or CO2.

      • hunter,

        ‘The Earth is big, robust and has been around a long, long time.’

        So because the earth, as a rock floating through space, has been around a long time, we should believe people’s less calamitous interpretations of physical reality concerning our own behavior? Even with respect to our lives, which are significantly more fragile than just the physical persistence of the earth?

        Moreover,

        ‘If it (the earth) was prone to apocalyptic calamities, we would not be here now.’

        is not right. We are here precisely IN SPITE of apocalyptic calamities. There have been substantial extinction events in which fossil records seem to show large portions of the entire animal population were wiped out, never to be seen again. It’s a recurring theme on earth, in fact.

        All have been ‘natural’ calamities in the sense that they were not caused by humans. But they were calamities nonetheless.

        I like this one though,

        ‘Additionally, history is full of apocalyptic predictions of global destruction due to man’s wickedness. The track record for them is 100% wrong.’

        Again, this conclusion in the context of global warming necessitates a real connection between those past predictions and any current predictions about humans’ behavior, i.e. burning stuff. You are merely assuming that such a connection exists, despite the fact that basically none of the predictions to which I think you are referring are based on modern physical science. The vast majority of the predictions of worldwide collapse I am aware of, at least, are based on religious belief. That is, little more than superstition, and often a way for power to be gathered by a smaller and smaller group of people.

        So you are comparing predictions based on power-grabbing and superstition to predictions made using the same knowledge that makes your computer work. That doesn’t seem like a robust manner of making this connection.

        I agree with your general skepticism toward many of the more extreme predictions with respect to the climate response to an ever-increasing greenhouse effect. But the answer to those extreme predictions, whose foundations are very much debatable, is not to make equally undefended statements cast as truths for which there is little to no evidence.

        If the whole idea of this post is to hold practitioners of science, both professional and amateur, to higher standards when expressing concerns over the ‘threat of climate change’, then again, we (skeptics) ought to be held to a higher standard as well.

        So far, much of this thread is failing on that front.

      • maxwell,
        You make some good points, however, I would point out:
        We are here because of everything that has happened in the past: the apparent collision of the Mars-sized planetoid that led to Luna, tides and probably active plate tectonics, etc.
        But we are talking about the history of Earth, and over its multi-billion-year history, most epochs have been mostly boring. It has been ~60+ MY since the Yucatan-area strike apparently ended the dinosaurs and let small furry critters have a go at it, for example.
        We are here as a result of these events, and that is a significant difference from ‘in spite of’.
        The era of H. sapiens has been notable in its lack of global apocalypse, and I think I will continue to place my marker on the bet that has that continuing for the foreseeable future.
        As to the quality of predictions, I would point out that each era that makes them finds them made by the elites of that day. Noah and the other flood stories were not the stories of the ignorant, but of the ruling classes.
        The apocalyptic literature of 2000 years ago was written by literate, educated people.
        I see little difference between AGW, Lysenkoism, eugenics and the older failed apocalyptic stories. They have more in common than they have in difference.
        I think this thread is doing just fine.

      • hunter,

        I have little doubt that you think there is great similarity between extreme predictions of climate catastrophe and Noah’s Ark. I’m simply pointing out that there is no rigorous evidence that such a comparison is worthwhile or informative as a predictive measure.

        That is, your point is that because past predictions of catastrophe have been wrong, for you and yours at least, the future predictions of similar catastrophe are also wrong. Yet the only connection between these predictions is the, likely highly disputable, fact that ‘intellectuals’ wrote the predictions down. Even if we assume this connection to be true, it’s still a very poor standard for correlation.

        On its flip side, why would intellectuals’ predictions, like those of Dr. Spencer, be correct? What is the operational difference between an ‘intellectual’ who makes a catastrophic prediction versus the ‘intellectual’ who does not?

        You have provided nothing in terms of evidence that shows any distinctive difference in reasoning between those two scenarios. You simply assume that your conclusion is true, then present it as truth. It’s exactly the same logic employed by ‘alarmists’ who are willing to believe their own chosen predictions.

        In a thread about the scientific method, such a methodology just does not cut it.

        Moreover, how is eugenics an example of a catastrophic prediction?

  45. I am not sure whether this thread is supposed to be about climate sensitivity, but the fact is that we have absolutely no idea what the numeric value of climate sensitivity is. We cannot estimate no-feedback climate sensitivity (delta T = delta F / lambda is nonsense). When Fred talks about climate sensitivity as if it has some sort of meaning in physics, I want to shout, “Where was no-feedback climate sensitivity ever measured?” Since the answer is never, climate sensitivity is a meaningless, hypothetical concept.

    • steven mosher

      We certainly do have an idea of what the numeric value is.
      If it were too low, you’d never get out of the snowball earth.
      Since we had a snowball earth, and since we got out of it, we can in fact use that fact to help bound the problem.

      http://astroblogger.blogspot.com/2010/02/snowballs-snowjobs-and-lambert-monckton.html

      For a list of papers and the estimates you can view this page

      http://www.skepticalscience.com/climate-sensitivity-intermediate.htm

      here is a nice one that uses ice core data

      http://www.agu.org/pubs/crossref/2008/2007GL032759.shtml

      1.3 to 2.6

      this uses observations to estimate.
      http://www.sciencemag.org/content/295/5552/113.abstract

      So saying we have no idea what it is is wrong. The range of estimates is large, but look: I’m pretty certain you are between 3 feet tall and 8 feet tall.
      That’s useful knowledge. Not perfect knowledge, not useful for every decision, but useful for some.

      The question is: what decisions do we have to make if the figure is low, versus what decisions do we have to make if it is high? That’s a practical question.
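The “bound the problem” logic above can be sketched numerically: each independent line of evidence supplies an interval, and the values consistent with all of them lie in the intersection. The intervals below mix ranges quoted in this thread with a purely illustrative, assumed snowball-earth bound:

```python
# Hypothetical illustration of bounding sensitivity by intersecting
# independent interval constraints. These intervals are examples only,
# not authoritative values.
constraints = {
    "snowball-earth escape": (1.0, 10.0),  # assumed illustrative bound
    "ice-core study":        (1.3, 2.6),   # range quoted in the thread
    "volcano studies":       (1.5, 6.0),   # range quoted in the thread
}

# The intersection keeps the largest lower bound and smallest upper bound.
lo = max(low for low, _ in constraints.values())
hi = min(high for _, high in constraints.values())
print(lo, hi)  # range consistent with every constraint: 1.5 2.6
```

Imperfect knowledge, in other words, but narrower than any single line of evidence on its own.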

      • Steve McIntyre

        Steve Mosher says: “Since we had a snowball earth and since we got out of it we can in fact use that fact to help bound the problem”.

        A number of Canadian geologists do not agree that deep-time “snowball earth” has been established. They observe that the deep-time geology interpreted as evidence of glaciation can be more plausibly interpreted as turbidites.

      • Cites please?

      • Steve McIntyre

        Look for articles by Nick Eyles of the University of Toronto or by E Arnaud of U of Guelph. I don’t recall which ones are most apt for your purposes offhand.

        I note in passing that Richard Alley, familiar from climate change debate, has been a very strong advocate of snowball earth.

      • “I note in passing that Richard Alley, familiar from climate change debate, has been a very strong advocate of snowball earth.”

        I note in passing the subtle insinuation of a relation of cause and effect. Note also the term “advocate”, by contrast with those (the two Canadian geologists) who “observe” and “plausibly interpret”.

  46. Another question I have is related to the .8C of warming we have had since 1850.

    Is there a journal article which tries to estimate what portion of this is UHI, what portion of this is black carbon, what portion of this is methane, what portion of this is CO2, etc.?

    When I read Steven Mosher’s explanation of determining climate sensitivity I get a little confused about the holding all other forcings except CO2 constant.

    What use is determining a climate sensitivity number for a doubling of CO2, assuming all other forcings stay constant, when in reality all of the other forcings will change over whatever period of time it takes to double CO2?

    Wouldn’t it be more useful to provide estimates of each forcing for the next 30 years (or 10 or 50 or whatever interval is desired), and then look at what the climate models say the increase in temperature will be for 30 years?

    At a minimum, if the estimates of the forcings were correct, and after 10 years the climate model results, compared to actual observations of temperature, turned out to be dead on – or really off – either way this would be useful information.

    Maybe this makes no sense – but this was the thinking which led to my last question about what would happen if we plugged the known forcings over the last 10 years into the existing climate models of 10 years ago.

    • steven mosher

      “When I read Steven Mosher’s explanation of determining climate sensitivity I get a little confused about the holding all other forcings except CO2 constant”

      The doubling experiments are designed to isolate the effect of CO2, to provide diagnostics of system performance.

      For forecasting, you build a dataset of all forcings. That’s a projection based on an emission scenario; that’s different from the doubling experiments.
      Take a look at the design of experiments for AR4 or AR5.

      DOUBLING experiments are DIAGNOSTIC: what happens if we slam the pedal down. It’s idealized; it simulates a controlled experiment. It’s the kind of basic test you do with ANY model. They could also do a doubling-TSI experiment, but that would be boring.

      • Steven,

        In the UNEP page I linked to earlier – CO2 was increased at 1%/year – and stabilisation shown at double and quadruple CO2. It is done this way so that feedbacks in clouds and water vapour can be included in the output.

        Cheers

      • steven mosher

        Thanks for that. I neglected to mention all the various experiments.

      • John Carpenter

        Steven,

        Since you answered a similar question I was going to ask about holding all other forcings constant while doubling CO2, you also said…

        “Take a look at the design of experiments for AR4 or AR5”

        Can you point me in the right direction with a link… You seem to have a lot at your fingertips, and I would be very interested in seeing the DOE(s) employed.

      • steven mosher

        There are many experiments, including the core diagnostic runs:
        “For the diagnostic core experiments (in the lower hemisphere), there are the calibration-type runs with 1% per year CO2 increase to diagnose transient climate response (TCR), an abrupt 4XCO2 increase experiment to diagnose equilibrium climate sensitivity and to estimate both the forcing and some of the important feedbacks, and there are fixed SST experiments to refine the estimates of forcing and help interpret differences in model response.”

        But read the whole document.

      • steven mosher

        look at modelE results for a variety of experiments.

        here’s a small example

        http://data.giss.nasa.gov/efficacy/#table1

        If you need help figuring it out, I gave instructions some time ago on climateaudit. GIYF: steven mosher modelE climateaudit.org

      • John Carpenter

        Steven,

        Thanks for the links… will keep me busy awhile…

  47. Steven, yes, I am aware of the concepts of transient and equilibrium sensitivities and I understand their meaning. When I see an attribution that doesn’t attribute all the recent warming to the transient response, and instead attributes some portion of it to forcings from earlier in the century, is when I will take the argument seriously. Until such time as I see that happen I can only assume one of three things:
    1. The effect is so small the warming of the past makes no difference to the present, and therefore the warming of the present will make no difference to the future.
    2. This is a poorly thought-out hypothesis intended to explain why the amount of warming disagrees with that expected.
    3. The world was created 30 years ago.

    • steven mosher

      You can look at Lucia’s lumped-parameter model to get a sense of that. It’s really quite simple. If you didn’t follow that discussion some time ago, then GIYF.

        Steven, it may seem quite simple to you; it is not so simple to me. If you look at the summary for policymakers from AR4, you will see the models using only natural forcings going down at about 1950, correlating closely with solar cycle strength. This would indicate a short lapse rate, with that portion of the forcing having already achieved equilibrium. To make matters even worse, they claim this equilibrium was achieved during the same time period in which they claim aerosols were cooling the earth. Now you can make assumptions about aerosols, and you can make assumptions about lapse rates, and I don’t have the time, knowledge or inclination to learn enough to argue the technical details with you. But what I can argue is that when you use contradicting arguments you need to be able to explain why. So my question is: why doesn’t natural forcing have a lapse rate equal to CO2 lapse rates, and why don’t aerosols prevent natural forcings from achieving equilibrium?

  48. The meaningfulness of “climate sensitivity” has been questioned here, which is legitimate, because it is also questioned within the climatology arena. It is not the only metric available to assess the temperature response to a change in CO2; for example, some observers prefer to evaluate the “transient climate response” involving a 1 percent annual CO2 increase.
    Furthermore, it is generally not considered convenient to perform the appropriate climate sensitivity experiment – instantaneously double CO2, make sure nothing else changes for the next one thousand years (keep the sun constant, for example), and then measure how much global temperature has risen. Nevertheless, the concept of climate sensitivity is one useful metric among others for evaluating climate behavior.

    One reason is that it is a convenient means of permitting different groups to compare notes on important climate variables. If we instantaneously double CO2, the calculated forcing is about 3.7 W/m^2. This means that if there was previously little or no flux imbalance at the tropopause, an imbalance of 3.7 W/m^2 will now exist (after a brief interval for the stratosphere to adjust), and the climate response can be estimated. This is important because, by definition, a forcing describes the imbalance that is created before anything else has had a chance to change. In particular, it is the imbalance that exists before surface temperature changes, and since the feedbacks critical to the full climate response to a perturbation are responses to temperature change (e.g., water vapor, clouds, ice, etc.), we can ask how temperature and feedbacks will evolve starting from a 3.7 W/m^2 imbalance. Note that a gradual doubling of CO2 would also generate a 3.7 W/m^2 forcing, but because the climate will already be responding while that is happening, the imbalance will be undergoing reduction throughout and will be less than 3.7 W/m^2, and perhaps difficult to estimate.

    If different groups all start with a standardized 3.7 W/m^2 imbalance, they can attempt to model climate responses, including the pace of temperature and feedback evolution. This can then be scaled to the magnitude of imbalances that are actually likely to be observed within reasonable timeframes from a variety of forcings (not necessarily restricted to CO2), thereby allowing the modeled estimates to be compared with observations. This will be particularly useful for assessing such phenomena as ocean heat uptake and the differences in timing among different feedbacks. In that sense, climate sensitivity is a metric of convenience. It is not indispensable, but it provides a useful tool for assessing climate responses and describing them in a way that is easy to visualize, even if impossible to measure in its own right.
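The 3.7 W/m^2 figure above can be reproduced from the widely used simplified logarithmic expression for CO2 forcing, F = 5.35 ln(C/C0) W/m^2 (Myhre et al., 1998). A minimal sketch, with illustrative concentration values:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing: F = 5.35 * ln(C/C0) in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling from any baseline gives the canonical ~3.7 W/m^2:
print(round(co2_forcing(560.0, 280.0), 2))  # 3.71
# The rise from the pre-industrial ~280 ppm to ~390 ppm is roughly half that:
print(round(co2_forcing(390.0, 280.0), 2))  # 1.77
```

The logarithmic form is also why a standardized doubling is a convenient benchmark: the forcing is the same 3.7 W/m^2 whether the doubling starts from 280 ppm or 560 ppm.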

    • Fred Moolten writes, “If we instantaneously double CO2, the calculated forcing is about 3.7 W/m^2”

      First a minor correction. The word should be “estimated”, not “calculated”. There is no way to either calculate or measure the change in radiative forcing.

      But it never ceases to amaze me the way people like Fred treat these hypothetical numbers used in estimating climate sensitivity as if they actually mean something. They are completely meaningless. They never have been measured, and they never will be measured. We have no idea what their numeric value is, and we never will know.

      This is like believing that if you repeat a lie often enough, people will believe it. We are never going to be able to estimate climate sensitivity. In the end, we will get data as to how much global temperatures rise as a result of adding CO2 to the atmosphere. Until then, any numbers quoted are not worth the paper they are written on.

      • Jim Cripwell, I’m right on board with the thrust of your post, but, if I may, you go a bit overboard IMHO. As I have repeatedly said, the forcing calculations have tremendous looseness and ignored variability, but they are not useless or meaningless.

  49. ‘In particular, the global mean temperature change which occurs at the time of CO2 doubling for the specific case of a 1%/yr increase of CO2 is termed the “transient climate response” (TCR) of the system.’

    ‘The “equilibrium climate sensitivity” (IPCC 1990, 1996) is defined as the change in global mean temperature, T2x, that results when the climate system, or a climate model, attains a new equilibrium with the forcing change F2x resulting from a doubling of the atmospheric CO2 concentration.’

    http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/345.htm

    See figure 9.1 – both of these emerge from the same models. One is seen at the time of CO2 doubling and the other occurs some hundreds of years later.
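The difference between the two metrics can be illustrated with a single-box energy-balance sketch: ramp CO2 up at 1%/yr, read off delta T at the moment of doubling (a TCR analogue), and compare it with the equilibrium response. All parameter values are illustrative assumptions, and a single mixed-layer box understates the TCR/equilibrium gap that deep-ocean heat uptake produces in real models:

```python
import math

lam = 1.23    # W/m^2/K: feedback parameter (assumed; equilibrium response ~3 K)
C = 4.2e8     # J/m^2/K: heat capacity of a ~100 m ocean mixed layer (assumed)
F2x = 3.7     # W/m^2: forcing per CO2 doubling
dt = 86400.0  # one-day time step, in seconds

# At 1%/yr compound growth, CO2 doubles after ln(2)/ln(1.01) ~ 70 years.
years_to_double = math.log(2) / math.log(1.01)

T = 0.0
for day in range(int(years_to_double * 365)):
    ratio = 1.01 ** (day / 365.0)            # CO2 relative to the start
    F = F2x * math.log(ratio) / math.log(2)  # logarithmic forcing ramp
    T += dt * (F - lam * T) / C              # forward-Euler step

tcr_like = T          # warming at the moment of doubling (TCR analogue)
ecs_like = F2x / lam  # equilibrium warming for the same forcing
print(tcr_like, ecs_like)
```

Even in this crude sketch the warming at the moment of doubling comes out below the equilibrium value, which is why the two metrics must not be conflated.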

    Both are problematical because the ‘irreducible imprecision’ of models emerges from the properties of ‘sensitive dependence’ and ‘structural instability’ – fundamental properties, in theoretical physics, of complex dynamical systems and of these models in particular.

    The problem of modelling hundreds of years into the future emerges from the dynamical complexity of climate itself. Climate shifts in 1976/1977 and 1998/2001 show that even short-term modelling is fraught with uncertainties. In particular, the potential for cooling for another decade or three – as a result of an oceanic influence from both the Pacific and Atlantic Oceans – is an emerging theme in the scientific literature.

    This study uses the PDO as a proxy to model the impacts of Pacific influences on Earth systems: ‘Decadal-scale climate variations over the Pacific Ocean and its surroundings are strongly related to the so-called Pacific decadal oscillation (PDO) which is coherent with wintertime climate over North America and Asian monsoon, and have important impacts on marine ecosystems and fisheries. In a near-term climate prediction covering the period up to 2030, we require knowledge of the future state of internal variations in the climate system such as the PDO as well as the global warming signal.’

    ‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’

    http://www.pnas.org/content/107/5/1833.short

    While these authors carefully suggest only a slowing down of surface atmospheric temperature rise, the intensification of the cool La Nina phase of the Interdecadal Pacific Oscillation may be one of those surprises anticipated in the 2002 NAS publication Abrupt Climate Change: Inevitable Surprises. It should be remembered that oceanographers and hydrologists have been studying these phenomena for 100 years. We shall follow developments with amusement.

    However, I can only see this as a problem. It may be understood that I am neither one nor the other when it comes to the climate wars. There is a risk in anthropogenic greenhouse gases that emerges from sensitive dependence in chaotic Earth systems. But the insistence that the planet continues to warm when it quite obviously is not can only be counterproductive. Many people could use a new narrative, and a bit of humility and grovelling would not go astray.

    • Chief –
      But the insistence that the planet continues to warm when it quite obviously is not can only be counterproductive. Many people could use a new narrative, and a bit of humility and grovelling would not go astray.

      So what’s the probability of that happening in our lifetime?

    • steven mosher

      If we can separate certain issues, then we can move on to the real issue, which is the one you raise. The red herrings – that sensitivity is an input to models, that it is an assumption rather than the result of assumptions – are all issues I’d like to put to bed, so that the conversation can turn to the issues you raise, which are recognized INSIDE the science. That’s my whole point.

      • steven mosher and Jim Owen

        Let’s not spend too much time beating around the bush.

        The range of climate sensitivity estimates claimed by IPCC is the result of model simulations, based largely on theoretical deliberations and assumed inputs.

        As such, they are assumptions. They bear very little resemblance to real-life physical observations.

        Theoretical physics is wonderful, but actual physical observations are better.

        And these do not support the high estimates of 2xCO2 climate sensitivity claimed by IPCC.

        That is the crux of the problem we are all discussing here.

        Max

      • steven mosher

        “The range of climate sensitivity estimates claimed by IPCC is the result of model simulations, based largely on theoretical deliberations and assumed inputs.”

        WRONG.
        I provided a link to knutti’s papers. models are but ONE piece of the puzzle and not a very important one for all the reasons Hansen says.

        The evidence from paleo work and from work on observational datasets is the primary evidence. That work gives you ranges from ~1.5 to over 6. If the models did not agree with this observationally based range, the models would have to be fixed.

        Let me repeat. Take studies on volcanoes, for example. They constrain the value to lie between 1.5 and 6. When the models produce a range from 2.1 to 4.4, that indicates that the models are consistent with the range established by observation. Get it? If you prefer observationally constrained estimates, guess what? You get higher averages.
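        The consistency check described above reduces to simple interval arithmetic; a minimal sketch, using only the bounds quoted in the comment:

```python
# Consistency of a model-derived sensitivity range with an
# observationally constrained one, as described in the comment: the
# model range is "consistent" if it lies inside the observed range.
# All figures are the ones quoted above (deg C per CO2 doubling).

obs_low, obs_high = 1.5, 6.0      # constraint from volcano/paleo studies
model_low, model_high = 2.1, 4.4  # current-generation GCM range

consistent = obs_low <= model_low and model_high <= obs_high
print(consistent)  # True: the model range sits inside the observed range
```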

      • steven mosher, HUH?!? These thousands of KLOCs that took hundreds of PhDs years to develop pale in the face of solid, near-irrefutable paleoclimate data and evidence? Really? Those half-dozen trees and dozen ice cores are top notch, I’m sure, but significantly better than GCMs? Of course, if so, there’s still that little problem of the poor and reverse correlation between paleo CO2 and temperatures. Boggles the mind.

      • Steven Mosher

        I have very much enjoyed your comments here and elsewhere, as well as your book, so don’t get me wrong here.

        You have cited Knutti et al. as a source of information supporting the validity of the IPCC model-based assumption of a 2xCO2 climate sensitivity of 3C on average.

        I have gone through this and am quite disappointed that it does not move out of the strictly hypothetical into the more practical or actually observed.

        Let’s look at one of the first paragraphs:

        Note that the concept of climate sensitivity does not quantify carbon-cycle feedbacks; it measures only the equilibrium surface response to a specified CO2 forcing. The timescale for reaching equilibrium is a few decades to centuries and increases strongly with sensitivity.
        The transient climate response (TCR, defined as the warming at the point of CO2 doubling in a model simulation in which CO2 increases at 1% yr−1) is a measure of the rate of warming while climate change is evolving, and it therefore depends on the ocean heat uptake. The dependence of TCR on sensitivity decreases for high sensitivities.

        OK. One at a time.

        Note that the concept of climate sensitivity does not quantify carbon-cycle feedbacks.

        CCF is a purely hypothetical “red herring”; it is good that Knutti et al. did not attempt to “quantify” it; they should not even have mentioned it at all.

        The timescale for reaching equilibrium is a few decades to centuries and increases strongly with sensitivity.

        This is a statement of “faith”, as there are no empirical data to support it. The Hansen et al. paper that introduces the concept of energy “hidden in the pipeline” is based on model projections, circular logic and questionable arithmetic.

        The transient climate response (TCR, defined as the warming at the point of CO2 doubling in a model simulation in which CO2 increases at 1% yr−1) is a measure of the rate of warming while climate change is evolving, and it therefore depends on the ocean heat uptake.

        TCR is a hypothetical value.

        The “ocean heat uptake” as postulated by Hansen et al. has been falsified in real life by the ARGO measurements since 2003.

        The “model simulation in which CO2 increases at 1% per year” is unrealistic to start off with: CO2 has increased at a CAGR of around 0.4% per year since the Mauna Loa record started, and at around this same rate over the most recent period as well. It is foolish to load a model with a rate of CO2 increase that is exaggerated from the outset.

        The dependence of TCR on sensitivity decreases for high sensitivities.

        This is another statement of “faith” relating two purely hypothetical concepts based purely on theoretical deliberations.

        The rest of this report is all about model simulation results. Interesting but not conclusive of anything in real life.

        It appears that these modelers really start believing their models, as if they were some sort of modern oracles instead of simply multi-million dollar versions of the old slide rule.

        Knutti et al. is interesting reading, but does not reflect real life in any way.

        We need empirical data based on actual physical observations or reproducible experimentation, rather than simply model simulations based on theoretical deliberations.

        As Steve McIntyre has stated very clearly: we need an “engineering quality” estimate of 2xCO2 climate sensitivity, which shows that this is 3C (and that AGW is, therefore, a potential problem).

        This does not exist to date, despite at least 30 years of search.

        That is the inherent weakness of the IPCC premise that AGW, caused principally by human CO2 emissions, has been the primary cause of 20th century warming and, therefore, represents a serious potential threat to humanity and our environment.

        Max

      • This discussion is motivating me to do a new post on sensitivity (not sure when I will have time tho). In case you missed it the first time around, here is my previous post on sensitivity http://judithcurry.com/2011/01/24/probabilistic-estimates-of-climate-sensitivity/

      • steven mosher

        Thanks Judy.
        A couple of things might be helpful; one is for folks to read Knutti.

        here is my main concern. There are many misunderstandings of
        1. the definitions of sensitivity
        2. how it is derived
        3. the various lines of evidence.
        4. the fact that it’s not an input to models.

        People have learned some stupid pet tricks to get along in web debates. they need to understand that these stupid pet tricks prevent them from having a real debate. and there is a real debate over sensitivity. They can join that debate. But they cannot join that debate by arguing that sensitivity is either
        1. an input into models, or
        2. derived SOLELY from models.
        Both of those are demonstrably wrong. THE debate is not over, but arguing about those two points is not a debate. If they want to argue that, I suggest they go discuss Obama’s birth certificate or Bush’s plan to destroy the twin towers.

      • Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.

        It may not be an automated input, but as soon as you start making manual adjustments to the model input parameters because the model is predicting CO2 sensitivity too high or too low for what you believe to be correct, then CO2 sensitivity is being fed back into the model as an input.

        Remember “Clever Hans”, the horse that could do mathematics? The classic example of unrecognized feedback during training affecting the final results.

      • ferd,

        ‘Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.’

        You’re saying that if the modelers input different observational data that becomes available to constrain their results, climate sensitivity is then an input? Or that if they realize they can no longer neglect a term in a particular differential equation for the pertinent forces, then by adding that term into the model they are making climate sensitivity an input?

        The model itself is just equations defining the physics involved. Those can be set by boundary or initial conditions, or both, in the same way that any differential equation is solved. With so many equations involved, unless you showed me the exact path from changing ANY parameter in those equations to sensitivity becoming an input, I am highly skeptical that your comment has any meaning in the context of actual models, model performance or model optimization.

        Yet again, we have a specific scientific claim (any optimization leads to climate sensitivity as an input) about climate models being presented on a thread about the scientific method. Let’s see if the scientific method can validate or nullify the claims being made. My money is on nullify on this one, although I don’t think anyone will take this bet.

      • What is often unrecognized by people outside of computer science is that any computer model that uses learning techniques is subject to the same sorts of contamination as can occur in animal training experiments.

        see:
        http://en.wikipedia.org/wiki/Observer-expectancy_effect

        The observer-expectancy effect (also called the experimenter-expectancy effect, observer effect, or experimenter effect) is a form of reactivity, in which a researcher’s cognitive bias causes them to unconsciously influence the participants of an experiment. It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.

      • Models are not purely physical, because many physical effects are at best estimates. For example, cloud albedo.

        When you adjust the parameters and/or their weighting as is done during model backcasting, then you are training the model.

        The model has no way to distinguish between those effects resulting from physics as compared to those resulting from unconscious cognitive bias.

        Like “Clever Hans” the model may deliver the answer it does because that is the answer the modeller expects, not because the model calculated the correct answer.

      • steven mosher

        “Sensitivity is an input to the model if the human model builders are in any way using the results of the sensitivity analysis to adjust the model parameters.”

        the only documentation we have about “tuning” or adjusting models is in regard to hindcasting tests, not diagnostic runs.
        So, until you can document that the results of diagnostic runs are used to adjust inputs, your point is speculation.
        As I noted, the only kind of adjusting that people talk about is in chapter 9 of AR4.

        Adjustments to forcings, like aerosols, will not change the effect due to doubling CO2, because forcings, it has been shown by diagnostic testing, combine linearly.

        Simply: turn off aerosols, double CO2: DELTA T ~2.7.
        Turn on aerosols, double CO2: DELTA T ~2.7.

        Aerosols are adjusted for hindcast purposes, so that argument is an attribution concern, not so much a doubling-CO2 concern. Forcings combine linearly.
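        The linearity claim above can be sketched in a few lines. A toy illustration, not any actual GCM: if the temperature response is additive in the forcings, the warming attributable to doubled CO2 comes out the same whether or not an aerosol forcing is switched on. All numbers are illustrative; the sensitivity parameter is chosen so the result lands near the ~2.7 quoted in the comment.

```python
# Toy linear-response model: equilibrium warming = lambda * (sum of forcings).
# The CO2-attributable warming is identical with and without aerosols.

CO2_DOUBLING_FORCING = 3.7   # W/m^2, standard estimate for 2xCO2
AEROSOL_FORCING = -1.2       # W/m^2, illustrative negative aerosol forcing
LAMBDA = 0.73                # K per W/m^2, illustrative sensitivity parameter

def response(forcings):
    """Equilibrium temperature change for a list of forcings (linear model)."""
    return LAMBDA * sum(forcings)

# Run 1: no aerosols, double CO2.
dT_no_aerosol = response([CO2_DOUBLING_FORCING]) - response([])

# Run 2: aerosols on, double CO2; difference isolates the CO2 contribution.
dT_with_aerosol = (response([CO2_DOUBLING_FORCING, AEROSOL_FORCING])
                   - response([AEROSOL_FORCING]))

print(dT_no_aerosol, dT_with_aerosol)  # identical: ~2.7 K in both cases
```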

      • ferd,

        ‘…any computer model that uses learning techniques…’

        There is a distinction you are assuming about the numerical methodology used to converge on a specific solution to the differential equations in the climate model itself. It is not using past solutions in any kind of feedback loop that would give the researcher access to those solutions for tampering. It is just the digital computation of a finite-difference method, which is a stable algorithm for solving differential equations.

        So all the points you’re making about experimenter bias are totally off base, especially from a computer science perspective. The climate models are solved numerically and the solutions are then tested against known historical data. Based on the agreement with that data, parts of the differential equations are ‘tuned’ to match that data more accurately. Since the climate sensitivity is NOT a piece of observational data, there is no way that researchers can input that into the model to begin with, but also no way for them to ‘tune’ it in later.

        So again, there is no learning optimization involved in solving the differential equations, based on well-known physics, that are the basis of the climate model. It is simply the application of the well-understood and stable finite-difference technique to numerically solve those differential equations.

        Your point is just out of context.
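        The numerical point being made can be illustrated with a toy model. A minimal sketch, assuming a zero-dimensional energy-balance equation in place of a real GCM; all constants here are illustrative, not taken from any actual model:

```python
# Forward-Euler (explicit finite-difference) integration of the toy
# energy-balance equation  C dT/dt = F - T/lam.  The equilibrium
# response T -> lam * F emerges from the solution; it is never
# supplied as an input, which is the point of the comment above.

C = 8.0        # heat capacity, W yr m^-2 K^-1 (illustrative)
LAM = 0.8      # sensitivity parameter, K per W/m^2 (illustrative)
F = 3.7        # constant forcing, W/m^2 (roughly a CO2 doubling)
DT = 0.01      # time step, years

def integrate(years):
    """Step the finite-difference scheme forward from T = 0."""
    T = 0.0
    for _ in range(int(round(years / DT))):
        T += DT * (F - T / LAM) / C
    return T

print(integrate(200.0))   # approaches LAM * F = 2.96 K at equilibrium
```

The same structure holds for a GCM: the solver marches equations forward in time, and any "sensitivity" is diagnosed from the output afterwards.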

      • “the only documentation we have about “tuning” or adjusting models is in regard to hindcasting tests, not diagnostic runs.”

        If you check my previous comment you will find I am talking about hindcasting. (I called it backcasting.)

        As soon as you hindcast a model to improve the fit, you are talking about training. This imparts learning to the models and they suffer from the same problems as animals and humans during training.

        Like Clever Hans, they learn to provide the answer the trainer expects. Like the horse, the model doesn’t care how it got the answer. The trainer may be convinced the horse is performing arithmetic (forecasting climate), while in reality it is doing something quite different (analysing the trainer).

        Say, for example, you use 30 for input X (e.g. cloud albedo), with a 5% weighting. You backcast and find that you can equally improve the hindcast by adjusting input X to 32, or by adjusting the weighting to 6%. Which do you do?

        Is 32 correct? Or a 6% weighting? Or maybe the historical data you are trying to fit is noisy and inaccurate. Each of the little decisions you make in selecting the input adjustments is subject to unconscious cognitive bias.

        These types of decisions are magnified many times over as the number of input parameters increases. You don’t know if the hindcast fit errors are a result of incorrect input parameters, inappropriate weightings, or historical data errors.

        So in the end, you unconsciously adjust the input parameters to deliver a result that is in line with your expectations. In other words, you have trained the model to act like Clever Hans.

        Like Hans, the model satisfies the expectations of the trainer, so we assume it is predicting the climate (a horse performing arithmetic).

        This is what you get with CO2 sensitivity. No matter how much we might wish otherwise, cognitive bias is unconsciously steering the models to deliver the answer the model builder expects. If you expect CO2 sensitivity to be > 1, then that is likely the answer you will get, unless you purposely adjust the model otherwise.

        Like the horse, unless and until the model is removed from the bias of the trainer (and audience), you cannot determine if the horse is performing arithmetic (or the model predicting climate).
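        The ambiguity described above can be made concrete with a toy hindcast. A minimal sketch, assuming a model whose output depends only on the product of a parameter and its weighting (entirely synthetic numbers, not any real tuning exercise):

```python
# Non-identifiability in a toy hindcast: if the historical data only
# constrain the product (weight * parameter), many distinct
# (parameter, weight) pairs fit equally well, and the choice between
# them is left to the modeller's judgement.

def hindcast_error(param, weight, observed=1.6):
    """Squared misfit of a toy model whose output is weight * param."""
    return (weight * param - observed) ** 2

# Two distinct "tunings" with identical fit to the same data:
err_a = hindcast_error(param=32.0, weight=0.05)     # raise the parameter
err_b = hindcast_error(param=30.0, weight=1.6 / 30) # raise the weighting
print(err_a, err_b)  # both ~0: the hindcast cannot choose between them
```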

      • maxwell: “The climate models are solved numerically and the solutions are then tested against known historical data. Based on the agreement with that data, parts of the differential equations are ‘tuned’ to match that data more accurately.

        The questions that arise are: how many equations can be ‘tuned’? IOW, how many degrees of freedom are there? And, in turn, how much of this (presumably) multi-dimensional space have we explored? If each parameter (‘equation’) can be tuned to 1000 values, and there are only 3 that can be tuned, that’s a billion runs. I don’t know, but I presume there are many more than 3 such tunable parameters. Therefore, given the time required to generate a “run” of the model, we clearly cannot do a complete analysis; we need to prune the number of runs to something more reasonable. Undoubtedly, this has been done in (at least) two ways:
        1) empirical and/or heuristic limitation – eg, we don’t do runs where a forcing is outside reasonable constraints of what we have measured or reconstructed.
        2) vary one at a time to see what looks reasonable and get a “feel” for how it reacts.
        While 1 is reasonable IMO, 2 is not, especially if the parameters are likely to interact, and most especially if those interactions are non-linear (tipping points). For example, if temperature affects evaporation, evaporation affects precipitation and precipitation affects temperature (none of which would be unreasonable to assume IMO), and furthermore there are tunable parameters for more than one of these effects, it is extremely likely that more than one set of parameters can hindcast to the same accuracy and precision; but we would never know if we restricted ourselves to varying one at a time.
        And all this before we even consider any issues relating to resolution (scale). As far as I am aware, there has been very little work in climate modelling on the effects of grid size (spatial or temporal) on results, or on what scales need to be resolved to prevent numerical noise from rounding and resolution limits from overwhelming the physics calculations. Clearly, then, we are on very shaky ground if we are using the models as “evidence”, as I’ve said before. Perhaps we have been clever or lucky enough to get it right, but would you bet your future and life on it? If so, you are a braver man than I.
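        The combinatorics in the comment can be made concrete. A minimal sketch of the run counts for a full grid search versus one-at-a-time variation, using the comment's own illustrative figures (3 parameters, 1000 values each):

```python
# Exploring p tunable parameters at v values each needs v**p model runs
# for a full grid, versus only p*(v-1)+1 runs when varying one parameter
# at a time from a baseline - which is exactly why interacting
# parameters can hide equally good alternative tunings from a
# one-at-a-time search.

def full_grid_runs(params, values):
    return values ** params

def one_at_a_time_runs(params, values):
    # each parameter varied away from a single baseline run
    return params * (values - 1) + 1

print(full_grid_runs(3, 1000))       # 1000000000 - the "billion runs"
print(one_at_a_time_runs(3, 1000))   # 2998
```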

      • Judith

        Thanks for link.

        Yes. A new thread dedicated to Climate Sensitivity would be a good thing, as this still appears to be the big unresolved issue here.

        At the time I followed your previous thread on this as a lurker, but have now gone back to it again.

        There were many interesting comments.

        The Schwartz 2010 paper was very interesting in analyzing why the model warming estimates did not materialize in real life.

        We read that current model-derived estimates of climate sensitivity have thrown in the human aerosol “wild card” to justify a high 2xCO2 climate sensitivity.

        At the time of AR4 human aerosols plus a minor negative forcing from land use changes were assumed to essentially cancel out the forcing of other minor GHGs, etc., except for CO2, so that the net forcing (by 2005) from all anthropogenic forcings was equal to that of CO2 alone.

        It now appears from the Schwartz paper that the assumed aerosol forcing might be even greater and could theoretically account for much of the observed discrepancy between the model forecasts and the actual observations.

        This is all interesting but still very hypothetical:

        - Our models assume a certain range of climate sensitivity.

        - Physical observations have shown a much lower rate of warming.

        - We hypothesize a long lag in reaching equilibrium, with energy “hidden in the pipeline”, to account for the difference between the model assumption and the observed value (Hansen et al. 2005).

        - When the atmosphere plus upper ocean appear not to be warming as our models say they should, we hypothesize that a larger forcing from human aerosols may be hiding some of the assumed warming from CO2.

        I hope a new thread can get to the bottom of some of these strange rationalizations and give us some more solid comparisons with actually observed empirical data, rather than simply model simulations (or “assumptions”, as I have called them, to which you and a few others have objected).

        Max

      • “Our models assume a certain range of climate sensitivity”.

        I can picture Steve Mosher literally pulling his hair out

      • “Our models assume a certain range of climate sensitivity.”

        This made me laugh out loud as I pictured Steve Mosher banging his head on his keyboard in frustration.

      • Oops – oh well, worth saying twice

      • steven mosher

        Dear god. The amount of time certain people waste on the WRONG argument is astounding. What they fail to realize is that the better argument comes through by accepting the simple truth: Delta T (sensitivity) is a result. If they saw that then they could FOCUS energy, attention, words and time on the real debate, like assumptions about aerosols and assumptions about model completeness.

        But no. somewhere they learned a stupid pet trick in the echo chambers of contrarian land. these stupid pet tricks keep skeptics from engaging in the real debate. these stupid pet tricks minimize their influence and power.

      • Dear god

        An appeal to a higher authority?

        http://www.worldscibooks.com/physics/0862.html

        I suspect not; however, the implications for Faustian distributions are unclear, as Goethe did say that nature is coauthored by the devil.

        http://i255.photobucket.com/albums/hh133/mataraka/randomwalk-n_37231.gif

      • steven mosher

        “The rest of this report is all about model simulation results. Interesting but not conclusive of anything in real life.”

        Wrong. please read the document with the same care that we pored over the mails. the paper documents observational studies as well as model studies. or read roy spencer, for god’s sake, who estimates sensitivity from observations. The notion that sensitivity comes EXCLUSIVELY from models is factually wrong. just wrong. Models are but one source for estimating the figure.

        IPCC models give this figure:
        The current generation of GCMs[5] covers a range of equilibrium climate sensitivity from 2.1°C to 4.4°C

        Empirical approaches also used by the IPCC suggest wider ranges.

        God, even wikipedians know this:

        “… examine the change in temperature and solar forcing between glaciation (ice age) and interglacial (no ice age) periods. The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7 K(W/m2)−1. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C.”[11] Based on analysis of uncertainties in total forcing, in Antarctic cooling, and in the ratio of global to Antarctic cooling of the last glacial maximum relative to the present, Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach.[12]

        There are MANY ways to estimate the sensitivity. Models are one way, and not necessarily the best.

        If you look at all the ways you see that they all center around 3.

        Lukewarmers tend to hold that the truth is less than 3 and greater than 1.5. Skeptics tend to fall between .5 and 1.5.
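        The glacial-interglacial estimate quoted from Wikipedia above reduces to two lines of arithmetic. A minimal sketch, using only the figures in the quoted passage:

```python
# Empirical sensitivity from glacial-interglacial data, as quoted:
# sensitivity = (temperature change) / (forcing change), then scaled
# to the ~4 W/m^2 forcing of a CO2 doubling.

dT_glacial = 5.0   # deg C, glacial-to-interglacial temperature change
dF_glacial = 7.1   # W/m^2, corresponding change in forcing
F_2xCO2 = 4.0      # W/m^2, forcing from doubling CO2

sensitivity = dT_glacial / dF_glacial   # ~0.70 K per W/m^2
dT_2xCO2 = sensitivity * F_2xCO2        # ~2.8 deg C, i.e. about 3
print(round(sensitivity, 2), round(dT_2xCO2, 1))
```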

      • Steven Mosher,

        Moving on to other factors and real policy is what I would love to see: land use, wetland reclamation, realistic biofuels, honest near-term and long-term energy policy. Planning needs better-quantified degrees of risk from all factors. James Annan, hardly a skeptic, believes that the IPCC might need to consider a maximum sensitivity of 4C. That does not eliminate the dragons, but a tighter range does allow for more prudent planning.

        As far as energy sources, solutions are what are needed. Reports tend to damn instead of offer solutions. Nothing is without risk, all we can hope is to minimize risk. It takes a bit of compromise to move forward politically, I don’t see that enough from either side.

      • Solutions are not needed if there is no problem. Nor do we need central energy planning. That is the problem as I see it.

      • steven mosher

        yes.

        I have no problem accepting 4 as a planning number for land use and development. I will give you an example. In SF we are looking to allow a huge development on Treasure Island. If we use an approach called adaptive governance, the local community would look at the plans for Treasure Island and ask:
        1. What will the sea level look like assuming a sensitivity of 4 and a BAU emissions scenario?
        2. Don’t build or improve in areas that may be flooded.
        3. Err, don’t forget tsunamis, especially on the west coast.

        That’s taking local control of adaptation to a future we cannot precisely control. It’s far preferable to taxing people in Kansas on the CO2. First things first. Take local action where you have the power and put the cost where it belongs. Worried about sea level rise? stop effin building in places that the best science says will be flooded. Duh!

  50. SCIENCE USING THE METHOD OF PATTERNS

    The global mean temperature anomaly (GMTA) is shown in the following graph:
    http://bit.ly/9kpWKp

    The overall global warming rate is about 0.6 deg C per century as shown in the following graph.
    http://bit.ly/kisvrM

    In addition to the linear warming above, the GMTA has oscillating anomaly as shown in the following graph:
    http://bit.ly/lCWDTP

    If this oscillation pattern that was valid for a century is assumed to be valid for the next 20 years, there will be slight global cooling until 2030.

    Here is the global mean temperature from 1880 to 1970.
    http://bit.ly/m4Mqoh

    For 1880, the GMTA was at its peak. For 1970, the GMTA was near its valley. The number of years between this peak and valley is 90. From the above graph, the GMTA for the 1970s is similar to that for the 1880s.

    For 2000, the GMTA was at its peak. For 2090, 90 years later, assuming the above pattern continues in the 21st century, the GMTA will be near its valley. As a result, the GMTA for 2090 would be similar to that for 2000.

    As a result, there would be little change in the GMTA in the 21st century!

    • Girma

      Very good analysis.

      I was not aware that the GMTA around 1880 was the same as 90 years later in 1970, but that is, indeed, what the HadCRUT record shows. Thanks for pointing this out (I doubt that many of the posters here were aware of that observed fact).

      I fully agree with you that actual physical observations are much more meaningful than model simulations or other hypothetical deliberations.

      If the 30-year warming / 30-year cooling cycles continue as they have done since the record started, it is indeed likely that the GMTA in 2090 will be at around the same temperature as 2000!

      This would validate the “climate forcing” hypothesis: Natural = 100%; Anthropogenic = 0%.

      Taking the opposite view: If the recent “speed bump” is now over and warming occurs at the rate assumed by the IPCC models of 0.2°C per decade over the remainder of the 21st century, the GMTA will be 1.6°C warmer than 2000 by 2090.

      This seems to be the range one could reasonably expect, taking the two extreme (but still reasonable) views, i.e. 0 to 1.6°C.

      Assuming there will be a minor GH warming effect (as Spencer/Lindzen have estimated, based on satellite observations), and that CO2 levels will continue at their “business as usual” CAGR of around 0.45% per year (to a level of around 560 ppmv by 2090) we would see around 0.4°C warming from the GHE beyond the normal oscillation by 2090.

      C1 = 390
      C2 = 560
      C2/C1 = 1.4359
      ln(C2/C1) = 0.3618
      ln 2 = 0.6931
      dT (2xCO2) = 0.98°C (no feedback, IPCC, Myhre et al.)
      dT (2011-2090) = 0.98 * 0.3618 / 0.6931 = 0.5°C

      dT (2xCO2) = 0.6°C (Spencer/Lindzen)
      dT (2011-2090) = 0.6 * 0.3618 / 0.6931 = 0.3°C

      I’d say that this is probably the most likely estimate, if one agrees that the GH theory is valid at all and that an atmospheric increase of the trace gas, CO2, will have some perceptible effect on our temperature.

      Anything above that is simply model assumptions and hype.

      Max
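      Max's arithmetic above can be checked directly. A minimal sketch that reproduces the scaling by ln(C2/C1)/ln(2); the two sensitivity values (0.98 and 0.6 deg C per doubling) are the commenter's assumptions, not established figures:

```python
# Warming to 2090 scaled from an assumed 2xCO2 sensitivity by the
# fraction of a doubling implied by the CO2 concentrations quoted.

import math

C1, C2 = 390.0, 560.0                       # ppmv, 2011 and assumed 2090 CO2
scale = math.log(C2 / C1) / math.log(2.0)   # fraction of a doubling, ~0.52

for sens in (0.98, 0.6):   # no-feedback IPCC figure; Spencer/Lindzen figure
    print(round(sens * scale, 2))           # 0.51 and 0.31 deg C
```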

    • steven mosher

      If you really want to see how this should be done, I’ll refer you to this

      http://arthur.shumwaysmith.com/life/content/predicting_future_temperatures

      Basically, the mistake people like you make is this: you think AGW holds that CO2 is the ONLY cause. It’s not, and AGW doesn’t hold that it is. So, take the temperature record and decompose it as Arthur has done above. That will allow you to get at the real trend.

      Fitting a straight line as you do is NOT a trend analysis. Why? Because by doing a least-squares fit, you have assumed that the underlying model is a linear model with no memory, and we know that there is autocorrelation in the time series.

      Anyway, arthur has made a prediction for 2011. I suggest you and others
      have a cage match and see who does better over the next couple years.
      put up.
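      The autocorrelation point can be illustrated numerically. A minimal sketch on synthetic AR(1) noise, using the common rule of thumb that lag-1 autocorrelation r shrinks the effective sample size by a factor of (1 - r)/(1 + r); no real temperature data is involved:

```python
# Ordinary least squares treats every residual as independent, but a
# series with memory has far fewer effectively independent points,
# which widens the uncertainty on any fitted trend.

import random

random.seed(0)
N, r = 120, 0.6                 # 120 "years" of AR(1) noise, coefficient r
x, noise = 0.0, []
for _ in range(N):
    x = r * x + random.gauss(0.0, 1.0)
    noise.append(x)

# Lag-1 sample autocorrelation of the series.
mean = sum(noise) / N
num = sum((noise[i] - mean) * (noise[i - 1] - mean) for i in range(1, N))
den = sum((v - mean) ** 2 for v in noise)
r_hat = num / den

n_eff = N * (1 - r_hat) / (1 + r_hat)
print(round(r_hat, 2), round(n_eff))  # far fewer independent points than N
```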

      • steven mosher

        Here is a comparison of IPCC Projections with observation.

        IPCC => 0.2 deg C per decade warming for 2000 to 2010.

        Observed warming => 0.03 deg C per decade.

        Look at the deceleration of the global warming rate with your own eyes:

        http://bit.ly/gMAfzS

        When are you going to dump AGW?

      • steven mosher

        Your analysis makes the classic error of testing something that AGW never claims. AGW does not claim that the warming will be monotonic or linear.

        basically, you cannot falsify a theory by testing a prediction the theory NEVER MAKES. you create a cartoon version of AGW, then apply the crudest of statistical analyses, one that ignores everything we know, and expect me to be impressed. Please. When you show some understanding of the complexity involved in finding the GHG trend (which is not linear and not memoryless) and distinguishing it from internal variation, then I might be interested. But cherry-picking start and end dates and mindlessly fitting least squares to them is an exercise in fooling yourself. I watch what you do for entertainment value, not for edification. Start with arthur’s work. make your own prediction and we shall see if you have anything interesting to say.

      • steven mosher

        So am I going to believe in global warming (so-called climate change) when it CLEARLY is not happening?

        An unverifiable hypothesis is not science, it is religion.

        Climate always changes (eg: ice ages, Holocene maximum). Why call man-made global warming climate change? Where is the honesty in this?

      • Steven mosher

        What I say in public, they say in private.

        What is the difference?

        1) I think we have been too readily explaining the slow changes over past decade as a result of variability–that explanation is wearing thin. I would just suggest, as a backup to your prediction, that you also do some checking on the sulfate issue, just so you might have a quantified explanation in case the prediction is wrong. Otherwise, the Skeptics will be all over us–the world is really cooling, the models are no good, etc. And all this just as the US is about ready to get serious on the issue. …We all, and you all in particular, need to be prepared.
        http://bit.ly/eIf8M5

        2) Yeah, it wasn’t so much 1998 and all that that I was concerned about, used to dealing with that, but the possibility that we might be going through a longer – 10 year – period [IT IS 13 YEARS NOW!] of relatively stable temperatures beyond what you might expect from La Nina etc. Speculation, but if I see this as a possibility then others might also. Anyway, I’ll maybe cut the last few points off the filtered curve before I give the talk again as that’s trending down as a result of the end effects and the recent cold-ish years.
        http://bit.ly/ajuqdN

        3) The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years [IT IS 13 YEARS NOW!] of data and it isn’t statistically significant.
        http://bit.ly/6qYf9a

        4) I believe that the recent warmth was probably matched about 1000 years ago. I do not believe that global mean annual temperatures have simply cooled progressively over thousands of years as Mike appears to, and I contend that there is strong evidence for major changes in climate over the Holocene (not Milankovitch) that require explanation and that could represent part of the current or future background variability of our climate.
        http://bit.ly/hviRVE

        5) Whether we have the 1000 year trend right is far less certain (& one reason why I hedge my bets on whether there were any periods in Medieval times that might have been “warm”, to the irritation of my co-authors!
        http://bit.ly/ggpyM1

      • steven mosher

        I believe you were addressing this to Girma rather than myself, when you wrote:

        Basically, the mistake people like you make is this: You think AGW believes that C02 is the ONLY cause. Its not and AGW doesnt believe that it is

        I can’t speak for Girma, but “people like me” just see what I read.

        AGW (let’s say that’s IPCC) has told us in AR4 WG1 SPM that all anthropogenic radiative forcings to 2005 equaled 1.6 W/m^2, while CO2 forcing was 1.66 W/m^2. So CO2 is not the ONLY cause, but it was apparently equal to ALL anthropogenic forcings, in the estimation of IPCC.

        I have nothing against models, Steven, it’s just that I do not necessarily confuse them with reality.

        When IPCC tells me that the forcing from anthropogenic aerosols was essentially cancelled out by that of other minor GHGs, I take that at face value. After all, IPCC is the “expert” on anthropogenic forcing, even if it concedes that its “level of scientific understanding” of natural forcing, including solar, is “low” and that “cloud feedbacks remain the largest source of uncertainty”. But we are not talking about these things here, so I accept what is written.

        I read that IPCC models project warming of 0.2°C per decade for the next two decades. However, I then see that there was no warming in the first decade at all.

        When I read a bit later that the reason that models failed in their predictions was that anthropogenic aerosols had been underestimated in the models and that the warming from CO2 (and other GHGs) was therefore masked by the higher than assumed impact from aerosols, I ask myself, “from which physical observations did this information come?” So I check, and I find out that the information did NOT come from physical observations at all.

        Steven, I think you have to understand why “people like me” start to get a bit skeptical of what I am reading.

        It is really NOT that I do not LIKE models. It’s just that I become skeptical not of the models, but of the modelers.

        I read Knutti’s paper and this does not help matters.

        What is missing in all this is empirical data based on actual physical observations or reproducible real-life experimentation (which does not mean model runs).

        You tell me that Spencer estimates climate sensitivity from physical observations, not models. Yeah. And he gets a 2xCO2 value of 0.6°C (instead of 3.2°C, as the “models” do).

        Let’s have that thread on climate sensitivity as Judith has suggested, when she has time to set it up.

        And then let’s see what comes out of it.

        I’m sure it will be educational for “people like you” as well as for “people like me”.

        Max

      • steven mosher

        http://data.giss.nasa.gov/modelforce/RadF.gif

        You cannot begin to discuss the issue in an informed fashion when you don’t acquaint yourself with all the literature.

        “I read that IPCC models project warming of 0.2°C per decade for the next two decades. However, I then see that there was no warming in the first decade at all.”

        Actually, we had a year-long discussion of this topic over at Lucia’s.
        There is a formal test for the projection of 0.2°C per decade. You have to understand the model error, model noise, and the observation model and its error. It is NOT a simple matter. First, the 0.2°C projection was BASED on TSI remaining constant; as you know, it dropped. Second, that conditional hypothesis (if forcing goes AS PROJECTED, then temperature will go up by 0.2°C per decade over the next few decades) is not, strictly speaking, a hypothesis, since the condition (forcings going as projected) hasn’t really happened. Finally, we can reject warming of 0.2°C per decade with high confidence, say 95%. We cannot, however, reject other nulls, such as warming of 0.15°C per decade.
        Simply put, ‘no warming’ in the first 10 years is
        1. an imprecise statement with no error bounds;
        2. inconsistent with strong warming (0.2°C per decade), but not inconsistent with modest warming (say 0.15°C per decade).

        You can see the importance of #2, by looking at those models which are consistent with a sensitivity of 2.1-2.7.
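
        The point about error bounds can be made concrete with a toy calculation. The sketch below is a minimal illustration, not the formal test used at Lucia’s; the data and the function name are made up. It fits an OLS trend to a decade of annual anomalies and asks whether a hypothesized rate falls outside the ~95% confidence interval on the fitted slope:

```python
import numpy as np

def trend_ci(anomalies, t_crit=2.306):
    """OLS trend and ~95% confidence half-width, both in deg C per decade.

    Assumes one value per year; t_crit = 2.306 is the two-sided 95% t value
    for 8 degrees of freedom (10 annual points minus 2 fitted parameters).
    """
    years = np.arange(len(anomalies), dtype=float)
    slope, intercept = np.polyfit(years, anomalies, 1)
    resid = anomalies - (slope * years + intercept)
    dof = len(anomalies) - 2
    sxx = np.sum((years - years.mean()) ** 2)
    se_slope = np.sqrt(resid @ resid / dof / sxx)
    return slope * 10.0, t_crit * se_slope * 10.0

# Hypothetical near-flat decade of annual anomalies (deg C).
obs = np.array([0.42, 0.35, 0.47, 0.40, 0.44, 0.38, 0.46, 0.41, 0.37, 0.43])

trend, half_width = trend_ci(obs)
# A hypothesized rate is rejected at ~95% only if it lies outside the interval.
rejects_020 = abs(0.20 - trend) > half_width
rejects_015 = abs(0.15 - trend) > half_width
```

        With only ten noisy points the interval is wide, which is exactly the point: “no warming”, stated without error bounds, can still be consistent with a non-trivial underlying trend.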

      • Steven Mosher

        You wrote:

        you cannot begin to discuss the issue in an informed fashion when you dont aquaint yourself with all the literature.

        No disagreement there, Steven, except “all” may be a big word.

        We can reject “with 100% confidence” the premise that the warming of the surface GMTA was 0.2°C over the first decade of the 21st century.

        You have rationalized WHY the model projection of 0.2°C warming per decade was wrong.

        Nassim Taleb covers such a case in his The Black Swan. It is the “my prediction was correct except for…” rationalization.

        I once had a sales manager working for me who told me his forecast was correct, except for the year, which was wrong.

        Steven, believe me, I am not “bashing” the modeling community. They are doing the best they can.

        It is just when they start “believing” in their models, as if these were oracles of some sort, I become skeptical.

        The “high confidence” sounds great, but the “my prediction was correct except for…” leaves me cold.

        TSI was deemed essentially insignificant by IPCC AR4, and now it is part of the “except for…”.

        Human aerosols were deemed to be offset by minor human GHGs by IPCC AR4, and now they are part of the “except for…”.

        Steven, I think you and I are both interested in the same thing, namely finding out what our planet’s 2xCO2 climate sensitivity REALLY is.

        So let’s concentrate on that rather than trying to justify or rationalize poor model forecasts of the past.

        Max

      • The simplest way to understand the “Clever Hans” problem in model building is to consider these simple questions.

        After the model has been trained by hindcasting, what happens when a projection is run into the future? What if the model does not agree with what the model builder thinks is a reasonable projection?

        Will the model builder publish the result or will they modify the model and repeat the hindcast until the model meets with their expectations?

        As soon as the model builder modifies the model to meet their expectations, they are training the model not to forecast climate, but rather to use the model builder’s own vision of the future as hindcast data.

        Just like “Clever Hans”, the model does not learn to forecast future climate (perform arithmetic). Instead, it learns, by observing the model builder, to perform according to what the model builder expects.

        It is much simpler for a climate model to predict what the model builder expects as an answer than it is for a climate model to survive unchanged if its view of future climate differs significantly from that of the model builder.

        When a model differs significantly from the views of the builder, it will most likely be assumed to be in error and corrections made, regardless of where the true error lies.
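
        The selection effect described above can be illustrated with a toy Monte Carlo (all numbers are hypothetical): generate an ensemble of candidate models that hindcast equally well but project different futures, discard the ones the builder finds “unreasonable”, and look at what survives.

```python
import random

random.seed(1)

TRUE_TREND = 0.05      # hypothetical "real" future trend (deg C per decade)
EXPECTED_TREND = 0.20  # what the model builder expects to see

# Candidate models: all hindcast equally well, but project different futures.
projections = [random.gauss(0.12, 0.08) for _ in range(10_000)]

# "Clever Hans" selection: projections that look unreasonable to the builder
# are sent back for retuning; only agreeable ones survive unchanged.
surviving = [p for p in projections if abs(p - EXPECTED_TREND) < 0.05]

ensemble_mean = sum(surviving) / len(surviving)
# The surviving ensemble clusters near the builder's expectation,
# regardless of the true trend.
```

        The surviving ensemble mean tells you about EXPECTED_TREND, not TRUE_TREND – which is the “Clever Hans” point in one line.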

      • Steven,
        Which proves exactly my point, however poorly expressed: climate sensitivity to instantaneous CO2 forcings is a metric with poor value. There will always be something – a Pinatubo, a change in TSI, a great fire or other calamity – to look back on after the fact and decide that that was why the prediction failed.
        I would ask you to reconsider my accelerator analogy, the one with a manual transmission, seven speeds, an engageable 4X4 and a three-speed rear end. (There are in fact specialty vehicles like this, by the way.)

      • Thank you Max!

      • Steve Mosher – This comment is motivated by the frustration you express. You’ve been the target for some of the recent arguing above, but what I perceive to be your accurate characterization of modeling, climate sensitivity, and other recent topics will probably be appreciated by objective readers of this thread, even if they don’t comment. The various links you cite have also been useful. I hope interested readers go back and visit your comments of the past couple of days to take advantage of the information you provided. Given the prevailing sentiments on this blog, you should not be surprised when someone else claims the last word on these topics. I think most readers know how to judge between conflicting views, regardless of the order in which they are expressed. In fact, in my opinion, an obsessive insistence on the last word often comes across as a liability rather than an advantage.

        My only other point about models is to repeat a view I’ve expressed previously – given the widespread misunderstanding evidenced here about model design and performance, I would welcome a guest post by a modeler, invited for the specific purpose of answering questions (so as not to be subjected to irrelevant arguments).

      • Fred,
        Bringing in a modeler to tell us why what is not working is actually working because he can tell us it works is not very credible.
        Bring in a non-modeler or someone with no climate dog in the hunt to tell us why.
        The arguments offered so far seem to be variations of appeal to authority or telling us- who spend a lot of time following this- that we are too stupid to understand it.
        The fact is that the climate science community consensus has, since ~1988, made a series of predictions that have failed to materialize.
        Start with that; calling us names – however gently you do it – is not really going to work.

  51. Girma / Manacker

    Thanks, you all have shown empirical values which contrast with the theory. But somehow the modelling “experts” waffle and talk about abstract theories unconnected with reality, when the model predictions have achieved the equivalent of crap so far. And they even compare that to the aircraft industry, for heaven’s sake.

    Climate science abounds with theoretical experts who seem to have no idea of the real world and seem to live in a time warp whose logic is based upon blind belief and models. They seem to be the equivalent of somebody on drugs who has convinced himself that the hallucinations he sees are reality.

  52. steven mosher writes “But they cannot join that debate by arguing that sensitivity is either
    1. input into models
    2. derived SOLELY from models.”

    I think we need to be careful here. Let me concentrate on no-feedback climate sensitivity. This must surely be used as an input to the model which estimates how much positive feedback there is. How else can no-feedback climate sensitivity be estimated except by models? It can never be measured.

    • Jim Cripwell

      You have asked a very pertinent question:

      How else can no-feedback climate sensitivity be estimated except by models? It can never be measured.

      It is correct IMO that it can never be measured.

      The estimates (Myhre et al., Hansen, etc.) are all largely based on theoretical deliberations. IPCC has chosen Myhre et al., who have put this at around 1°C (= 3.7 W/m^2).

      But calculated estimates were also made before there were any models. Arrhenius estimated that this was 1.6°C (with water vapor feedback = 2.1°C) – see Wiki.

      The no feedback climate sensitivity for 2xCO2 is a purely theoretical figure, as you say.

      But it is only of real importance, as far as the ongoing scientific debate on AGW is concerned, when all feedbacks are included.

      Unlike the no-feedback sensitivity, this can be estimated based on physical observations (in other words by empirical data).

      Earlier estimates of climate sensitivity were made using spotty ocean temperature records or questionable radiosonde data. More recent studies have been made using satellite data (Spencer, Lindzen and possibly others), but there is no agreement so far on what these observations have really shown us.

      Steven McIntyre has lamented that there is no “engineering quality” study, based on actual empirical data, confirming the model-based 3°C CS figure of IPCC.

      IMO this is where the determination of the REAL climate sensitivity will eventually come from, rather than simply from model simulations based largely on theoretical deliberations.

      I am looking forward to the thread on climate sensitivity, which Dr. Curry has suggested for some time in the future. It should be interesting for us all.

      Max

      • No-feedback sensitivity can be measured directly without models by comparing the atmospheres of Venus, Mars and Earth. Given that forcings are linear, a very simple numerical solution should be possible. It is my understanding that such an analysis shows that, at similar pressures, the atmospheric temperatures of Venus (CO2), Earth (N2/O2) and Mars (CO2) vary as their distance from the sun. There is no observed increase in temperature on Venus and Mars due to their CO2 atmospheres as compared to Earth’s N2/O2 atmosphere.

      • This is consistent with the observation on earth.

        We know that CO2 levels follow temperature; this has been clearly shown in the paleo records. If CO2 were also to drive temperature, then earth’s climate would never be stable.

        Any decrease in temperature would also lead to a decrease in CO2, which would lead to a decrease in temperature, which would lead to a further decrease in CO2, which would lead to a further decrease in temperature, etc., etc., until the earth was frozen without CO2 in the atmosphere, and life was extinct.

        Similarly, any increase in temperature would lead to an increase in CO2, which would lead to an increase in temperature, which would lead to an increase in CO2, etc., etc., until the oceans heated to the point where life was extinct.

        In both these cases, the simple fact that temperature drives CO2 leads to the extinction of life on earth if CO2 also drives temperature. Simple logic dictates that only one can be true, or life on earth would not exist.

      • Ferd – Consider a system that responds to a forcing with a response, R (e.g., a rise in temperature in response to CO2). Suppose that the temperature increase induces some further change that itself raises temperature (e.g., a rise in water vapor or in additional CO2), and that adds to the original increase by a factor, f, by which R is multiplied, thereby adding Rf to the original R. In other words, after this feedback, the response becomes R(1 + f).

        Now let Rf, the added increase, induce its own increase by multiplying it by f to give Rf^2, and let this continue indefinitely, with each increase inducing its own feedback. This gives us a form of Taylor series in which the original response R becomes R(1 + f + f^2 + f^3 … + f^n). As n goes to infinity, the series goes to R/(1-f). As long as f < 1, the series converges to a finite value greater than R. Only if f approaches 1 would the possibility of a runaway climate emerge. In other words, feedback multipliers do not automatically lead to instability.

        As an example, an estimated water vapor feedback factor f in the climate system appears to be about 0.5, which would double the original response – i.e., R/(1 – 0.5). In the real climate system, multiple feedbacks interact, but yield a final composite estimated f value of about 0.6 – 0.7, which is where a mid-range climate sensitivity estimate of about 3 C per CO2 doubling comes from, based on a no-feedback temperature response of about 1 – 1.2 C. These estimates do not include the type of CO2/CO2 feedback you mention, which if included would further increase f, but not to the point of approaching a value of 1.0 and a runaway climate.

        (Note that in the post on Hansen’s paper, Hansen estimates that the CO2 feedback in response to temperature explains a substantial part of the total temperature response to the warming originating in orbital forcing (a change in the Earth/sun geometry), because that initial warming was itself much too small to explain the magnitude of warming that actually occurred, and so a feedback multiplier was required. That principle is almost certainly correct, whether or not one agrees with Hansen’s quantitation.)

        For one analysis of these concepts, see Climate Feedback.
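
        The convergence Fred describes is easy to check numerically. A minimal sketch, using his illustrative f = 0.5 (the function name is mine):

```python
def feedback_sum(R, f, n_terms):
    """Partial sum of the series R * (1 + f + f^2 + ... + f^(n_terms - 1))."""
    return sum(R * f ** k for k in range(n_terms))

R, f = 1.0, 0.5            # no-feedback response and feedback fraction
closed_form = R / (1 - f)  # limit of the series for |f| < 1; here 2.0

# Partial sums 1.0, 1.5, 1.75, ... approach the closed form.
partials = [feedback_sum(R, f, n) for n in (1, 2, 3, 10, 30)]
```

        Only as f approaches 1 does the sum blow up – amplification without a runaway is possible for any f < 1, which is the argument being made.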

      • Mathematics Fred – a Taylor’s series no less.

        We have a form of the infinite geometric series:

        ∆T = R + Rf + Rf^2 + … + Rf^n
        => ∆T/R = 1 + f + f^2 + … + f^n = 1/(1-f) for |f| < 1
        => ∆T = R/(1-f)

        But how did we get there? R is the temperature response – a factor times the change in forcing, R = f∆F – such that:

        ∆T = f∆F1 + f∆F2 + … = R1 + R2 + …

        Your form is correct if and only if the change in forcing satisfies:

        ∆F2 = f∆F1
        => f = ∆F2/∆F1

        but f = R/∆F1 by your definition, and this gives:

        R = ∆F2 – which is dimensionally and conceptually nonsense, I believe.

        It is also nothing like the much more complex form in your reference.

        The transitions between glacials and interglacials in the Quaternary are much more likely to involve ice feedbacks than the minuscule changes in carbon. Forgive me if Hansen wasn’t talking about the last 2.58 million years. Apart from a recent unfortunate experience – I don’t read Hansen.

      • That should be –

        ∆T = R + Rf + Rf^2 + … + Rf^n
        => ∆T/R = 1 + f + f^2 + … + f^n = 1/(1-f) for |f| < 1
        => ∆T = R/(1-f)

      • I am sure I got it right? I will try again.

        ∆T = R + Rf + Rf^2 + … + Rf^n
        => ∆T/R = 1 + f + f^2 + … + f^n = 1/(1-f) for |f| < 1
        => ∆T = R/(1-f)

      • I am being sabotaged. Never mind – this is just repeating Fred’s formulation.

      • Chief, there is ice involved but there is also water. I am fairly confident the models incorporate the loss of albedo, but what they don’t seem to have a handle on is the hydrologic cycle. Glaciers a kilometer high melting will create new rivers, lakes, swamps, and irrigate the surrounding countryside. I’m sure this changes the regional climate and in turn affects the rate of glacier loss. The land would change again once the glaciers had receded. I can only assume there are changes in the climate sensitivity due to these changes, beyond just the loss of ice, which they already acknowledge (at least my mind insists I have read them acknowledging this). Where I am heading with this is that I don’t agree that the sensitivity derived from deglaciation is of any particular use for the modern landscape, and I wondered if you had an opinion on this.

      • The models I believe use a constant albedo. There are reported changes involving black carbon on snow that may be important. The net ice change is probably minimal – gains in the Antarctic and losses in the Arctic about equal last time I looked.

        Clouds seem to change quite a bit – I don’t know why people think cloud cover should be constant.

        While I am here – there is no infinite series Taylor or otherwise.

        ∆T = f∆F1 + f∆F2 + … + f∆Fn – this is correct for a forcing feedback. It is not a Taylor series, but it doesn’t grow to infinity, because of saturation in the IR bandwidth.

      • Robert – I can’t be sure from the way you wrote the terms, but you may be confusing two different concepts. One is the existence of multiple feedbacks (f1, f2, f3, etc., such as water vapor, ice, clouds, etc.), and the second is the iterative response to any specified feedback (or linear combination, although linearity does not always apply), with each successive feedback inducing its own feedback, so that the Taylor series involves the successive powers of f. It is the latter that converges to a finite quantity, 1/(1-f), as the powers are added without limit, for f < 1. For more on this, you and other interested readers should probably review the Roe reference I linked to. Chris Colose at Mathaware also has a similar treatment of feedbacks.

        For more on the Taylor series as a function of an infinite sum of terms, see Taylor Series, including an example in which the term “x” is used in the manner that I used “f” in describing feedback.

      • Well no, Fred – you have made an error in your formulation of a Taylor series here – of this I am sure.

        We have R = a change in temperature caused by a change in forcing = f∆F – f being some factor whose absolute value is less than 1. ∆T is the sum of all these feedbacks.

        ∆T = f∆F1 + f∆F2 + … + f∆Fn

        This is just a simple addition – there is positive feedback causing warming resulting in more positive feedback etc.

        What you have is:

        R(1 + f + f^2 + f^3 … + f^n)

        => f∆F(1 + f + f^2 + … + f^n) – which is dimensionally unstable as a result of making a false assumption about the nature of the higher-order terms.

        Chris Colose assumes a warming factor f for positive feedbacks which decreases exponentially for an absolute f less than 1 – a formalism which is more or less correct, as we know that the positive feedbacks decrease exponentially. An exponential reduction is implicit in my simple addition – it is the way the radiative transfer dynamics work.

        If you want to do this – you need to evaluate the Taylor series for f first and then solve for a feedback (as opposed to a no feedback) delta T as Chris Colose does.

        We haven’t got to multiple forcings and feedbacks yet let alone non-linearity.

      • We haven’t got to multiple forcings and feedbacks yet let alone non-linearity.

        hopefully a post on this is coming soon

      • Robert – I think you may have confused yourself again, by mistaking f, the fraction of a response fed back into the system, for λo, the no-feedback sensitivity parameter. The initial, no-feedback temperature response (R in my example) is simply λo x F (where F is the forcing, not a “change in forcing”), and the feedback factor f is not part of it. A typical value for λo is ~0.3 K/(W m^-2), but f, the feedback factor, can have any value, is dimensionless, and leads to a stable amplification as long as 0 < f < 1. If you look at Chris Colose’s equation 19, you will see it is equivalent to my formulation, except that I used R in the numerator instead of λo x F, and then, as Chris did, multiplied this by 1/(1 – f), where 1/(1 – f) is given by the expansion I used, 1 + f + f^2 + … + f^n. In addition, f does not decay exponentially, but is assumed to be constant for a first-order case as usually described, although it might conceivably change slightly in a more complex analysis.
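
        To keep the two quantities separate, here is the arithmetic in one place. A sketch: λo ≈ 0.3 K per W/m^2 and F = 3.7 W/m^2 for 2xCO2 are the commonly quoted values, and f = 0.65 is purely illustrative.

```python
LAMBDA_0 = 0.3  # no-feedback sensitivity parameter, K per W/m^2
F_2XCO2 = 3.7   # radiative forcing from doubled CO2, W/m^2

def warming(f, lambda_0=LAMBDA_0, forcing=F_2XCO2):
    """Equilibrium response lambda_0 * F / (1 - f), f a dimensionless feedback."""
    assert f < 1.0, "f >= 1 would mean a runaway response"
    return lambda_0 * forcing / (1.0 - f)

no_feedback = warming(0.0)  # ~1.1 K
mid_range = warming(0.65)   # ~3.2 K, near the oft-quoted central estimate
```

        Note that with f held fixed, the response scales linearly with λo, so uncertainty in the no-feedback number propagates proportionally into the final sensitivity.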

      • I’m aware that not all series grow to infinity. This places limits on the rate of CO2 released due to warming, and limits on the rate of warming due to CO2.

        However, none of this explains why the high-CO2 atmospheres of Venus and Mars do not show a greenhouse effect as compared to Earth, with its low-CO2 atmosphere, when adjusted for pressure and distance from the sun.

        The trouble with climate models is that they use machine learning techniques to improve their hindcasting. As a result, the models do an extremely good job of analysing and responding to the expectancy of the model builder.

        The “experimenter-expectancy effect” (Clever Hans effect) is widely known from animal studies. What model builders have been slow to recognize is that this same effect takes place in machine learning situations as well.

        Climate models are modelling the expectations of the model builders; they are not modelling climate. The most glaring example of this is the near-universal failure of climate models to predict that the rate of temperature increase would moderate from around the year 2000 onwards.

        Over the past 150 years of thermometer records, we have had alternating periods of warming and moderation, 30 years apart. We were due to begin a period of moderation beginning in 1998 if the pattern repeated, and it appears that it has.

        The models did not catch this because the model builders themselves did not expect a period of temperature moderation to occur. Those models that did predict the moderation were rejected as being in error, and they were modified to “fix” the problem.

        Those models that did not match expectations were rejected by the model builders in a process similar to “selection of the fittest”. Not because the models were the “fittest” at predicting climate, but rather because the models were fittest at predicting what the model builders expected would happen.

      • I did indeed – I was wondering where this f came from. I was thinking that f had the dimensions of lambda – but it is dimensionless, as you say.

        Your statement: ‘This gives us a form of Taylor series in which the original response R becomes R(1 + f + f^2 + f^3 … + f^n).’

        The form of the infinite geometric series is:

        1 + f + f^2 + … + f^n = 1/(1-f) as n → ∞, for |f| < 1.

        The solution converges to a finite value by the exponents of f – which is an approximation of the physical process of IR band saturation. That is – there is no runaway warming not because the Taylor series converges to a finite solution but because there is an underlying physical reality.

        You then multiply it by R to get the temperature response. I blame your sloppy formulation for my confusion because the original expression is not the infinite geometric form of a Taylor series at all.

      • like a lambda to the slaughter – :oops:

      • manacker writes “But it is only of real importance, as far as the ongoing scientific debate on AGW is concerned, when all feedbacks are included.”

        I am afraid you have lost me. Of course total climate sensitivity is what we are trying to estimate, but surely no-feedback sensitivity and total sensitivity are closely related. I assume the relationship is linear. If the no-feedback sensitivity were to be halved, then total sensitivity would be halved as well. So the answer we are looking for is completely dependent on the numeric value of the no-feedback sensitivity. And we have no idea what this value is, since we cannot measure no-feedback sensitivity.

      • Jim Cripwell

        You are correct that “no feedback CS” is a number that cannot be measured in practice, AFAIK.

        And yes, it is the theoretical basis for CS with feedback, which could be measured. The earlier estimates were based on radiosonde and ocean data, both of which were considered unreliable. Most recently Spencer and Lindzen have measured radiation balance changes with warming from ERBE and CERES satellites and, using these physical observations, have estimated the 2xCO2 CS with feedbacks to be around 0.6C. These studies were published after IPCC AR4 WG1, so were not included in this report, which estimated the 2xCO2 CS at 2 to 4.5C (average 3.2C) based on model simulations.

        So you see that there is a major discrepancy between these estimates.

        The mainstream “model community” has not jumped on the Spencer and Lindzen findings with glee. In fact, they (Fasullo & Trenberth) tried to falsify Lindzen & Choi 2009, and L&C have since issued an addendum, revising their estimate slightly from 0.5 to 0.6C to correct for calculation errors. Spencer has also critiqued the L&C 2009 calculation method and recalculated the CS using the L&C data, also arriving at a slightly higher 2xCO2 CS of around 0.6C. Spencer has a paper in print now, which uses CERES data and arrives at 0.6C, as well.

        So it looks like there is a major discrepancy between the satellite observations and the model simulations, but I’d say it’s still too early to know which side is right.

        Other more recent model studies seem to be coming up with lower estimates than those claimed in AR4 (Schwartz). Then there have been studies using super-parameterization for clouds (Wyant et al.) which show negative cloud feedback and therefore much lower overall CS.

        Just for a rough comparison, the model-based IPCC estimate (average between the models) is 3.2C. Cloud feedbacks alone are estimated to contribute 1.3C of this (total excluding clouds = 1.9C, of which net water vapor – lapse rate ~ 0.8C and surface albedo ~ 0.1C).

        So it is apparent that the role of clouds is of key importance. If the Spencer & Braswell 2006 observations are correct, cloud feedback is strongly negative, instead of strongly positive, as postulated by IPCC based on model simulations. Just correcting for this alone would put 2xCO2 CS at a figure below 1C (and AGW no longer a potential threat to humanity), so you see how important this all is.

        Until we have a more solid handle on the 2xCO2 climate sensitivity than we do today, it is foolish to make any long-term projections into the future as IPCC has done.

        Steve McIntyre is 100% right. This is the number we need to define, and not based solely on model simulations, but based on empirical data supported by actual physical observations.

        Max

        Will we have a better grasp of this number as we get more satellite data in the future? I am optimistic that we will.

      • manacker, I would like to pursue this a little further. I agree completely that any measured values of total sensitivity are extremely important. But that is not my concern.

        My concern is the estimated value for total sensitivity derived by the IPCC and others from the no-feedback sensitivity. If the no-feedback sensitivity is unknown, which I believe it is, then surely the estimated value for total sensitivity is also unknown. That is the issue I am concerned with.

        Am I correct? Is it true that the total sensitivity estimated by the IPCC and others is completely unknown?

  53. Steven, first, I did go look at Lucia’s Lumpy model post. She followed Schwartz 2007; her results were good at hindcasting, and the model output was a sensitivity of 1.7C. Since then, Schwartz has published an observational estimate of 1.2C. You state that this can’t be done because the observations do not incorporate the sensitivity to equilibrium (my understanding is that most models account for between 40-60% of their warming after the transient warming period). I have pointed out that the models do not appear to allow the same period of relaxation for natural forcing, and in fact turn it off like a light switch. Can you show or tell me where I am in error, or should I continue with my opinion that I shouldn’t take equilibrium sensitivity seriously until I see some of the more recent warming attributed to earlier forcings?

  54. Ilya Zaliapin and Michael Ghil – found here – use a simple energy balance model to examine sensitive dependence in climate. The relevant results for sensitivity are shown here in Fig. 5.

    It relates to the concept of sensitivity by showing that sensitivity is variable in non-linear systems such as climate: lower sensitivity away from bifurcation points and extreme sensitivity near them.

    ‘Sensitive dependence does exist in the climate system, as well as in climate models — albeit in a very different sense from the one claimed in the linear work under scrutiny — and we illustrate it using a classical energy balance model (EBM) with nonlinear feedbacks. EBMs exhibit two saddle-node bifurcations, more recently called “tipping points,” which give rise to three distinct steady-state climates, two of which are stable. Such bistable behavior is, furthermore, supported by results from more realistic, nonequilibrium climate models. In a truly nonlinear setting, indeterminacy in the size of the response is observed only in the vicinity of tipping points.’

    There are multiple examples of bi-stable states in Earth system indices. The PDO, SAM, ENSO, AMO, AO, PNA, IOD, SOI – all are indices of non-linear, non-stationary and non-Gaussian phenomena with bi-stable states. Indeed, in the analysis of Tsonis and colleagues, at least 4 of these are linked as planetary standing waves in the spatio-temporal chaos of Earth’s systems.

    I am amused by the idea of an engineering quality study however. Engineering is a field where approximations rule – there is rarely enough information for rigorous analysis. We use ‘expert judgement’, rules of thumb, codification from historic practice, factors of safety, empirical or numerical approximations, physical models, etc. Apart from the physical models, sounds very much like climate to me.

    We would never, however, use simple inductive reasoning, the allegorical or the metaphorical.

    • I am amused by the idea of an engineering quality study however. Engineering is a field where approximations rule – there is rarely enough information for rigorous analysis. We use ‘expert judgement’, rules of thumb, codification from historic practice, factors of safety, empirical or numerical approximations, physical models, etc. Apart from the physical models, sounds very much like climate to me.

      Only for things that aren’t critical. If it’s critical, you can take it to the bank that it’ll get studied to death, one way or another. There’s no point in worrying about the diameter of a bolt that will just handle the tension when bolts only come in standard sizes. You pick the next size up. Don’t confuse safety factors and standard sizes with voodoo.

      • Are you an engineer?

        In the real world bolts are not uniform, for instance – materials are not uniform, there are size tolerances, stresses change from dead to live loads, stress concentrations occur. The design codes specify a factor of safety – the nominal strength has a statistical safety factor, and loadings have a safety factor. Why do you think it so rarely falls down?

        Aeroplanes and cars get tested to within an inch of their lives and sometimes beyond. Things like a bridge or a dam can’t be tested at full scale. There are so many variables – wind, waves, ships running into it, mega floods, cyclones, earthquakes, tsunami, ground conditions – all known more or less approximately. It works, but not how you are imagining. In fact, the only thing that matters in engineering is that it does work and that means primarily that it doesn’t kill anyone.
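
The limit-state arithmetic Chief describes – factored loads checked against factored resistance – can be sketched in a few lines. The factors below are illustrative placeholders, not taken from any particular design code:

```python
def design_ok(nominal_strength, dead_load, live_load,
              phi=0.9, gamma_d=1.25, gamma_l=1.5):
    """Limit-state check: factored capacity must meet or exceed factored demand.

    phi     - resistance factor (covers material and size variability)
    gamma_* - load factors (cover uncertainty in dead and live loading)
    """
    demand = gamma_d * dead_load + gamma_l * live_load
    capacity = phi * nominal_strength
    return capacity >= demand

# A member with 100 kN nominal strength under 40 kN dead + 30 kN live load:
# demand = 1.25*40 + 1.5*30 = 95 kN, capacity = 0.9*100 = 90 kN, so it fails.
passes = design_ok(100.0, 40.0, 30.0)
```

The point is that the factors are calibrated statistical margins rather than guesses – which is the argument above about codified engineering judgement.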

      • Chief –
        the only thing that matters in engineering is that it does work and that means primarily that it doesn’t kill anyone.

        Long ago and far away in Babylon, it mattered a LOT to the engineer, because if it fell down and killed someone, his life was forfeit. Sometimes I think that might still be a good idea. But then I remember ……I’m an engineer, too. :-)

      • I remember seeing the Tacoma Narrows bridge film in Engineering school. The guy who designed it shot himself. I think it’s a bit like being an officer and a gentleman.

      • Just imagine how many engineers were put to death before physicists developed fracture mechanics.

      • JCH –
        Engineers were doing their thing for 3,000 years before the first physicist showed up – what kept you? :-)

      • Chief –
        I remember seeing the Tacoma Narrows bridge film

        I got to see Gertie gallop too, but we also had lots and lots of rocket and aircraft failure films to watch. Watching a V2 do loop-de-loops or drop onto the control blockhouse and explode, or an aircraft peel off a wing or two, gave us a sense of the bad things that could happen if we did it wrong. Later I found out how small a mistake (not mine) it took to dump a major spacecraft into the middle of the Pacific. Just like the OCO and Glory failures.

        Later still, I wrote my thesis on the role (and cost) of failure in getting us to the technological civilization we enjoy today. It’s been a wild 3,000 year ride.

      • I have a personal experience that’s funny only because by the grace of God no one got killed.

        It involved explosives and rocks raining down on the local resort pool deck.

      • Chief –
        It involved explosives and rocks raining down on the local resort pool deck.

        LOL! BTDT – All my high school classmates remember me as the one who tried to blow up the school (not true – but another lesson in the consequences of failure). Today they’d lock me in the dungeon and throw the key away. The world has changed, but not for the better in some respects -

        http://wattsupwiththat.com/2011/04/29/friday-funny-science-safety-run-amok/

      • Galloping Gertie was interesting because a new phenomenon – resonance – surfaced. It had never occurred to anyone before that gusts of just the right frequency could start a bridge rocking and rolling.

        There’s a metaphor in there for the climate debate. Just when you think you understand everything, something comes along and demonstrates how spectacularly ignorant you are. It wasn’t that resonance was hard to understand, it was just that nobody ever thought about the black swan wind gusts of just the right (or wrong) speed, direction, and frequency.

        Such is reality.

  55. The paleo record of earth’s temperature for the past 600 million years shows it has two stable states: 22C and 12C. This argues strongly against the simple linear forcings used by the IPCC and present climate models.

    For most of the past 600 million years the average temperature of the earth has been 22C, which is coincidentally the temperature humans find most comfortable and, energy savings aside, likely the setting for the thermostat in your house.

  56. Steven Mosher

    Simple question:

    Comparing the decadal global mean temperature trends of the last two decades, is the trend accelerating or decelerating?

    Here is my result: http://bit.ly/aH6Xps

    I hope we can at least agree on this!

    I accept we don’t agree on what the result implies.
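
The question of accelerating versus decelerating comes down to comparing ordinary least-squares slopes fitted to successive decades. A minimal sketch with made-up annual anomalies (the linked plot’s actual data are not reproduced here):

```python
def ols_slope(y):
    """Least-squares slope of y against its index 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Two hypothetical decades of annual temperature anomalies (deg C):
decade1 = [0.20 + 0.02 * i for i in range(10)]   # trend 0.02 C/yr
decade2 = [0.40 + 0.01 * i for i in range(10)]   # trend 0.01 C/yr
accelerating = ols_slope(decade2) > ols_slope(decade1)   # decelerating here
```

Whatever the slopes imply, at least the arithmetic of the comparison is unambiguous.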

  57. The simplest test to judge the suitability of any model is to increase the resolution.

    If a model is modelling the physical world, then as you increase the resolution of the data, the accuracy of the answer will improve.

    If however the model becomes unstable as the resolution increases, you know that the model is in reality modelling the modeller: because the model does not yet know what the modeller expects at the higher resolution, it becomes unstable.

    So, a necessary test of any model should be to increase the resolution and ensure that the accuracy increases, not the instability.
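
For well-posed numerical schemes this refinement test is the standard grid-convergence check: the error against a known solution should shrink as the step size shrinks. A minimal sketch using explicit Euler on dy/dt = -y (a stand-in problem, not a climate model):

```python
import math

def euler_decay(dt, t_end=1.0, k=1.0, y0=1.0):
    """Integrate dy/dt = -k*y from t=0 to t_end with explicit Euler steps dt."""
    y = y0
    for _ in range(int(round(t_end / dt))):
        y += dt * (-k * y)
    return y

exact = math.exp(-1.0)
errors = [abs(euler_decay(dt) - exact) for dt in (0.1, 0.05, 0.025)]
# For a convergent first-order scheme, halving dt roughly halves the error.
```

A model whose solution degrades as the step shrinks is failing exactly the test described above. (The converse also matters: explicit schemes can go unstable when the step is too coarse for the fastest process being resolved.)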

  58. Does sensitivity emerge from the models or is there some subjective choice involved?

    The range of values that emerges from the models – 1.5 to 4.5 – suggests that there is indeed a subjective choice involved, as these differences arise from, among other things, the range of plausible values for ozone, solar irradiance, aerosols, etc. But each of these solutions is itself subjectively chosen as ‘plausible’ by the modellers. The models are tuned to observations and then projected into the future.

    What neither they nor we know is the absolute extent of the model solution space. For each of the models this would require a systematic evaluation of the range of solutions found from the range of initial and boundary conditions – potentially thousands of runs. This can’t be done at this time and is not justified by the state of the art of the models.

    The modelling community wants access to 10,000 times current computing speeds, first of all to refine the grid and then to perform these sorts of stability exercises. I think they first need to understand the role of clouds – far more basic science from physical observation.

    At this stage a single run is subjectively chosen as being plausible by the modellers and forwarded to the IPCC. We can have no confidence in the range of possible solutions – the error bars – as these have not been systematically explored.

    Remember that small changes in inputs can produce large changes in outputs in non-linear systems – as in the simple EBM above. The significance of this can be seen in Edward Lorenz’s experience with his early convection model. Lorenz restarted a calculation with an input truncated after a few decimal places. This should have made very little difference to the output, but instead the ‘solution’ changed dramatically, shifting into a different region of phase space.

    The climate models use partial differential equations in x, y and z. The evolution of solutions of 3 ordinary differential equations in x, y and z is shown here – http://to-campos.planetaclix.pt/fractal/lorenz_eng.html – you can push the start button to change the start point. The characteristic butterfly shape of a complex solution space (or phase space) is shown. In Earth systems the solution space contains a large number of stable states (butterfly wings).

    These are difficult concepts, as we need to stop thinking in terms of simple causality; the human brain is not built for it. 1 plus 1 does not equal 2 in the non-linear universe, and the last word belongs to the Dragon Kings. Very few people seem to get it, yet it is essential to a more nuanced understanding of Earth systems.
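
Lorenz’s truncation experience is easy to reproduce with a crude forward-Euler integration of his 1963 system; the step size and initial values below are illustrative choices:

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz (1963) convection equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def max_separation(x0a, x0b, n=5000, dt=0.01):
    """Run two trajectories differing only in x0; track the largest gap in x."""
    sa, sb = (x0a, 1.0, 1.0), (x0b, 1.0, 1.0)
    gap = 0.0
    for _ in range(n):
        sa = lorenz_step(sa, dt)
        sb = lorenz_step(sb, dt)
        gap = max(gap, abs(sa[0] - sb[0]))
    return gap

# Truncating the initial condition in the fourth decimal place - a change of
# roughly one part in ten thousand - eventually yields an O(1) divergence:
gap = max_separation(1.506127, 1.506)
```

The gap grows to the size of the attractor itself, which is the point about indeterminacy of individual trajectories in a chaotic system.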

  59. When it comes to the greenhouse effect, someone has to explain Ferenc Miskolczi’s observation that the infrared transmittance of the atmosphere has remained constant for the last 61 years. It is an empirical observation based on the NOAA database of weather balloon observations going back to 1948. The title of his presentation of this result to the European Geosciences Union in Vienna in April was “The stable stationary value of the Earth’s IR optical thickness.” During the period covered by the NOAA database he used, the carbon dioxide concentration of the atmosphere increased by 21.6 percent. It is this added carbon dioxide that is supposed to be responsible for global warming, and that IPCC models use to predict dangerous warming ahead. If it is true that it does not contribute to the greenhouse effect, as Miskolczi’s result requires, then all predictions of warming by IPCC climate models are wrong. Is anyone going to tackle this issue? It has to be resolved, because if true the global warming movement is simply KAPUT. You cannot brush it off by saying that Arrhenius knows best – you must evaluate his science. And whatever you find out about it, put it into a peer-reviewed journal.

    • Arno

      No one has bothered to answer Ferenc Miskolczi because the theory seems a bit mad. He uses clear-sky radiosonde data to determine that the optical depth of the atmosphere was relatively constant. All things being equal, that can’t be the case with an increase in CO2, so a decrease in water vapour is posited. A decrease in specific humidity is not seen in any of the global datasets.

      He gets there by dubious paths. Various fundamental physical principles are invoked which turn out to be impossible – such as energy equilibria at the surface – or approximate and inapplicable – such as the virial theorem. It turns out, on enquiry, that these relationships rely on radiosonde data that cannot possibly be accurate to the limits required to prove a constant optical depth. The theory has, I note, been enthusiastically embraced by a few in the sceptic camp whom I can only regard as hopeful, but lacking the technical background to realistically assess the claim.

      So we have a theory that is on very shaky theoretical ground and is falsified by the predicted fall in atmospheric water vapour not occurring. I think it unlikely – but I am far from dismissing it out of hand. The great precedent is Sir Gilbert Walker, who looked at vast amounts of data and divined a connection between sea level pressures in the Pacific and the Indian monsoon; that was not widely accepted for 50 years.

      From my perspective though – all sky data is a better option for an alternative explanation. Clouds in other words.

      Cheers

  60. Unfortunately Arno, that’s the issue currently with Climate Science. The people supporting AGW based Climate Science tend to work on the basis that models trump scientific methods and empirical observations. You see classic examples of such arguments in this thread.

  61. Miskolczi’s own reply on another forum, was as below

    ” In this article we are not talking about competing greenhouse theories. The main point of the paper is that in the last 61 years the global average infrared optical thickness of the real spherical refractive inhomogeneous atmosphere is 1.87, and this value is not changing with increasing CO2 amount. This means that no AGW exists based CO2 greenhouse effect.

    This is a very simple statement. To conquer this statement you must come up with your own global average atmosphere and optical thickness, and show the methodology of its computation.

    It is irrelevant what you or K. Trenberth, R. Lindzen, or the RC gurus like G. Schmidt from NASA or R. Pierrehumbert, P. Levenson, ect. may guess, assume or believe about the physics of greenhouse theories. Even my theory which supports the 1.87 value is irrelevant. Here no useless radiative budget cartoons or GCMs, or assumed feedback processes or arbitrary constants are needed. You do not need to worry about what the global h2o, temperature and pressure field is doing and what is the relationship among them. The atmosphere and its radiation field knows exactly what it should do to obey the laws of thermodynamics, or how to obey the laws of the conservation of energy, momentum and mass, or how to obey the energy minimum (entropy maximum) or Hamilton principles on local, regional or global scale.

    If you really want to know what is going on with the global average IR radiation field and you or your experts have some knowledge of quantitative IR radiative transfer, you (or the others) may compute precisely this physical quantity using only first principles and real observations. There is no other way around. The true IR flux transmittance, absorption or optical depth is fundamental for any or all greenhouse theories.

    If you do not trust my 1.87, compute it yourself, see how it is changing with time and verify or falsify my computation. Here there are no theories to chose, but the correct straightforward computation of a single physical quantity which gives the accurate information about the absorbed amount of the surface upward radiation. I am patiently waiting for your results. It is not very easy, but you or your group may give a try. If you can not do this with your resources, then further discussion of this topic here is useless.

    After we agree on this issue, we may start our debate on the theoretical interpretations of the results that was outlined in my 2007 Idojaras article, or on the questions how to relate the absorbed surface radiation to the surface temperature or to the downward IR flux density. ”

    So, if anybody wants to falsify Miskolczi’s theory, instead of pointless theoretical handwaving, they should go and measure for themselves and see. That’s how real science functions.

    But unfortunately, Climate Science seems to rely too much on theories, models and handwaving, and seems to discard empirical observations which contradict those theories or models.

    • Venter,

      ‘So, if anybody wants to falsify Miskolczi’s theory, instead of pointless theoretical handwaving, they should go and measure for themselves and see.’

      Miskolczi’s claims have been falsified as incorrect on a daily basis. If the physical model he is proposing were correct, then we should observe constant optical density in the IR of absorbing gases in laboratory experiments in which a condensing gas is present. No such evidence exists. In fact, gas lasers would be a lot harder to use if his theory were right. Therefore, there is not a realizable, reproducible physical mechanism that, on a molecular level, can account for what he is claiming.

      More than that, he also does not provide a physical mechanism for the atmosphere ‘choosing’ water as the gas that condenses to maintain constant O.D. in the IR. There are several other gases that condense at atmospheric temperatures and pressures and would provide the same response via condensation – acetone, CH2Cl2 and methanol are all examples. Why would the atmosphere ‘choose’ water, which, per molecule, costs more energy to condense than these other, less concentrated gases?

      I think Miskolczi has really tricked himself using the poorly conditioned humidity data. If he really wanted to convince people that his theory was meaningful, he’d get an IR lamp, fill an air-tight container fitted with an inlet valve with wet atmospheric air, and measure the O.D. Then begin to add compressed CO2 to the container and continue to measure the O.D. If it changes, he’s wrong.

      Anybody could do that test. Maybe as part of the thread on the scientific method someone could actually try to use it to test anything they’re claiming. So far, the thread has been a failure on that front.
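
The laboratory point being made here follows from Beer–Lambert additivity: in a given band, total optical depth is the sum of each absorber’s coefficient times its column amount, so adding CO2 must raise the total unless another absorber falls to compensate. A schematic grey-band sketch with made-up coefficients (real values are wavelength-dependent and come from spectroscopic line databases):

```python
def optical_depth(columns, k):
    """Grey-band optical depth: sum over gases of coefficient * column amount."""
    return sum(k[gas] * columns[gas] for gas in columns)

# Illustrative absorption coefficients and column amounts (arbitrary units):
k = {"co2": 0.004, "h2o": 0.0001}
before = optical_depth({"co2": 300.0, "h2o": 10000.0}, k)   # 1.2 + 1.0
after = optical_depth({"co2": 390.0, "h2o": 10000.0}, k)    # 1.56 + 1.0
# With water vapour held fixed, more CO2 strictly increases the total.
# Holding the total at its old value would require the h2o column to fall
# to (before - 1.56) / 0.0001 = 6400 - the compensating drying the theory
# needs, which the humidity datasets do not show.
```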

      • Maxwell,

        Till somebody goes and does actual measurements and proves him wrong, it is not falsified. Lab-level experiments, and theorising based on them, do not falsify actual field measurements.

      • Venter,

        ‘Till somebody goes and does actual measurements and proves him wrong, it is not falsified.’

        The ‘actual measurements’ have been done – over and over and over again, for 100 years and running now. IR absorption of wet atmospheric air is one of the oldest spectroscopic measurements. No one has ever measured a constant O.D. when adding more and more absorbers. Ever.

        Therefore, there is not an observable physical mechanism that accounts for gas molecules ‘knowing’ the O.D. of the gas in which they are embedded and then changing phase in response to said knowledge. That is the fundamental physical mechanism on which Miskolczi’s entire theory rests.

        I think you also have an odd appreciation of the scientific method. It’s not my, or any other researcher’s, job to prove an unproven theory ‘wrong’. It’s the theorist’s responsibility to find an experimentalist who will further and further prove the reliability of the theory. Because as it stands now, Miskolczi hasn’t done the field measurements either. So you ought to be pestering him to prove his own theory ‘right’ rather than me to prove him ‘wrong’.

        That’s actually how science works.

      • Considering that Miskolczi’s paper was published last year, could you please show who measured and published that Tau is not equal to 1.87?

        That is the precise point of contention that has been raised by Miskolczi in his response.

      • Venter,

        If your entire argument concerning Miskolczi’s work depends on defending a number he computed and published in a rubbish publication, then you’ve already lost. For that matter, if his entire argument depends on clinging to the actual number he calculated, then he’s already lost the argument.

        Basically everyone with any say on this issue feels that way.

        His handling of the argument with Roy Spencer pretty much sealed the deal in my opinion. He wouldn’t even begin to discuss any of the physics involved. He just pouted like a 6-year-old when people didn’t take him at his word.

        I have no doubt that he knows how to do math and calculate O.D. A high school student could do that.

        What I highly doubt, however, is that he realizes the physical mechanisms implied by his work that are not only unproven by his calculation of the optical density, but fly in the face of 100 years of precisely controlled and highly reproducible experiments exposing the behavior of gas phase molecules.

        Knowing that, I don’t need to publish a paper disputing the number he gets for the optical depth of the atmosphere. By the time he’s calculating that number, he’s already made enough unphysical assumptions about the atmosphere that it’s not worth trying to reproduce his efforts.

        So even if I get the exact same number for the optical depth, which I could do in two lines of algebra, the physics I use to get that number would be very different from the physics he used. And that’s what I’m discussing.

        If you want to discuss physics, that’s great. I’ll do that. But I’m not going to give any room on whether or not 1.87 matters in any meaningful way without discussing the physics. Because it doesn’t mean anything right now.

      • Maxwell,

        I did a lot of reading after your post and I believe you’re right. I came to understand that Miskolczi made questionable assumptions that could have produced his erroneous result.

        Thanks for that.

  62. The radiosonde data are not, IMO, accurate enough – by an order of magnitude – to put much confidence in observations of such fine changes in optical depth.

    The theory demands that the specific humidity decrease with increasing CO2. This does not seem to be happening.

    Everyone has their favourite theory – we need to be open minded.

    • Christopher Game

      The theory does not demand that the specific humidity decrease with increasing CO2. The character of the Planck-weighted greenhouse-gas optical thickness is such that, beyond the effect of CO2, three main factors determine changes in it: land-sea surface temperature, atmospheric temperature, and water-vapour column amount.

    • “The theory demands that the specific humidity decrease with increasing CO2. This does not seem to be happening.”

      On the contrary, I saw a series of graphs posted the other day on WUWT, from NOAA, showing that specific humidity had been decreasing for the past 50 years, especially at higher altitudes.

      I didn’t make a note of them but it was within the past week for anyone looking to resolve this question.

      • fred berple

        Here’s the link to the NOAA data from 1948 to today.
        http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=1&Submit=Create+Timeseries

        I’ve also plotted the data versus the HadCRUT temperature anomaly:
        http://farm6.static.flickr.com/5221/5672878278_5d6f19bab5_b.jpg

        The data have been questioned, but they show a reduction in specific humidity (water vapor content) since 1948 at the same time as the HadCRUT surface GMTA has shown warming.

        Max

      • fred berple

        Sorry.

        The legend got mixed up on the earlier chart. Here is the link to the corrected one.

        http://farm6.static.flickr.com/5190/5672926110_d883c6da85_b.jpg

        Max

      • Max, am I the only one surprised at the humidity vs temp graphs, or does my simple mind read too much (or not enough??) into it? At the base level it seems to flip the AGW theory of feedback. Can you elaborate on, “The data have been questioned”?

      • Rod B

        You ask regarding the NOAA specific humidity record, which shows a gradual reduction in tropospheric moisture content from 1948 to today, based on radiosonde data:

        Can you elaborate on, “The data have been questioned”?

        IPCC AR4 WG1 Ch.3 (p.271) tells us:

        The network of radiosonde measurements provides the longest record of water vapour measurements in the atmosphere dating back to the mid-1940s. However, early radiosonde sensors suffered from significant measurement biases, particularly for the upper troposphere, and changes in instrumentation with time often lead to discontinuities in the data record (e.g. see Elliott et al. 2002). Consequently, most of the analysis of radiosonde humidity has focused on trends for altitudes below 500 hPa and is restricted to those stations and periods for which stable instrumentation and reliable moisture soundings are available.

        My translation:

        The physically observed data do not validate the hypothesis of increased moisture content with warming to support constant relative humidity; observed data have thus been selected to include only those data points, which confirm the hypothesis.

        IPCC then go on in AR4 WG1 Ch.3 (p.272):

        Trends in specific humidity tend to follow surface temperature trends with a global average increase of 0.06 g kg-1 per decade (1976-2004). The rise in specific humidity corresponds to about 4.9% per 1°C warming over the globe. Over the ocean, the observed surface specific humidity increases at 5.7% per 1°C warming, which is consistent with a constant relative humidity. Over land, the rate of increase is slightly smaller (4.3% per 1°C), suggesting a modest reduction in relative humidity as temperatures increase, as expected in water-limited regions.

        Coming back to Elliott et al.:
        http://journals.ametsoc.org/doi/pdf/10.1175/1525-7541(2002)003%3C0026%3ALTHTRI%3E2.0.CO%3B2

        A closer look shows why results above 500 hPa were eliminated from the study. These are the results, which showed both less absolute moisture content (lower specific humidity) as well as lower relative humidity with warming (the NOAA curve I posted goes to 300 hPa at all latitudes). These would kick out the whole concept of a positive water vapor feedback with warming, so would be very embarrassing for IPCC.

        Elliott et al. conclude, based on the selected data below 500 hPa that SH (moisture content) increased slightly with warming, but not at a rate sufficiently strong to maintain constant RH, as is assumed by the IPCC models in estimating water vapor feedback.

        A shorter-term study by Minschwaner & Dessler over the tropical ocean confirms a slight increase in moisture (SH) with warming, but only at a small fraction of the rate needed to maintain constant RH according to the Clausius-Clapeyron equation, cited by IPCC to argue for constant RH with warming.
        http://mls.jpl.nasa.gov/library/Minschwaner_2004.pdf

        The analysis suggests that models that maintain a fixed relative humidity above 250 mb are likely overstating the contribution made by these levels to the water vapor feedback.

        This is a very guarded understatement, if one looks at Fig. 7 in the report. I have extended this graph to show the full specific humidity impact with 1°C warming.
        http://farm4.static.flickr.com/3347/3610454667_9ac0b7773f_b.jpg

        As can be seen, the “constant RH assumption” exaggerates the actually observed moisture increase with warming by a factor of around 10:1 (and hence exaggerates the model-based water vapor feedback estimates greatly).

        I discussed this all with Gavin Schmidt at RealClimate, who debated the validity of the NOAA data with me for a while before simply censoring out my posts.

        But all of this points to a serious question regarding the IPCC model-based claim of strongly positive water vapor feedback with warming.

        Max

        PS Sorry this “elaboration” got so long. But there is a lot of physical evidence pointing to a grossly exaggerated model-based water vapor feedback assumption by IPCC in its latest AR4 report.
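
The Clausius–Clapeyron scaling at issue in the comment above is straightforward to compute: d(ln es)/dT = L / (Rv * T^2), which comes out near 6-7% per °C at surface temperatures (treating the latent heat as constant is itself an approximation):

```python
def cc_scaling(T, L=2.5e6, Rv=461.5):
    """Fractional rise in saturation vapour pressure per kelvin, from the
    Clausius-Clapeyron relation d(ln es)/dT = L / (Rv * T**2), with latent
    heat of vaporisation L (J/kg) treated as constant and Rv the gas
    constant for water vapour (J/(kg K))."""
    return L / (Rv * T ** 2)

rate = cc_scaling(288.0)   # about 0.065, i.e. roughly 6.5% per deg C at 15 C
```

Constant relative humidity would require specific humidity to climb at roughly this rate as temperature rises; the dispute above is whether the observed rise, especially aloft, comes anywhere near it.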

  63. I don’t want to waste any more time on this – it is simply not very interesting. Optical depth varies with CO2, water vapour, methane, nitrous oxide and halocarbons. I will refer you to scienceofdoom instead.

    ‘However, the key points are:

    1. optical thickness of the total atmosphere is not a very useful number – the useful headline number has to be changes in TOA flux, or radiative forcing, or some value which expresses the overall radiative balance of the climate system;
    2. optical thickness calculated as constant over 60 years for CO2 and water vapor appears to prove that total optical thickness is not constant, due to increases in other well-mixed “greenhouse” gases;
    3. clouds are not included in the calculation, but surely overwhelm the optical thickness calculations and cannot be assumed to be constant.’

    http://scienceofdoom.com/2011/04/22/the-mystery-of-tau-miskolczi/

    My intention was to suggest to Arno that there were perhaps more promising areas of study – not simply to prompt a defence of the nonsensical in a superficial and idiomatic version of the language of science.

  64. Again, the simple point raised by Miskolczi is “go measure for yourselves and prove me wrong”.

    All the theorising is handwaving. Observation and empirical measurement trump models and handwaving.

    • “Observation and empirical measurement trumps models and handwaving.”

      Miskolczi did neither of these – not a single measured data point for optical depth. He assumes a model with a number of simplifying assumptions (both unverified ones and ones known to be incorrect) and uses the model to calculate results.

  65. Is the current global warming unprecedented?

    In 100 years the globe has warmed by 0.6 deg C.
    http://bit.ly/jpwgOX

    In a single year, from 1956 to 1957, the globe warmed by about 0.25 deg C. http://bit.ly/j8E4Pa

    In a single year, from 1963 to 1964, the globe cooled by about 0.25 deg C.
    http://bit.ly/kouWIY

    For global warming over 100 years of only about twice its annual variation, is the current global panic rational?

    In tossing a coin, the probability of getting two heads – two consecutive warmings – is 25%.

    It is not at all unprecedented.

    • Girma: The way I read twentieth century warming is as follows. First there was a ten-year cooling from 1900 to 1910. This was followed by a thirty-year warming from 1910 to 1940. It ended in the winter of 1939 to 1940 (the start of World War Two), when the temperature dropped precipitously and then leveled off after the war. The heat wave during World War Two shown in all temperature curves is totally phony. After the war the temperature did not go anywhere until the 1998 super El Nino arrived. The warming initiated by the super El Nino was oceanic, lasted four years, raised global temperature by a third of a degree – the other half of that 0.6 degrees – and then stopped. There hasn’t been any warming since then except that associated with the ENSO oscillations that started up with the La Nina of 2008. They had been interrupted by the unscheduled super El Nino, and the interval that followed it was taken up by a warm period I call the twenty-first century high. It was warm but temperature did not rise. It, and not some imaginary greenhouse effect, is the cause of the very warm first decade of this century. We are back to a climate controlled by ENSO oscillations, so you can expect a series of alternating El Nino and La Nina periods indefinitely in the future. And none of that dangerous warming that IPCC models predict – that is pure GIGO.

      • Arno Arrak

        I think almost everyone here, with the possible exception of Fred Moolten, has “gotten the picture”, as Girma has shown us graphically.

        The long-term GMTA record shows statistically indistinguishable multi-decadal cycles of 30-year warming followed by 30-year cycles of slight cooling, all resembling a sine curve on a tilted axis, with a slight overall warming trend.

        The atmospheric CO2 record (since 1958) shows a steady increase at a CAGR of around 0.4% per year.

        The GMTA record is a “random walk” statistically speaking. There is no statistical correlation between atmospheric CO2 and GMTA. Where there is no robust correlation, the case for causation is extremely weak, if not non-existent.

        Ocean current oscillations (ENSO, PDO, etc.) seem to correlate much more closely with GMTA than atmospheric CO2, so the case for causation here would actually be much stronger. A mechanism explaining these cyclical swings has not yet been established, although there have been studies suggesting a link between these swings and solar activity. Spencer has also suggested a link to cloud formation.

        So the uncertainties and unknowns are still very large (as Dr. Curry has stated many times).

        IPCC has unfortunately focused myopically on anthropogenic forcings only, essentially relegating natural forcing (including solar) to the “insignificant” category.

        It turns out that this has been a fatal flaw, as the past decade has confirmed (no observed warming despite a CO2 increase to record levels and IPCC model forecasts of 0.2C per decade warming that never happened in real life).

        But the AGW faithful simply ignore the failed IPCC model forecasts, the multi-decadal cycles, the past decade, etc. as irrelevant (or deny that they existed at all!).

        Max

      • Max

        But the AGW faithful simply ignore the failed IPCC model forecasts, the multi-decadal cycles, the past decade, etc. as irrelevant (or deny that they existed at all!).

        Fortunately, as demonstrated in the climategate emails, they admit it in private. The uncertainty of AGW is being raised. However, there are opportunities for raising revenue, so it is not easy to stop the scaremongering from those who are going to benefit.

  66. According to Time magazine in 1974, the earth cooled 2.4F since 1940. However, since climate scientists living today have a much better understanding of what happened back in 1974 than the people who were living at the time, these records have since been adjusted to show there was no cooling at all.

    What is unprecedented is the re-writing of history and the corruption of science that has resulted.

  67. Fred Moolten, you mention how the aerosol factor can be adjusted, then you say the adjustments don’t affect the climate sensitivity of the model. That strikes me as a contradiction. Also, a model can have parameters that adjust for ocean heat uptake and cloud feedback, and these could have a large effect on the climate sensitivity result.

    • Mike – Climate sensitivity is the value that translates a particular forcing (change in balance between incoming and outgoing energy) into a temperature change. The models utilize a variety of inputs to compute climate sensitivity, including reasonably firm data on the forcing from CO2 changes (such as described by Myhre et al in 1998), as well as data on feedbacks, but they do not use aerosol forcing, one reason being that its value is very imprecisely known. Because of that uncertainty in the case of aerosols, a number of different forcings were tested to determine which, when used with the model’s climate sensitivity value, best matched observational data. The sensitivity parameter was not itself adjusted.

      Cloud feedbacks do affect climate sensitivity, and different models compute different sensitivities because of differences in the way they parametrize cloud behavior. Ocean uptake is more complex, but its estimation can affect calculated climate sensitivities in a number of indirect ways. The main point regarding your comment, though, is that model adjustments regarding aerosols were made to the strength of their forcing, and not to the value of climate sensitivity that each model computed, which was left unchanged.
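
The definition above can be made concrete with a back-of-envelope sketch. The logarithmic forcing expression is the simplified fit from Myhre et al. (1998) cited in the comment; the sensitivity parameter value is an assumption chosen only to illustrate the arithmetic, not an endorsed estimate.

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 forcing fit from Myhre et al. (1998): dF = 5.35 ln(C/C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

dF = co2_forcing(560.0, 280.0)   # a doubling gives ~3.7 W/m^2
lam = 0.8                        # K per (W/m^2): an assumed, illustrative sensitivity
dT = lam * dF                    # equilibrium warming implied by that sensitivity
print(f"forcing = {dF:.2f} W/m^2, equilibrium dT = {dT:.1f} K")
```

Note the separation the comment describes: the forcing calculation is relatively firm, while the disputed uncertainty is packed into the sensitivity parameter.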

      • Fred:

        You err when you say “Climate sensitivity is the value that translates a particular forcing (change in balance between incoming and outgoing energy) into a temperature change.” Actually, the climate sensitivity (aka equilibrium climate sensitivity) is the value that translates a particular forcing into a change in the equilibrium temperature. In climatological jargon, the “equilibrium temperature” is the temperature that is attained when all forcings are held constant over an unbounded period of time.

        The importance of distinguishing between the temperature and the equilibrium temperature lies in the fact that the former is observable while the latter is not. It follows from the non-observability of the equilibrium temperature that all of the claims that are made about the numerical value of the climate sensitivity are not falsifiable, thus lying outside science. The portion of the argument for anthropogenic global warming that rests upon the assignment of a probability (or probability density function) to the value of the climate sensitivity is not a scientific argument.

      • Terry – With all due respect, most participants in this blog know that “climate sensitivity” refers to an equilibrium temperature change, and I assumed Mike probably knew that. My statement didn’t specify this simply because this knowledge was assumed. However, if you interpreted my statement to imply that climate sensitivity dictates the immediate value of a temperature response to a forcing, it’s good that the subject was raised so that any misunderstanding can be dispelled. (If you read some of my earlier comments in this thread, you’ll notice that I do specify the equilibrium condition, as well as the fact that equilibrium for a CO2 doubling would be approached asymptotically over many centuries, which is why I pointed out that it was not an observable quantity, but nevertheless a useful metric that permits different models to be compared for their abilities to simulate observable temperature changes.)

      • Alexander Harvey

        Hi,

        Your: “Climate sensitivity is the value that translates a particular forcing (change in balance between incoming and outgoing energy) into a temperature change.”

        You should be able to find a wording that also discounts the stored heat flux and hence is a guide to plausible observable temperature changes.

        Alex

      • Fred:

        Based upon your response, I gather that we agree on the non-observability of the climate sensitivity. Do we agree on my contention that, because of the non-observability, speculations regarding the magnitude of the climate sensitivity lie outside science?

      • Terry – There has already been at least one climate sensitivity (CS) thread in this blog, and I gather another will be forthcoming, so I don’t want to comment here at extreme length. However, I would state that although the equilibrium temperature response to doubled CO2 is non-observable, the data and computations leading to a particular CS value are subject to comparison with observational data, and in that sense, CS is a useful metric (among others). Some previous sources of information on CS have been cited, including the Knutti and Hegerl 2008 review in Nature Geoscience and Chapters 8 and 9 plus references in AR4 WG1.

        As one example of CS estimates (and as AR4 indicates, there are many other approaches to the subject), it is possible to use spectroscopic CO2 data, radiative transfer equations, the Stefan-Boltzmann equation, and some approximations designed to match clear-sky vs all-sky irradiances to compute how much “forcing” (a change in W/m^2) and therefore how much warming can eventually be expected from a CO2 doubling – this value is generally thought to be reasonably accurate (the Myhre et al 1998 reference provides details), and infrared radiance measurements from the surface and from satellites provide confirmatory evidence for some of these inputs. One can then estimate the magnitude of feedbacks, which when combined with the warming from CO2 itself, yield a CS estimate. Feedbacks are a subject of more disagreement, but the principal ones – water vapor, ice/albedo, and clouds – are also amenable to observations. We can therefore say that in theory, with perfect data, and precise observational confirmation of feedback effects, we could accurately compute the temperature change that would result from doubled CO2.

        The time constants for approaching that temperature are also subject to some imprecision – see the recent discussions in Isaac Held’s Blog (e.g., posts 5, 6, and 8) and the recent Energy Balance draft by Hansen et al. As Alex Harvey pointed out in the comment above yours, the differences between observed temperature changes and estimated equilibrium sensitivity imply a rate of heat transfer to the ocean that can be compared with observations. Along the same lines, it’s important to note that changes over time to a persistent forcing (e.g., an increased CO2 concentration) are likely to differ from very short term temperature changes induced by transient climate perturbations (e.g., ENSO variations), and so CS estimates derived from the latter are not clearly extrapolable to CS estimates for changing CO2.

        Now not only do we face some imprecision in arriving at these various estimates, but we may be overlooking other factors that would play out over many centuries to modify the trajectory of temperature change. In the meantime, however, the comparisons between estimated and observed phenomena that can be made as checks on CS estimates are useful in their own right in telling us how climate is likely to respond in the much nearer future (decades to a century) to CO2 increases. As you are aware, though, we are still operating within a frustratingly wide range, with 95% confidence limits for CS usually given as 2 to 4.5 C, and with reports of values outside of those limits indicating that the issue won’t be resolved soon.

        To summarize, CS values per se are not of scientific value, but each estimate implicitly predicts near-term effects within the realm of scientific evaluation and can be informative.

      • Christopher Game

        Fred Moolten writes: “We can therefore say that in theory, with perfect data, and precise observational confirmation of feedback effects, we could accurately compute the temperature change that would result from doubled CO2.” He should have included the need, for his purpose, of computational facilities the like of which man hath not seen. And he should have noted that his “perfect data” are far beyond the realm of any feasible practice. That he left out these provisos shows that his thinking process is far divorced from reality. He might as well have claimed, in theory, to be able to foretell your next thought. Christopher Game

      • To avoid misunderstanding of my meaning, I used the term “per se” to indicate that an equilibrium temperature response one thousand years in the future is not scientifically informative simply because we can’t wait one thousand years to observe it. That value, however, and the information used to compute it allow us to predict nearer term climate behavior that can be compared with observations. CS is scientifically useful, therefore, even if its measurement far in the future is not.

      • Fred:

        I agree with you in the respect that estimation of the numerical value of the climate sensitivity parameter of a specific model could be a step toward prediction of observable temperatures at Earth’s surface thus leading to scientifically interesting results. However, a curiosity of IPCC climatology circa 2007 was that the climate models that were referenced by the IPCC in its arguments for anthropogenic global warming did not make (falsifiable) predictions. They made (non-falsifiable) “projections.” In this way, IPCC climatologists shielded their works from falsification and placed the IPCC’s argument for AGW outside science.

        By the way, the 2 to 4.5 Celsius per CO2 doubling interval in which the IPCC expressed “confidence” was not the entity which in statistics is called a “confidence interval.” A confidence interval cannot be determined, for in view of the unobservability of the climate sensitivity the sample size is nil. The interval in which the IPCC expresses confidence is the product of a procedure which the IPCC does not describe in detail. The least mysterious aspect of this procedure is that Bayesian parameter estimation is used in generating entities that in statistics are called “credible intervals.” It is easy to show that each of these credible intervals is an unproved proposition and thus that the IPCC has been unsuccessful in its attempt at bounding the climate sensitivity with a sample of size nil. I’ll provide a proof if you’d like.

      • “It is easy to show that each of these credible intervals is an unproved proposition and thus that the IPCC has been unsuccessful in its attempt at bounding the climate sensitivity with a sample of size nil. I’ll provide a proof if you’d like.”

        Terry – Some of this has been discussed previously in the thread on probabilistic estimates of climate sensitivity, but I would be interested in seeing the proof you have offered to provide. I’m familiar with credible intervals (aka “Bayesian confidence intervals”). I’ll reserve judgment on your conclusion that the sample size is “nil”. There is no sampling of thousand year temperature changes, of course, but there are multiple samples of data from which CS has been calculated. My own reservations lie mainly with the degree of subjectivity inherent in Bayesian priors used for some of the paleoclimatologic CS estimates, but at some point, I would like to review these in more detail. I’m less concerned with subjectivity regarding some of the CS estimates described in AR4 Chapter 8, involving CO2 forcing combined with modeled feedback estimates, but data uncertainty remains an issue here. I would certainly be reluctant to accept precise CS estimates at this point, but the broad range currently cited and based on a large multiplicity of studies doesn’t strike me as unreasonable.

      • Fred:
        Please find the proof you’ve requested and associated commentary attached.
        Let Y designate the equilibrium climate sensitivity. Let X designate an interval in Y. According to IPCC Working Group I (2007 report), in generating posterior probability density functions (posterior PDFs) over Y, climatologists have widely used a type of prior probability density function (prior PDF) whose probability density is constant over X and nil outside of X. I’ll call a prior PDF of this type a “flat prior.” A flat prior asserts one has perfect information that the value of Y lies within X and no information about the value of Y within X.
        On the basis of “credible intervals” defined on posterior PDFs, the IPCC argues it is likely that the climate sensitivity lies between 2 and 4.5 Celsius per doubling of the CO2 concentration. A weakness in the IPCC’s argument lies in the fact that, by variation of the bounds on X, one generates flat priors that are of infinite number. Furthermore, as I am about to show, non-flat priors that are equally uninformative about the value of Y within X are of infinite number. Thus, the selection of a particular flat prior is arbitrary.
        My proof is as follows. Let T designate a partition of X into infinitesimal segments. Let Pr( T) designate a function that maps the value of Y within each of the segments to the associated probability. By the definition of “information,” a prior PDF that is uninformative about the value of Y within X maximizes the entropy of Pr( T ). Maximization of the entropy yields the conclusion that Pr( T ) is a constant.
        The probability density of a segment is this constant divided by the length of this segment. If the segments are of equal length, the probability density is constant with the consequence that the prior PDF is flat. Otherwise, the probability density is not constant with the consequence that the prior PDF is not flat. The entropy maximizing flat priors and non-flat priors share the property of being perfectly uninformative about the value of Y within X.
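
The entropy-maximization step above is easy to check numerically for a discrete partition: among probability assignments over a fixed set of segments, the constant (uniform) one has the largest Shannon entropy. A minimal illustration:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]

# The uniform assignment attains the maximum, log(number of segments);
# any informative (non-uniform) assignment has strictly lower entropy.
print(shannon_entropy(uniform), shannon_entropy(skewed))
```
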
        Without additional constraints, flat uninformative priors and non-flat uninformative priors are of infinite number; by the law of non-contradiction, the proposition is false that each one of these priors is true. This false proposition is a premise to the argument that is made in generating the posterior PDF that underlies each credible interval. In view of the falsity of this premise, each posterior PDF and credible interval is unproved.
        Each posterior PDF is simply an unproved model. As the equilibrium temperature is not observable, this model is not falsifiable. Thus, the IPCC’s credible intervals are inherently unprovable.
        There is a circumstance in which credible intervals are provable. In uncovering this circumstance, I’ll relax my identification of the variable Y as the equilibrium climate sensitivity, allowing me to reference Y to a different variable.
        The circumstance that results in proved credible intervals is realized in the type of experiment that is a sequence of Bernoulli trials. In a trial of such an experiment, the associated event is observed and by this observation it is determined whether the outcome is O or NOT O.
        In 1 trial, it is a fact that the relative frequency of O will be 0 or 1. In 2 trials, the relative frequency will be 0 or ½ or 1. In N trials, the relative frequency will be 0 or 1/N or 2/N or…or 1.
        Now let N increase without limit. The relative frequency becomes known as the “limiting relative frequency.” Let the variable Y designate the limiting relative frequency. Let X and Y be congruent. The experiment generates the partition T of X in which each segment is of equal length 1/N. The bounds on X are fixed at 0 and 1. In this circumstance, there is exactly 1 uninformative prior. It is a flat prior. As the prior PDF is true and there is no model, the posterior PDF is proved.
        This finding leads to Laplace’s rule of succession. Let Pr( O ) designate the probability of O. The expected value of the limiting relative frequency is unique in possessing the properties that are required of Pr( O ). Thus, the expected value is assigned to Pr( O ). This assignment yields Laplace’s rule. It is
        Pr( O ) = (x + 1) / (n + 2)
        where n is the number of trials and x is the number of trials in which the outcome was O.
        Now, let O represent the proposition that the equilibrium temperature lies in a specified interval given that the CO2 concentration lies in a specified interval. As the equilibrium temperature is not observable, x and n are necessarily nil. It follows that Pr( O ) = ½. The equilibrium temperature varies independently of the CO2 concentration.
        Laplace’s rule is based upon the assumption that observed events are the sole providers of information but climatology has theories of atmospheric heat transfer. Can these theories not provide additional information?
        In a generalization of Laplace’s rule, theoretically based information applies constraints on entropy maximization, converting the flat prior PDF to a curved one. However, in applying these constraints it is necessary to know the amount of the available information, and in the determination of this amount it is necessary for there to be observed events. As equilibrium temperatures are unobservable, there are no observed events, and the conclusion stands that the equilibrium temperature varies independently of the CO2 concentration. In reaching the opposite conclusion, climatologists have taken the scientifically impermissible step of assuming their theories of atmospheric heat transfer to be true. Thus, rather than supplying indeterminate information about the equilibrium temperatures, these theories are treated as providing perfect information.
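
Laplace’s rule of succession quoted above is simple enough to verify directly; with no observed trials (x = n = 0) it returns the value 1/2 used in the argument.

```python
from fractions import Fraction

def laplace_rule(x, n):
    """Laplace's rule of succession: Pr(O) = (x + 1) / (n + 2)."""
    return Fraction(x + 1, n + 2)

print(laplace_rule(0, 0))   # no observations: 1/2
print(laplace_rule(7, 10))  # 7 successes in 10 trials: 8/12 = 2/3
```
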

      • Terry – Thanks for taking the time to reply. Some of the points you mention were discussed in detail in the thread addressing Probabilistic estimates of climate sensitivity. I certainly agree that the bounds and the shape of PDF functions used as priors affect the range of climate sensitivity estimates – that is self-evident – but although that increases uncertainty, it doesn’t invalidate this form of analysis as one means of arriving at sensitivity estimates. The bounds (and to some extent the shapes) are constrained by physics – one can reasonably assume, for example, that climate sensitivity isn’t negative (which would require the climate to cool in response to increasing CO2), and so a lower bound of zero reflects that reality, while values >10 can be excluded for a variety of reasons, and so it is not entirely a matter of subjective judgment. Without the constraints, an infinite number of possible priors would imply an infinite range of sensitivity estimates, but clearly that is not true when one has to conform to the reality of physics. Moreover, these Bayesian methods are not the only approach to sensitivity estimates (see e.g., AR4, WG1, chapter 8), and the alternative approaches yield a range similar to the Bayesian analyses described in Chapter 9. For these reasons, I feel comfortable with the canonical 2 – 4.5 C range, with the recognition that it is not fixed in stone but is probably a reasonably accurate assessment of the current state of climate physics.

        I can’t agree that the sample is of “size nil”, for reasons I stated previously.

  68. ferd berple

    Why not calculate CO2 sensitivity by comparing the high-CO2 atmospheres of Venus and Mars to the low-CO2 atmospheres of Earth and Jupiter?

    Calculating CS by observing Earth alone is like estimating the speed of the car you are riding in without looking outside. If CO2 is affecting temperature, then it will be apparent on both Mars and Venus to a much greater extent than on Earth and Jupiter, and this can be calculated directly from observation.

    Right now we are not able to resolve this question because we only have one data point. No matter how many lines you draw, so long as they each go through the single data point, they are equally possible.

    Adding 3 more data points gives us a way to determine which lines are likely and which are not.

  69. Fred Moolten writes “To summarize, CS values per se are not of scientific value, but each estimate implicitly predicts near-term effects within the realm of scientific evaluation and can be informative.”

    I have enormous difficulty with this. The IPCC claims that it is almost certain that CO2 causes CAGW, and yet ” CS values per se are not of scientific value”. I cannot see how these two ideas are compatible.

    The mind boggles.

  70. That is the exact definition of “science without method” for you, Jim.

    You might as well tell Webster to include this definition in their dictionary for describing climate modelling.

  71. ferd berple

    “To summarize, CS values per se are not of scientific value, but each estimate implicitly predicts near-term effects within the realm of scientific evaluation and can be informative.”

    If something is “not of scientific value” then anything derived from that, including “implicitly predicts near-term effects” is also not of scientific value. You cannot make a silk purse out of a sow’s ear.

    The conclusion “can be informative” does not follow “within the realm of scientific evaluation”. Clearly CS cannot be informative within the realm of scientific evaluation if it is also “not of scientific value”.

    Science without method indeed. I restate my position. A comparison of the atmospheres of Venus, Mars, and Jupiter with Earth provides the necessary information to resolve this question through observation, which currently cannot be done by observing Earth alone.

    Why rely on model building (artificial planets) when we have real planets and real data? Most likely because the real planets do not support the agenda-driven theories of climate science and the IPCC. So they rely on artificial planets to generate the data to back their ideas.

  72. ferd berple

    If the theory of GHG is correct, it will withstand rigorous scientific examination of the atmospheres of our neighbors. From what I have seen it does not. The temperature of our neighbors varies with the pressure of the atmosphere and distance from the sun and is remarkably unaffected by the composition of the atmosphere.

  73. ferd berple

    The problem for climate science is that this finding flies in the face of the GHG theory. It destroys the ability of environmental movements that want to use CO2 pollution as a means to roll back industrialization and return the earth to a “natural” state. It destroys the ability of politicians and their backers to raise funds and control economic development by taxing energy production. It destroys the careers of climate scientists that have promoted the GHG theory above all else as the primary driver for climate.

    These groups would never recognize any such finding no matter how conclusive, as it directly contradicts their objectives and future welfare. Thus, the primary driver for climate science is not science, it is self-interest.

  74. Chief Hydrologist

    I realize this is a bit OT here, but what does this recent Nature article about the “Agulhas leakage” mean?
    http://www.nature.com/nature/journal/v472/n7344/full/nature09983.html

    http://www.youtube.com/watch?v=wq12nIQ5SuE&feature=player_embedded

    The Atlantic Ocean receives warm, saline water from the Indo-Pacific Ocean through Agulhas leakage around the southern tip of Africa. Recent findings suggest that Agulhas leakage is a crucial component of the climate system and that ongoing increases in leakage under anthropogenic warming could strengthen the Atlantic overturning circulation at a time when warming and accelerated meltwater input in the North Atlantic is predicted to weaken it. Yet in comparison with processes in the North Atlantic, the overall Agulhas system is largely overlooked as a potential climate trigger or feedback mechanism. Detailed modelling experiments—backed by palaeoceanographic and sustained modern observations—are required to establish firmly the role of the Agulhas system in a warming climate.

    Max

    • What it means and what they say it means are very different. In fact this is a classic case of reasoning within AGW, as opposed to looking at the findings. You will note that the lengthy abstract is almost entirely about what AGW says (in the author’s view) and how this small finding might affect (or “trigger”!) that. Specifically, this finding acts against the supposedly dangerous mechanism that might freeze Europe due to global warming. Meaning that now Europe will bake instead, along with the rest of us. Or not. Heaven forbid it actually sustains equilibrium.

      This is how uncertainty is masked within AGW. The Gospel is assumed but please forgive us we have some questions about the details. Oreskes would count this as a pro-AGW finding. What a colossal joke.

    • ‘But this team of scientists – drawn from the US and Europe – say wind shifts further south make it likely that leakage is increasing.’

      http://unity.lv/en/news/322568/

      So this is the link to global warming. Being an agnostic on CO2 causing much of the recent warming at all – it requires ignoring ERBE, ISCCP and HIRS as well as what we know about oceanographic variability – I think the anthropogenic role in any of this is indeterminate at this time, although there should be changes occurring as a result of greenhouse gas emissions.

      The particular link involves sea level pressure differentials between the southern polar region and lower latitude regions. There is an anti-cyclone in the polar region driving westerly circumpolar winds and current. If there is high pressure in sub-polar regions and low pressure in the polar region, the anti-cyclone is constrained to higher latitudes. With higher pressure at the polar front, storms spin off the polar vortex and push into lower latitudes.

      The measure of this is the Southern Annular Mode (SAM) index – SAM trended more positive (by convention, low pressure at the polar region) in the decades to the late 1990s. Here’s one from AR4. The trend has been partially attributed to ozone destruction and partially to ‘natural’ causes.

      It is probably the case that the ozone layer – both in volume and temperature – is influenced by UV intensity as in this study from Lockwood et al 2010(a).

      SAM seems likely to be more negative in coming decades as solar UV emissions decline. There is probably little in this to suggest an amelioration of cold conditions – as in this Lockwood et al 2010(b) study – any time soon.

      The authors assume again that variability is entirely anthropogenic in origin which is likely to prove to be hilariously funny.

  75. The Press-Courier – June 11, 1986.

    Hansen predicted that global temperatures should be nearly 2 degrees higher in 20 years

    http://bit.ly/lf6CgS

    Hansen’s prediction applies for the period from 1986 to 1996.

    Here is the global mean temperature trend for the period: http://bit.ly/k7tmTe

    Which is a warming of 0.08 x 2 = 0.16 deg C.

    Changing deg C into deg F, we get a warming of 9*0.16/5 = 0.29 deg F for the period.

    That is, Hansen predicted 2 degrees warming, but the actual observation is only 0.29 degrees warming.

    Hansen’s prediction exaggerates the actual warming by a factor of 6.9!

    As a result, Hansen’s climate sensitivity of 3 for doubled CO2 should instead be only 3/6.9=0.43 deg C.
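
The arithmetic in this comment can be reproduced directly; the 0.08 C/decade trend is the commenter’s own figure, taken as given here (the commenter revises it in a follow-up below).

```python
# Reproducing the comment's arithmetic; the 0.08 C/decade trend is the
# commenter's figure, taken as given for illustration.
trend_c = 0.08 * 2             # deg C over two decades
trend_f = trend_c * 9.0 / 5.0  # Celsius change converted to Fahrenheit
factor = 2.0 / trend_f         # predicted 2 F versus the stated observation
implied_cs = 3.0 / factor      # scaling the assumed sensitivity of 3 C

print(f"observed: {trend_f:.2f} F, factor: {factor:.1f}, implied CS: {implied_cs:.2f} C")
```

This reproduces the 0.29 F, 6.9, and 0.43 C figures in the comment; the conclusion still stands or falls with the chosen trend and period.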

    • IPCC projected 0.2 deg C of warming per decade.
      http://bit.ly/caEC9b

      The actual observation is a warming of 0.03 deg C per decade.
      http://bit.ly/hE3vv1

      Which is an exaggeration of 6.7.

      RULE OF THUMB. Always divide all IPCC and Hansen predictions by about 7 to get the observed values

      Using our new found rule of thumb, the predicted global warming range of 2 to 4.5 deg C by the end of this century will be 0.3 to 0.6 deg C!

    • I was wrong. I will do it again.

      The Press-Courier – June 11, 1986.

      Hansen predicted that global temperatures should be nearly 2 degrees higher in 20 years

      http://bit.ly/lf6CgS

      Hansen’s prediction applies for the period from 1986 to 2006.

      Here is the global mean temperature trend for the period: http://bit.ly/ji0dlG

      Which is a warming of 0.21 x 2 = 0.42 deg C.

      Changing deg C into deg F, we get a warming of 9*0.42/5 = 0.76 deg F for the period.

      That is, Hansen predicted 2 degrees warming, but the actual observation is only 0.76 degrees warming.

      Hansen’s prediction exaggerates the actual warming by a factor of about 3.

      As a result, Hansen’s climate sensitivity of 3 for doubled CO2 should instead be only 1 deg C.

  76. In a 1984 paper Hansen said temperatures would rise by 3 +or- 1.5 degrees C with a doubling of CO2.

    Wonder what quality of reporter 25 cents an issue bought in 1986?

  77. The Press-Courier – June 11, 1986
    http://bit.ly/lf6CgS

    Washington (AP) – Scientists are warning Congress that the long-theorized, life-threatening overheating of the Earth from man-made air pollution is now frightening reality.

    “The fact that the greenhouse effect is real is proven,” said James Hansen, director of the National Aeronautics and Space Administration’s Goddard Institute for Space Studies.

    “Global warming is inevitable – it’s only a question of magnitude and time,” said Robert Watson, director of NASA’s upper atmospheric program. “We can expect significant changes in climate in the next few decades.”

    Hansen, Watson and other scientists delivered their somber assessments Tuesday as the Senate Environment subcommittee on environmental pollution opened two days of hearings on the greenhouse effect.

    Backed by recent findings of severe ozone layer depletion in the atmosphere over Antarctica, they reiterated scientific predictions that the warming threatens the globe with floods, drought and more skin cancer by the middle of the 21st century.

    And for Sherwood Rowland, a chemistry professor at the University of California, the picture further down the road is even bleaker, “If you have greenhouse effect going on indefinitely, then you have a temperature rise that will extinct human life” in 500 to 1000 years.

    The greenhouse effect is a shorthand term for global warming caused by chemicals such as chlorofluorocarbons, carbon dioxide, methane and nitrous oxide accumulating in the atmosphere and trapping heat. The pollutants also destroy the ozone layer, which helps protect humans from cancer-causing ultraviolet rays of the sun.

    Under the greenhouse scenario, the Earth gets baked. Rich farmlands turn into deserts. Forests wilt and die. Coastal areas are inundated by oceans.

    Hansen predicted that global temperatures should be nearly 2 degrees higher in 20 years.

    • No, a reporter wrote that Hansen predicted that. I found a peer-reviewed paper from 1986 in which he stated how much temperature would rise with a CO2 doubling.

      So something is wrong. My money is on how bad a reporter 25 cents an issue buys.

    • Hansen likes to change those 20-year predictions he’s made to 40 (and then blame the reporter) when they don’t actually come to pass, though. Give him time Girma, he’ll move the goalposts.

      Hansen couldn’t care less if he’s accurate or not as long as his doom and gloom message captures the headline of the day.

      • Hansen was asked what would happen in 40 years if there were a doubling of CO2. He answered the question with that assumption. It’s in the book; it’s not on the internet account of the interview. The author has acknowledged his error in the Salon article.

        When CO2 reaches 560 ppm, check on his prediction.

        “When I interviewed James Hansen I asked him to speculate on what the view outside his office window could look like in 40 years with doubled CO2. {DOUBLED CO2} I’d been trying to think of a way to discuss the greenhouse effect in a way that would make sense to average readers. I wasn’t asking for hard scientific studies. It wasn’t an academic interview. It was a discussion with a kind and thoughtful man who answered the question. You can find the description in two of my books, most recently The Coming Storm.”

      • I’m aware of the story. I personally don’t buy it. No one said anything until the initial 20 years had passed and someone noticed and wrote about the failed prediction. Then the story you have highlighted above came to light.

        The figure 40 is also, in my opinion, too out of the ordinary for an estimate of time: 10, 15, 20, 25, 50, 75, 100, perhaps, but 40? I’m not biting. Not to mention that even with the added 20 years, his prediction after 23 years is still miles and miles off the mark.

        I don’t really have much inclination to discuss Hansen at length though, as I happen to think Hansen is nothing short of a complete and utter nutcase who will say anything to the press to get noticed – like his “coal carrying death trains” (insert eyeroll here).

      • I’d be inclined to believe the story was in error since his projections from the 1988 paper come nowhere near a 2C rise in 20 years. Unless he changed his mind by over 1C in just two years.

  78. It’s comforting that you don’t “buy it.” That you focus on 23 years and not the doubling of CO2 is odd.

    • The issue is that in 1986, Hansen predicted that global temperatures would be nearly 2 degrees higher in 20 years.

    • So now all you need to do is demonstrate that “doubled CO2” was part of the original question and not part of the revision that came to light after the prediction had failed – at which point 20 years was changed to 40 years.

    • And how on earth could CO2 be doubled in 20 (or even 40) years? Why would this kind of qualifier even be in a question regarding this short a time span?

      • steven mosher

        It’s a model diagnostic, not a prediction.

      • dmartin

        As Steve Mosher wrote (about the 2xCO2 in 20 years):

        It’s a model diagnostic, not a prediction.

        This means some human along the line fed in the formulas or equations to arrive at this growth rate. It figures out to a CAGR of 3.5% per year; even over 40 years the CAGR is 1.75% per year.

        Since Mauna Loa started in 1958, atmospheric CO2 concentrations have increased at a CAGR of a bit more than 0.4% per year.

        This was also the CAGR over the most recent 5-year period (after a slightly higher CAGR of almost 0.5% over the previous 5 years).

        Human population grew at a CAGR of 1.7% over the almost identical period 1960-2010 (so CO2 grew at around one-fourth the rate of the population growth).

        The UN estimates that population growth will level off and will average a CAGR of 0.3% from today until 2100. Even its “upper rate” forecast calls for a CAGR of around 0.6%.

        But let’s assume that the poorest nations of the world are successful in building up a basic energy infrastructure over the next 30-50 years, and thereby pull themselves out of abject poverty (and that this will largely be based on fossil-fuel based energy).

        And let’s assume that the large developing nations, such as India, China and Brazil, continue to grow their economies and the affluence of their populations.

        Even with these assumptions, it is downright silly to think that the CAGR of atmospheric CO2 will increase to 5 to 10 times the rate we have already seen.

        So the “model diagnostic” was stupid to start off with (as were the “projections” that came out of the model). A classic case of GIGO.

        And I think that is the “take home message” here.

        Max
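The growth-rate arithmetic in the comment above is easy to verify. A minimal sketch (the 315 and 390 ppm Mauna Loa endpoints and the 53-year span are the approximate values used in the comment, not an exact dataset):

```python
# Compound annual growth rate (CAGR) implied by a doubling over n years
def doubling_cagr(years):
    return 2 ** (1.0 / years) - 1

# Observed CAGR between two concentrations over a span of years
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1

print(f"2xCO2 in 20 yr -> {doubling_cagr(20):.2%}/yr")   # ~3.53%
print(f"2xCO2 in 40 yr -> {doubling_cagr(40):.2%}/yr")   # ~1.75%
print(f"Mauna Loa 1958-2011 (315 -> 390 ppm) -> {cagr(315, 390, 53):.2%}/yr")  # ~0.40%
```

The 3.5%, 1.75%, and roughly 0.4% figures quoted in the comment all check out under these assumptions.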

  79. Fred Moolten, I thought I knew these things, but now I don’t know what I know. Your posts just seem to be contradicting themselves.

    >Cloud feedbacks do affect climate sensitivity, and different models compute different sensitivities because of differences in the way they parametrize cloud behavior.

    So a model, that takes as input a variable for clouds, would be an example of a model where the climate sensitivity can be adjusted with a knob?

    • Mike N and Fred Moolten

      You have been discussing “cloud parameterization” in the climate models cited by IPCC. It is correct that all the models cited by IPCC agreed that net cloud feedback was strongly positive, although they disagreed on the values themselves.

      IPCC did concede, “cloud feedbacks remain the largest source of uncertainty”.

      Let’s leave aside for now the more recent physical CERES satellite observations of Spencer & Braswell, which showed that the net cloud feedback with warming over the tropics was strongly negative.

      Instead, let’s look at the Wyant et al. study using super-parameterization in the models in order to better quantify cloud response with warming.
      ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf

      Some pertinent excerpts:

      The climate sensitivity of an atmospheric GCM that uses a cloud-resolving model as a convective superparameterization is analyzed by comparing simulations with specified climatological sea surface temperature (SST) and with the SST increased by 2 K. The model has weaker climate sensitivity than most GCMs, but comparable climate sensitivity to recent aqua-planet simulations of a global cloud-resolving model. The weak sensitivity is primarily due to an increase in low cloud fraction and liquid water in tropical regions of moderate subsidence as well as substantial increases in high-latitude cloud fraction.

      The global annual mean changes in shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) and net cloud forcing for SP-CAM are -1.94 W m-2, 0.17 W m-2, and -1.77 W m-2, respectively.

      Shortwave cloud forcing becomes more negative at all latitudes, except for narrow bands near 40N and 40S, indicating more cloud cover and/or thicker clouds at most latitudes. The change in zonal-mean longwave cloud forcing is relatively small and negative in the tropics and stronger and positive poleward of 40N and 40S, where it partly offsets the shortwave cloud forcing change. Thus the net cloud forcing change is negative at most latitudes, and it is of comparable size in the tropics and the extra-tropics.

      This equates to a net global cloud feedback of –0.88 W m-2 K-1, compared to the IPCC average value in AR4 of +0.69 W m-2 K-1 (AR4 Ch.8, p.630).

      The IPCC models (AR4 Ch.8, p.633) calculate that this positive cloud feedback would increase the 2xCO2 climate sensitivity by 1.3°C on average (from 1.9°C without cloud feedback to 3.2°C with cloud feedback), so that correcting for a negative cloud feedback would put the 2xCO2 climate sensitivity below 1°C.

      These model results were confirmed fairly closely by the actual physical observations made by Spencer & Braswell, which also showed increase of low altitude cloud formation, and give a good correction to the values assumed by the IPCC model simulations without super-parameterization for clouds.

      Max

      • The quoted sections relate to cloud forcing, not feedback. Some models estimate cloud forcing as negative, and some as positive, but almost all estimate cloud feedback as positive.

      • Another point made earlier but worth reiterating is that the currently estimated range of climate sensitivity, from 2 to 4.5 C per CO2 doubling is based on many dozens of studies, and has typically been assigned a confidence interval of 90-95%. This tells us that some reports will lie outside that range, but it also indicates that selectively quoting individual studies, whether inside or outside the range, is a misleading approach to judging climate sensitivity. These studies can be judged on their merits, but the numerical value cited by an individual study is not an adequate substitute for taking into account the entire array of studies. Fortunately, many studies, including the “outliers”, emphasize the uncertainty that surrounds their estimates rather than suggesting that they constitute a definitive result. This is consistent with the breadth of the canonical 2 – 4.5 C range, and the need to acknowledge the possibility that a true value might lie outside it, albeit with low probability.

  80. >but the point is that assumptions needed to create a particular range of climate sensitivity outputs are not available to the models.

    This is the part that I disagree with, and perhaps it is a misunderstanding of terminology. It is the case that, after a single model run, the modellers know that run’s output for the future. So of course, with some values for a set of variables, they will know beforehand what to expect. It is also likely that they will examine a short-timeframe run to see the effects of certain variables, and will thus have an idea what the effects of changing some parameterization variables are. The idea that they have no idea what is going to come out of a model run – that the model is just an oracle to be consulted – doesn’t seem reasonable. I have seen models, perhaps simplified versions, that just took cloud feedback as an input variable. Such a model produced large variations in 100-year temperature response based on changes to values for oceans, aerosols, and clouds.
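A toy zero-dimensional energy-balance model illustrates the "feedback as an input knob" idea described above. This is an illustrative sketch, not any actual GCM: the 3.7 W/m² forcing for doubled CO2 and the 3.2 W/m²/K no-feedback (Planck) response are standard textbook values, and the 1.0 W/m²/K lumped "other feedbacks" term is an arbitrary assumption.

```python
# Linear-feedback energy balance: equilibrium dT = F / lam_net, where
# lam_net is the Planck response minus the sum of feedback terms.
# A larger positive cloud feedback lowers lam_net and raises the warming.
F_2XCO2 = 3.7      # W/m^2, canonical forcing for doubled CO2
LAM_PLANCK = 3.2   # W/m^2/K, no-feedback (Planck) response

def equilibrium_warming(cloud_feedback, other_feedbacks=1.0):
    """Equilibrium warming (K) for doubled CO2; feedbacks in W/m^2/K."""
    lam_net = LAM_PLANCK - other_feedbacks - cloud_feedback
    return F_2XCO2 / lam_net

for cf in (-0.5, 0.0, 0.69):
    print(f"cloud feedback {cf:+.2f} W/m^2/K -> dT = {equilibrium_warming(cf):.2f} K")
```

With the feedback parameter exposed as an argument, the equilibrium warming can be dialled from roughly 1.4 K to 2.5 K over this small range of cloud-feedback values, which is the point the comment is making about a tunable knob.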

  81. ferd berple

    “but the point is that assumptions needed to create a particular range of climate sensitivity outputs are not available to the models.”

    That is simply not the case. The quickest way to do this is to train the model using both historical data and a synthetic dataset for the future that matches whatever sensitivity you wish to achieve.

    Or, you can use the tried and true method of varying the weightings and forcings manually within acceptable limits until you achieve a reasonable hindcast and a forecast that matches your expectations for future sensitivity.

    Either way, what you have is an identical result. The model reflects what the model builder believes is an acceptable forecast for the future along with a good hindcast for historical data. If the model doesn’t, then the model builder will modify the model without publishing.

    Model building is cherry picking. You keep the model that gives the result you want to show, and discard the models that don’t.

    • This is an issue where I agree with Ferd. We have many different climate models and they give different values for the climate sensitivity. Every model builds on a set of assumptions and choices. The choices relate to the way basic physical knowledge is taken into account and to the details of the discretization. To a large extent they concern the way poorly understood processes like aerosol influences and cloud formation are included in the model. They may also concern those natural oscillations of the oceans that the models are incapable of determining internally.

      Many of the starting points are kept fixed through the model development process, while others are adjusted when reason arises. For every model many parameters and other details are undetermined at the outset, and they are adjusted during development to improve agreement with empirical observations. There is plentiful evidence that the same problems can be modeled equally successfully with quite different models, taking advantage of the flexibility offered by the adjustable details. While the results are equally consistent with observations, they may still differ, and they may, in particular, lead to very different projections for the future.

      The modelers gradually learn to foresee how the resulting model will change when they change the factors left fixed during the development process. Thus a modeler would, for example, be able to modify the model in a way that leads to a higher or lower climate sensitivity by changing some of the inputs kept fixed in further development. One such change could be a reduction in the strength of aerosol forcing, combined with added flexibility for the influence of natural oscillations handled as an external input (accepting that the model fails to capture them internally). This kind of change would very likely reduce the value of climate sensitivity in the model. Other modifications would have the opposite influence.

      The point of my comments is that subjective choices by the modelers do indeed influence the resulting climate sensitivity, and that an experienced modeler can foresee the direction and strength of this influence. Under such conditions it becomes essentially impossible to make objective judgments on the strength of the evidence that climate models provide on climate sensitivity. The models do provide evidence, but we just do not know how strong that evidence is.

      Similar issues concern paleoclimatic estimates. Paleoclimatology does not record clean experiments in which CO2 varied while other forcings remained unmodified, and its raw data are not direct measurements of global temperature but something quite different. Using paleoclimatic data involves models in many ways. Models are needed to estimate the relative importance of known influencing factors, to estimate how much uncertainty remains where our knowledge of past conditions is lacking, and to interpret the proxy data and relate it to global temperature and the other factors used in the analysis. All these models involve a lot of subjective judgment, and again these judgments are unavoidably influenced by the fact that the scientist knows how the choices made will influence the outcome.

      Scientists try (in most cases) to be as objective as they can, and they try to estimate the uncertainties of the process, but the problems are so complex that they involve unknown unknowns. For these reasons they cannot prove much to a really skeptical observer. And even the scientists themselves (should) know that their error estimates are contingent on the validity of some uncertain assumptions.

      This is deep in the nature of the problem. We cannot perform controlled experiments, but are limited to what happens to be available. Using such information will always be more subjective than laboratory experiments. Scientists will also always be biased by their expectations, since those expectations influence how the data is analyzed. It is also unavoidable that methods and models keep being improved as long as the results contradict expectations, while the search for further improvements often stops once a satisfactory agreement with expectations has been reached. This leads to a significant bias towards confirming earlier expectations rather than contradicting them.

      The problem is that we don’t know the strength of this bias. We should remember its existence, but we should not conclude that the evidence has no value. Finding the right balance is much more difficult than choosing either extreme. On Climate Etc. we have many who are willing to discredit all or most of the evidence, and only occasionally obvious overconfidence, but this leaves the right balance open.

      • The real question is the appropriate use of models – whether they should be understood as expository tools or wrongly used as reliable projections of future climate.

        ‘Using climate models in an experimental manner to improve our understanding of how the climate system works is a highly valuable research application. More often, however, climate models are used to predict the future state of the global climate system…

        Giorgi (2005) demonstrates why climate prediction generally should be considered an initial value problem. To add difficulty to a prediction is the fact that the predictability of the climate system is strongly affected by non-linearities. A system that responds linearly to forcings is highly predictable, i.e. doubling of the forcing results in a doubling of the response. Non-linear behaviours are much less predictable, and several factors increase the non-linearity of the climate system as a whole, thereby decreasing the predictability of climate systems in general. In addition to this, complex models involving nonlinearities and interactions tend to lose accuracy because their errors multiply.

        In summary, if the climate system is sufficiently non-linear, as observational evidence seems to indicate, then achieving skilful multidecadal climate predictions in response to different human and natural climate forcings is indeed a daunting challenge.’ http://www.climate4you.com/

        The models are themselves chaotic as should be abundantly evident.

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision…

        Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ http://www.pnas.org/content/104/21/8709.full

        Understanding the degree of irreducible imprecision would require systematic evaluation in a suite of ‘model families’ – this is not a part of current practice.

        Tim Palmer of the European Centre for Medium-Range Weather Forecasts opines that prediction is indeed not possible other than as a probability density function.

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

        It is scientifically naive not to conclude that the value of the models for ‘projecting’ climate into the sometimes far future is negligible at this juncture.
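Palmer’s probability-density framing quoted above can be sketched with a toy ensemble: run a model many times from slightly perturbed initial states and report the fraction of members falling in an interval, rather than a single trajectory. Here the chaotic logistic map stands in for the forecast model purely for illustration; every number is an arbitrary assumption.

```python
import random

def logistic(x, r=3.9, steps=50):
    # Chaotic logistic map as a stand-in for a nonlinear forecast model.
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(0)
# Ensemble of slightly perturbed initial conditions around x0 = 0.3
ensemble = [logistic(0.3 + random.uniform(-1e-4, 1e-4)) for _ in range(2000)]

# rho(X, t) dV ~ fraction of ensemble members with X in the interval [a, b)
a, b = 0.5, 0.7
p = sum(a <= x < b for x in ensemble) / len(ensemble)
print(f"P({a} <= X < {b}) ~ {p:.2f}")
```

The point of the framing is that the deliverable is the distribution (or probabilities read off it), not any single member of the ensemble.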

    • ferd berple – ” The quickest way to do this is to train the model using both historical data and a synthetic dataset for the future that matches whatever sensitivity you wish to achieve.”

      Because claims of this type appear to be common in blogosphere discussions of model design, I’m reinforced in my belief that a guest post by a modeler would be welcome to dispel misconceptions on this issue. As Judith Curry has stated above, climate sensitivity is not assumed in models, nor can models simply be “tweaked” to yield a desired sensitivity value. Rather, changing model parameters within constraints imposed by observational data will generate outcomes that the modeler can’t predict, and this is not a method used for the purpose of achieving a specified climate sensitivity. More on this can be found at Climate Modeling. Also, see Climate Models on misconceptions about model tuning, which is done to conform to observed basic climate behavior but not to match observed trends such as temperature changes over intervals of changing CO2.

      This is relevant to climate sensitivity as an emergent rather than specified property of models. As stated in the second link above, “there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is tuned often to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment.” (“Perturbation” here refers to an imposed forcing, such as a rising CO2 concentration).

      As mentioned in earlier comments, model inputs outside the realm of tuning parameters have been re-evaluated and adjusted in attempts to better match observations, including changes in estimated values for aerosol forcing in models with a fixed climate sensitivity.

      It seems to me that disagreements on any of these points would be most usefully discussed in conjunction with comments or a post from someone who designs and tests climate models.

  82. I have a cut and paste of Prof. McWilliams succinct description of model formulation and validation and the problems of irreducible imprecision in model outcomes. James McWilliams says it better than I do and certainly has the CV to back it up. Relevant to one aspect of this discussion is the bases for judging the plausibility of models in paragraph 3. These include ‘a posteriori solution behaviour’.

    From Fred’s disreputable realclimate source – one ‘of the most important features of complex systems is that most of their interesting behaviour is emergent. It’s often found that the large scale behaviour is not a priori predictable from the small scale interactions that make up the system. So it is with climate models. If a change is made to the cloud parameterisation, it is difficult to tell ahead of time what impact that will have on, for instance, the climate sensitivity. This is because the number of possible feedback pathways (both positive and negative) is literally uncountable. You just have to put it in, let the physics work itself out and see what the effect is.’ But as we don’t know what the range of emergent behaviour is from the range of feasible initial and boundary conditions or feasible processes and couplings – the ‘plausibility’ of the solution is a criterion for judging the ‘plausibility’ of the model. Thus in a very real sense – the emergent property of sensitivity is selected by the modellers.

    ‘In a scientific problem as potentially complicated as climate, there is another modeling practice that is increasingly important: AOS models are open-ended in their scope for including and dynamically coupling different physical, chemical, biological, and even societal processes.

    The rationales for coupling are to investigate potentially significant feedbacks (e.g., radiative properties for different airborne crystalline ice structures, changes in air and water inertia due to suspended dust and sediments, and water and other material exchanges with plants and biome evolution) and to achieve ever fuller depictions of Earth’s fluid envelope. Besides adding to the overall complexity of AOS models, coupling increases the number of processes with a nonfundamental representation (i.e., similar to a parameterization), because, for the most part, the governing equations are not well determined for the model components other than fluid dynamics. When adding a new coupling link, there is no a priori guarantee of seeing only modest consequences in the AOS solution behavior.

    AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely. Results that are clearly discrepant with measurements or between different models provide a valid basis for model rejection or modification, but moderate levels of mismatch or misfit usually cannot disqualify a model. Often, a particular misfit can be tuned away by adjusting some model parameter, but this should not be viewed as certification of model correctness.

    AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties (Fig. 1). In the context of weather prediction, the generic property of sensitive dependence is well understood (4, 5). For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability. Model differences are another source of sensitive dependence. Thus, a deterministic weather forecast cannot be accurate after a period of a few weeks, and the time interval for skillful modern forecasts is only somewhat shorter than the estimate for this theoretical limit. In the context of equilibrium climate dynamics, there is another generic property that is also relevant for AOS, namely structural instability (6). Small changes in model formulation, either its equation set or parameter values, induce significant differences in the long-time distribution functions for the dependent variables (i.e., the phase-space attractor). The character of the changes can be either metrical (e.g., different means or variances) or topological (different attractor shapes). Structural instability is the norm for broad classes of chaotic dynamical systems that can be so assessed (e.g., see ref. 7). Obviously, among the options for discrete algorithms and parameterization schemes, and perhaps especially for coupling to nonfluid processes, there are many ways that AOS model equation sets can and will change and hence will be vulnerable to structurally unstable behavior.’

    I have to say that McWilliams seemed to me to be speaking another language at first but it does come clear eventually.

    http://www.pnas.org/content/104/21/8709.full
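The “sensitive dependence” property in the McWilliams excerpt can be demonstrated with the classic Lorenz-63 system (a minimal sketch using crude forward-Euler integration, not a climate model): two initial states differing by one part in a billion separate by many orders of magnitude within a few tens of model time units.

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbed by one part in a billion

for _ in range(30000):  # 30 model time units at dt = 0.001
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"separation after 30 time units: {separation:.4f}")
# The tiny initial difference grows roughly exponentially until it
# saturates at the scale of the attractor itself.
```

Structural instability is the analogous property for small changes in the parameters (sigma, rho, beta) rather than the initial state, and it is the one McWilliams argues matters for climate statistics.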

    • Yeah, but did you see this gem?

      http://www.pnas.org/content/107/2/581.full.pdf

      I’m still weeping for joy at the audacity of this charming piece.

      • Bart – One of the sites that references the PNAS paper by Majda et al is Isaac Held’s Blog. His latest post, number 9 – “Summer is Warmer Than Winter” – cites that paper along with McWilliams and an arxiv article by Ghil et al, all dealing with model uncertainty and/or approaches to mitigating it, including the fluctuation dissipation theorem and stochastic parametrization.

        I find Held’s blog edifying. He offers provocative thoughts on a variety of climate behavior issues, and is very well informed, modest, and respectful of the views of others. Even in areas where my knowledge is superficial, including some of the more sophisticated mathematical elements of model design, I would feel comfortable commenting or asking a question, because the exchanges are non-adversarial. Held moderates the blog with this atmosphere in mind, and as a result there are no long arguments between participants but rather thoughtful discussions. A downside, I suppose, is that the exchanges are limited mainly to those in which Held himself participates, and so content grows slowly.

        The earlier eight posts are worth visiting as well as the latest one.

      • So they propose to solve the problem by linearising climate and circumventing the complex models. The FDT seems to be calculated as a linear ‘response operator’, and I doubt that there is such a thing, because climate itself is also chaotic. It assumes – as Isaac Held does – that climate itself is linear. You bother me with such nonsense.

        Do they have a solution to the problem? Not yet, at any rate. You are suffering from premature excitation. But Bart – I wish you would stop proving Voltaire right.

        ‘Recently, it has been suggested that there is irreducible imprecision in such climate models that manifests itself as structural instability in climate statistics and which can significantly hamper the skill of computer models for climate change. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on the Fluctuation Dissipation Theorem (FDT). There are important practical and computational advantages for climate change science when a skillful FDT algorithm is established. The FDT response operator can be utilized directly for multiple climate change scenarios, multiple changes in forcing, and other parameters, such as damping and inverse modelling directly without the need of running the complex climate model in each individual case.’

        ‘Recently, it has been suggested (16) that there is irreducible imprecision in comprehensive AOS models that manifests itself as structural instability in the climate statistics and that can significantly hamper the skill of these computer models for climate change projections. A systematic approach to deal with this irreducible imprecision is advocated through algorithms based on FDT.’

      • The last sentence before the conclusions tells what the paper is about:

        ‘This is a vivid demonstration of the fact that irreducible imprecision does not necessarily affect the low-frequency response of the slow-climate variables, while clearly obstructing the computation of the response for the full set of variables.’

        McWilliams argues that in many models we have irreducible imprecision. Majda et al demonstrate that in some models this irreducible imprecision is not serious for the low-frequency response of slow-climate variables. Both arguments are valid and relevant. In some models the imprecision is significant also for the low-frequency response; in others it’s not.

        Working with the models and comparing them with all types of empirical data gradually yields more knowledge of where the real climate lies with respect to irreducible unpredictability, but the learning process is slow and reaching objective, unbiased conclusions is extremely difficult.

        Studies of the type of Majda et al are helpful, but the quote presented above is as vague as it is for a good (almost irreducible) reason.

      • Pekka, Fred and Bart,

        This latter discussion is a distraction from the problem of models. The Majda et al paper was a response to the McWilliams paper on the irreducible imprecision of models arising from sensitive dependence and structural instability. The issue was the reliability of long-term estimates of sensitivity and, in particular, judging the plausibility of current generation models using, as one criterion, the plausibility of the output. That is, the subjective plausibility of sensitivity as a result of the simulation is one basis for judging the plausibility of the model – a circular argument never made clear by most and strenuously argued against by Fred. In his usual way this was labelled as an urban myth of disbelievers, incorrect, a misunderstanding to be made clear by an ‘expert’. So I quoted an expert at length.
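
The “sensitive dependence” invoked above is easy to demonstrate in miniature. Below is a minimal sketch (not taken from any of the papers under discussion) using the textbook Lorenz-63 system with an illustrative forward-Euler step; every parameter value is the standard one, and the only point it makes is that a perturbation of one part in a billion destroys pointwise predictability within a few dozen time units:

```python
import numpy as np

# Lorenz-63: the classic illustration of sensitive dependence on initial
# conditions. Standard parameter values; simple forward-Euler integration.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # perturb by one part in a billion
for _ in range(6000):               # integrate 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = np.linalg.norm(a - b)
print(separation)  # the two trajectories have diverged to the attractor scale
```

The statistics of the two runs remain similar even though the individual trajectories are useless as point forecasts – which is exactly the distinction the irreducible-imprecision argument turns on.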

        The Fluctuation Dissipation Theorem approach assumes that climate evolves in a linear way in the long-term response even if the models show complexity in detail. It involves substituting a linear response function for the complex evolution of physical properties in models. Is this a worthwhile response to the problems of models described by McWilliams? This is just one preliminary paper – and certainly not used in any way in the current generation of AOS. Another approach has been suggested by Tim Palmer.
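
For readers wondering what an FDT “response operator” looks like in practice, here is a hedged toy sketch – not Majda et al’s actual algorithm. For a linear stochastic (Ornstein–Uhlenbeck) process the quasi-Gaussian FDT reduces to reading the response off the unforced autocorrelation, so the mean shift under a new forcing can be predicted without rerunning the perturbed model. All parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a, s, dt, n = 0.5, 1.0, 0.01, 500_000  # true decay rate a, noise level s

# Simulate an unperturbed Ornstein-Uhlenbeck process (stand-in "climate model"):
# dx = -a*x*dt + s*dW
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - a * x[i] * dt + s * noise[i]

# Quasi-Gaussian FDT: the response operator is the normalized autocorrelation
# of the unforced fluctuations, R(t) = C(t)/C(0) = exp(-a*t) for this process.
lag = 100                                  # 1 time unit (100 steps of dt)
c0 = np.mean(x * x)
c_lag = np.mean(x[:-lag] * x[lag:])
a_est = -np.log(c_lag / c0) / (lag * dt)   # decay rate recovered from C(t)

# Predicted mean shift under a constant forcing f, with no perturbed rerun:
f = 0.2
predicted = f / a_est
print(predicted)  # close to the exact answer f/a = 0.4
```

The appeal, as the quoted abstract says, is that one estimated operator serves many forcing scenarios; the doubt raised in the thread is whether anything like this linear reduction is faithful for the real, chaotic climate.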

        Isaac Held assumes likewise, as does the IPCC, that in the long term climate evolves (quasi?) linearly. Climate is assumed to be the statistics of weather and if you add CO2 then over a long enough period the planet warms. Observations suggest that this is an incomplete description.

        Irreducible imprecision exists for all of the models that attempt to be physically realistic. Majda et al suggest replacing the elements of the AOS that result in solution instability with an estimated linear response. So I suggest that you, Pekka, have misinterpreted the method proposed for solving the problem of irreducible imprecision.

        My opening statement was this. ‘The real question is of the appropriate use of models – whether these should be understood to be expository tools or used wrongly as reliable projections of future climate.’

        Bart introduced this second paper as a deliberate distraction from the McWilliams analysis. This is a common but unfortunate ploy from climate fanatics. The McWilliams meta-analysis stands and should be understood as the reality of AOS at this time.

      • Rob,
        What I am protesting are too certain statements from all sides.

        My view is that the accuracy of the models is very difficult to estimate. No formal methods can do that. One reason is that it’s not possible to disprove the significance of irreducible imprecision that grows without limit with time. (Growing without limit doesn’t mean that very large variations would not be less likely, but it means that their distribution has fat enough tails to exclude limits.) As no formal methods can be used, we are stuck with subjective judgments of plausibility. Meaningful subjective judgments require great expertise, but the best expertise is seriously influenced by common biasing influences.

        As far as I can see, you are likely to accept what I wrote above. But we seem to differ when giving weight to claims and arguments supporting the idea that models are bound to fail to an essential degree. I’m not at all convinced by those arguments. These arguments are as subjective and prone to bias as the opposing arguments. As an example, the paper by McWilliams presents certain good points but does not prove much. We just do not know whether the climate can ultimately be modeled well enough for practical purposes. There are arguments in both directions, but no proofs.

        The same applies to the practical value of the present set of models, when their inaccuracy is accepted and their possible use limited to allow for this inaccuracy.

      • The point is that we do not know to what degree the models fail – therefore they fail to an essential degree in forecasting future climate. I have no problem – nor does anyone else – with using models to explore physics, but they are misused and misunderstood by many.

        The McWilliams paper is a succinct statement of the bloody obvious. I’m not in the least impressed by the ponderous gravitas you pretend – your subjective leanings even less.

      • To make informed judgments the arguments of each side must be understood and their merits compared. There are no general principles that can provide the answer; the detailed evidence determines the outcome.

        My knowledge is not sufficient for making strong conclusions, but it should by now be clear that I have some severe doubts about the estimates of the reliability of the climate models, while I give almost no weight to formal arguments that purport to prove that modeling is impossible or that we cannot make any useful projections about the future climate. These arguments stretch the significance of some formally true theoretical observations far too much.

      • Chief

        You wound me.

        I introduced the paper for exactly the reasons I expressed. It is beautiful, audacious and charming.

        That it’s also useful and relevant, a mere accident.

        I was merely glad to have stumbled upon it through the link you provided, and was wishing to thank you.

      • “It assumes – as Isaac Held does – that climate itself is linear.”

        Robert – I think you have probably misinterpreted Held’s perspective, which includes accommodation for climate non-linearity. However, my suggestion is that you pose this question yourself on his blog. As far as I can tell, he would not refuse to post it, and would be glad to give his response (you can inform me if that proves untrue, but you sometimes need to wait a few days for him to revisit the site).

        I think we would all learn something from his response to your assertion.

        Regarding the FDT, which relates energy dissipation from natural fluctuations around a thermal equilibrium to energy dissipation from a perturbed state – as Pekka (below) points out, its value is most likely to emerge for slow, long-interval responses rather than for more short-term fluctuations. I didn’t see anything in the Majda et al paper that requires the climate to be linear (but again, check with Held on this).

      • Pekka’s comment was above this, not below it.

      • Fred – the fact that you misunderstood the application of Fluctuation Dissipation Theorem as being an estimation of a linear climate response does not surprise me.

        Distractions, playing at semantics, red herrings – all part of the game.

      • Robert – Your comment is puzzling because it doesn’t describe anything I said, but I’ll let others judge that.

        My earlier point was that you appear to have misinterpreted or misrepresented Isaac Held’s interpretation of climate response as well as Majda’s. I’m not sure you’ve responded to that. If you read what I wrote, you’ll see that my emphasis was particularly focused on Held. You stated that he “assumes that climate itself is linear“. Do you continue to defend that claim? If so, I believe you continue to misunderstand Held’s and Majda’s perspective on climate responses – neither one claims that climate itself is linear.

        Held’s perspective is one of interest to me, because I largely share it, although I haven’t approached it with his skill at the level of mathematical precision. Essentially, the point is that climate itself exhibits clear non-linear and chaotic behaviors, but their impact depends on timescales of interest. Over short timescales, they dominate, but when one wants to evaluate a long term forced trend (e.g., a climate response operating over many decades or a century), the non-linear fluctuations become less important. Held interprets the response of the climate to the flux imbalance imposed by anthropogenic greenhouse gas forcing to be an essentially linear function of the imbalance, or close to it, and concludes that over the longer intervals, this linearity is likely to emerge despite the non-linearity of the shorter term internal climate fluctuations. (Note that the temperature response itself is not linear, because its rate declines as flux imbalance declines – see Held’s blog post 5 for more on this).
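
Held’s point – that the forced response can emerge as approximately linear in the forcing over long timescales even when short-term variability is chaotic – can be illustrated with a zero-dimensional energy-balance model. This is a standard toy, not Held’s own code, and the numbers are illustrative rather than tuned to the real climate:

```python
# Toy zero-dimensional energy balance model: C * dT/dt = F - lam * T.
# All numbers are illustrative, not calibrated to the real climate.
C = 8.0      # effective heat capacity (W yr m^-2 K^-1)
lam = 1.2    # climate feedback parameter (W m^-2 K^-1)
dt = 0.1     # time step in years
years = 300

def response(F):
    """Temperature after `years` under a constant forcing F (W m^-2)."""
    T = 0.0
    for _ in range(int(years / dt)):
        T += dt * (F - lam * T) / C   # simple forward-Euler integration
    return T

T1 = response(3.7)   # roughly a CO2-doubling forcing
T2 = response(7.4)   # doubled forcing
print(T1, T2 / T1)   # long-run T approaches F/lam; doubling F doubles T
```

In this caricature the equilibrium response is exactly F/λ, so doubling the forcing doubles the warming; the thread’s dispute is over whether the real system’s low-frequency response behaves anything like this linear limit.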

        As you can see, I don’t think, despite your claim, that Held “assumes” that “climate itself” is linear. However, speculation about what someone assumes is unnecessary here, because you can test your claim by asking Held himself what he assumes about climate itself. If this was indeed his assumption, he will corroborate your claim. If you misinterpreted him, he will tell you, politely, that you were wrong.

        Please understand that in questioning the accuracy of your comments, I’m not engaged in a personal attack against you. For that reason, your response seems unnecessarily defensive and adversarial, but perhaps I’m being unfair.

  83. Fred – your words are a mere semantic misdirection masquerading as puzzled objectivity. You have changed the grounds of the discussion from sensitivity to linearity – albeit on dubious ground on both counts.

    Majda et al develop a linear response function to supplement the chaos of the models. Indeed this is essential to the application of the fluctuation dissipation theorem – although it may be drawing a long bow in respect to climate.

    Attempting to change the scope of the discussion is the fallacy of shifting ground – which is something that I see you doing a lot although it may be unconscious. Bart indulged in this cynically – which is why I find him cynically amusing but you simply frustrating.

    The argument you made in repeated posts is that sensitivity was an emergent property of models. McWilliams states that one basis for judging the plausibility of models is the plausibility of the result. Clearly there is a subjective choice of plausible output from non-unique solutions of plausibly formulated models subject to sensitive dependence and structural instability. You were asking for an expert to confirm your assertion of the wrongness of everyone else. I quoted extensively from an expert who categorically showed in a meta-analysis of AOS that you are wrong.

    Bart then changes the subject – and you enthusiastically leap on the bandwagon. I’m not sure about Pekka – he may have just wandered into the firing line.

    • My impression is that you don’t lose gracefully, Robert – perhaps because winning is more important to you than arriving at an accurate understanding. As always, readers can judge these exchanges to come to their own conclusions. I’ll be happy to discuss specific issues, but preferably in a non-accusatory environment.

      • Well Fred – I get the impression that you are always saying how everyone else is wrong and that you never admit to being wrong in even the slightest way.

        You can explain to James McWilliams if you wish how ‘a posteriori solution behavior’ is not a factor in judging the plausibility of AOS formulation.

        ‘The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’

        But I think your argumentative gambits – and devotion to a viewpoint – should be obvious to anyone with more than a fleeting acquaintance. Let’s leave it to the readers to judge.

      • Fred,
        A serious look in the mirror after studying the meaning of ‘projection’ is strongly suggested.

  84. … the steep rise and slow fall of fluctuating asymmetry [in sexual selection] is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift – the paradigm has become entrenched – so that the most notable results are now those that disprove the theory.

    http://nyr.kr/l9Lype

    Hope the same applies to AGW.

  85. The Mike Hulme interview was interesting and rather moderate, reflecting uncertainty in climate knowledge and the importance of culture. (If you worked at the “Tyndall Centre”, where would your bedrock beliefs lie in respect of CO2 and climate change?)
    http://www.abc.net.au/rural/telegraph/content/2011/s3206387.htm

  86. Maybe apropos here: RealClimate has reviewed a new book by Haydn Washington that attributes “denialism” (which might or might not include skepticism depending on how it’s defined) to people who haven’t evolved beyond the autonomic reptilian brain stem and have no higher brain intellectual capabilities. Since it’s then genetic, are we no longer accountable? And since we haven’t evolved, does this add support to Roy Spencer’s thesis?

    • PS I moved (in my mind at least) the above to the Monbiot thread.

      • What percentage of human beings have a lizard brain? I think it’s universal.

      • All humans have what is called the lizard brain (stem). But they’re also supposed to have other brain parts that reptiles don’t have.

    • Rod B,
      The move to dehumanize the opposition is the last refuge of losers.

  87. Rod B –
    Not sure where they got the autonomic reptilian brain stem. I don’t remember that in the family tree. But if it is there, then ALL present branches of the tree would have evolved to a more or less equal extent. What I’m getting at is that if “denialists” haven’t evolved beyond that point, then neither have “Believers”. I believe it’s an unavoidable consequence of being on the same branch of the tree. :-)

  88. ELEMENTARY ARGUMENT AGAINST AGW.

    Here is the global mean temperature anomaly (GMTA) for the last 130 years.
    http://bit.ly/iUqG8I

    Let us define two periods:
    Period 1=> 1880 to 1940
    Period 2=> 1940 to 2000

    From the above graph, the approximate increase in GMTA in the two periods:
    Period 1=> 0.4 deg C
    Period 2=> 0.4 deg C.

    From the above result, the rates of change of GMTA for the 60-year periods before and after 1940 are nearly identical.

    Since the forcing is taken to be proportional to the rate of change of GMTA (Forcing = Constant * Rate of Change of GMTA), the above result means that the forcing is also constant, which contradicts AGW. That is, the increase in human emissions of CO2 after the 1940s has not affected the global mean temperature anomaly.
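
For what it’s worth, the arithmetic this argument rests on is just two period-averaged rates. The 0.4 deg C figures are the comment’s own readings of the linked graph, not verified here, and the inference from equal rates to constant forcing is the comment’s premise:

```python
# Per-year warming rates implied by the figures quoted in the comment.
rise_1 = 0.4  # deg C increase, 1880-1940 (as read off the graph)
rise_2 = 0.4  # deg C increase, 1940-2000 (as read off the graph)

rate_1 = rise_1 / 60  # deg C per year, period 1
rate_2 = rise_2 / 60  # deg C per year, period 2
print(rate_1, rate_2)  # ~0.0067 deg C/yr in both periods
```

Note that equal endpoint-to-endpoint rates say nothing about the shape of the warming within each period, which is where most responses to this argument begin.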