Workshop on the Roles of Climate Models

by Judith Curry

I am in the Netherlands, attending a Workshop on The Roles of Climate Models: Epistemic, Ethical, and Socio-political Perspectives.

I am particularly excited to meet many researchers from the philosophy of science community whom I have referenced in my recent papers (especially Uncertainty Monster).

From the meeting website:

Climate models influence our understanding of climate change, its causes and its future. They are a central technology of climate science. But they are also sources of information for far-reaching policy decisions, sites of multidisciplinary integration, products of distributed epistemic labor and much more. As a consequence, climate models are of significant interest to scholars in philosophy, history of science, and science and technology studies. This workshop will bring together well-regarded scholars in these fields along with established climate scientists to explore the epistemic, ethical and socio-political roles that climate models play, their interactions and implications.

The program:

Thursday, October 31
09:00 – 09:15h Welcome and Coffee
09:15 – 09:30h Introduction to the workshop
09:30 – 10:20h Judith Curry: “A 21st century perspective on climate models from a climate scientist”
Comments by Wendy Parker
10:20 – 10:40h Coffee/tea break
10:40 – 11:30h Gregor Betz: “Are Climate Models Credible Worlds? Prospects and Limitations of Possibilistic Climate Prediction”
Comments by Rafaela Hillerbrand
11:30 – 12:30h Group discussion session 1: The epistemic roles of climate models
[Facilitator: Lenny Smith]
12:30 – 14:00h Lunch break
14:00 – 14:50h Kristen Intemann: “Values in Climate Modeling: The Good, The Bad, and The Ugly”
Comments by Behnam Taebi
14:50 – 15:40h Steven Yearley: “Models and muggles, representations of ‘models’ in professional, policy and public discourses”
Comments by Matthias Heymann
15:40 – 16:00h Coffee/tea break
16:00 – 17:00h Group discussion session 2: The centrality of climate models to climate science [Facilitators: Joel Katzav and Wendy Parker]

Friday, November 1
09:00 – 09:30h Welcome and Coffee
09:30 – 10:20h Suraje Dessai: “The role of climate models in informing climate adaptation decisions”
Comments by Joyashree Roy
10:20 – 10:40h Coffee/tea break
10:40 – 11:30h Erica Thompson: “Assessing the evidence: How decision-makers could gain useful insight from climate model results”
Comments by David Sexton
11:30 – 12:30h Group discussion session 3: The socio-political roles of climate models and the roles of values in climate models [Facilitator: Wendy Parker]
12:30 – 14:00h Lunch break
14:00 – 14:50h Gavin Schmidt: Title TBA
Comments by Joel Katzav
14:50 – 15:10h Coffee/tea break
15:10 – 16:30h Group discussion session 4: The freelance climate science movement, climate science blogs and climate models [Facilitator: Joel Katzav]

The abstract for my talk:

A 21st century perspective on climate models from a climate scientist

Judith Curry

Georgia Institute of Technology

Summary

Over the past two decades, the climate modeling community has increasingly interlinked the dual objectives of advancing scientific understanding of the climate system and providing actionable projections for decision makers. Arguments are provided that climate models are inadequate for both of these objectives and that the current path of climate model development is unlikely to significantly improve this situation. It is argued that the power and authority accumulating around GCMs, and the resources being expended on them, could, if this path is continued, be detrimental to both scientific progress and policy applications. To make progress on understanding the climate system and providing useful information for decision makers, I propose that two distinct strategies are needed, both of which de-emphasize the current strategy of building a comprehensive Earth System Model based on a general circulation model. Elements of the proposed strategies include:

I.  Understanding the climate system: increased plurality in numerical climate model structural form; increased focus on lower order models and creativity in experimental design using models that are less computationally extensive; alternative approaches from network theory, information theory, dynamical systems; engagement of expertise from outside the traditional climate modeling community.

II.  Supporting decision making:  improve understanding of historical regional climate dynamics and black swan events; creative, regional approaches to scenario development; development of regional extended peer communities that support assembly and evaluation of climate and relevant land use, population and alternative policy scenarios.

A completely general, all-encompassing climate model that is accepted by all scientists and is fit for all purposes seems to be an idealistic fantasy. Given the compromises made for multiple purposes, current and planned Earth System climate models may not be an optimal solution for any of these purposes.

My ppt presentation can be downloaded here [curry presentation].

Not sure if the papers and ppt presentations will be publicly available, but I hope to have material for another post on this Workshop.

326 responses to "Workshop on the Roles of Climate Models"

  1. Thanks for thinking of us, Judith, and best wishes for an interesting and enjoyable conference. I fail to see any names that I recognise from the climate orthodoxy giving any presentations, but this might not be factual.

    • Seconding Peter’s best wishes for a successful conference,
      Judith.
      (Say, liked comment in previous thread on weakness of models
      by H H Doiron PhD @ 10.21am 30/10.)
      Beth the serf.

      • Antonio (AKA "Un físico")

        I am not seconding Peter and Beth. Anyone can understand that models have contributed to the advancement of science. But models are validated by experiments.
        In climate change, though, the experimental measurement requires centuries (as any physicist, climatologist or meteorologist could agree); see my:
        https://docs.google.com/file/d/0B4r_7eooq1u2VHpYemRBV3FQRjA
        And (I do not know why) this experimental measurement issue is hidden from the climate change debate (whether by skeptics, academics, the consensus or whoever). Is there a failure in scientific ethics?

    • Dr Judith wrote:
      engagement of expertise from outside the traditional climate modeling community.

      Dr. Doiron comes to mind as a first choice.

      I simulated landing on the Moon in 1965, with a Model he wrote.

      He is the leader of our Climate Study group, which is made up of people with diverse backgrounds, including Climate Science. Our study group was formed in 2012, but some of us have been studying climate for some years.

      • Dust off Ewing and Donn Climate Theory. Get the Albedo right.

        Advance Ice Extent after it snows more every time oceans get warm and wet.

        Retreat Ice Extent after oceans get cold and frozen and it don’t snow enough to replace ice that melts every summer.

        Look at data and understand that temperature bounds got tighter as Polar Ice Cycles developed.

      • Tom Wysmuller comes to mind as another first choice.
        He understands Ewing and Donn Theory and he makes skillful long range forecasts.

    • Gavin is listed for the orthodoxy, as you call it.

      • True, but I do not put him in the same class as Mann and Hansen. He is a warmist but IMO engages far more effectively with sceptics than most others. His research interests are in natural variability and in the correct incorporation of paleo data with modern data series, which Mann has failed to carry out properly.

  2. No mention of regional models?

  3. Two points:

    This workshop will bring together well-regarded scholars in these fields along with established climate scientists to explore the …, ethical … roles that climate models play

    Good. I trust you will get to the heart of what is important to most people and explore the ethics of advocating for policies that will cost the world dearly and deliver next to no change in the climate – i.e. no benefits. I am, of course, referring to policies such as GHG mitigation, carbon pricing and renewable energy schemes.

  4. JC,

    In your summary you mentioned two strategies. I think you have missed what is arguably the most important one: what is the damage function, or what is the net cost/benefit per degree of GW, or does it really matter if the planet warms?

    Richard Tol’s 2013 paper http://www.sciencedirect.com/science/article/pii/S0165188913000092 shows there are very few studies of the damage that can be used in the economic analyses, and the results are all over the place. The uncertainty is enormous. I’d expect the studies are probably biased towards worst case.

    Why isn’t fixing this gaping deficiency one of the strategies you advocate?

    • Actually, I’m looking at things on a more basic level, and with reference to climate models themselves (my talk is in the epistemic category). Others at the workshop may bring up this topic.

      • You said “increased focus on…models that are less computationally extensive.” You’ve said things in this vein before. I’m reminded of how, in the mid-70s, evidence began to accumulate that very simple macroeconomic forecasting models outperformed the big “100s of equations models” that had been growing up with increasing computing power. Sometimes less is more, and I hope to see more posts about that.

      • @NW – It all depends upon what you are attempting to accomplish. But yes, it is fascinating that simple models sometimes work best. That is due to so many of the inputs cancelling each other out.

        Economics is not like Climate science in many regards. But perhaps if they step back and decide what it is they are trying to accomplish with an unbiased eye, they can actually come up with better models.

      • It really looks as if the epistemic uncertainty in something like a globally averaged temperature time series is swamped by the aleatory uncertainty in natural fluctuations.

        With the CSALT model, the aleatory uncertainty can be peeled away revealing occasional glimpses of epistemic or systemic uncertainty. For example, what caused the warming spike starting in 1943 and lasting for a year or two?

        See the latest http://ContextEarth.com blog post.
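
        For concreteness, a minimal sketch of this kind of decomposition (illustrative only: the series and regressors below are random placeholders, not the actual CSALT data or code):

          import numpy as np

          # Stand-ins for a monthly global temperature anomaly series and the
          # CSALT-style regressors (SOI, volcanic aerosols, LOD, TSI, log CO2)
          # on a common time axis; real inputs would be loaded from data files.
          n = 1600
          rng = np.random.default_rng(0)
          temp = rng.normal(size=n)
          X = np.column_stack([
              rng.normal(size=n),                 # SOI (used with a ~6-month lead)
              rng.normal(size=n),                 # volcanic aerosols
              rng.normal(size=n),                 # LOD / multidecadal term
              rng.normal(size=n),                 # TSI
              np.log(280 + 0.3 * np.arange(n)),   # log CO2 stand-in
              np.ones(n),                         # intercept
          ])

          coefs, *_ = np.linalg.lstsq(X, temp, rcond=None)
          residual = temp - X @ coefs   # what remains once the known factors are removed
          print("residual std:", residual.std())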

      • Economist Harry Johnson argued that one reason that the Friedmanite style of macro at U. Chicago started to gain ground in the profession in the 1970s was that it allowed junior faculty to work with simpler, more tractable models. They could come to conclusions and publish those instead of being relegated to knob-adjusters on equations 210-212 of some giant simultaneous-equations macro model. Arnold Zellner, a Bayesian econometrician at Chicago, was one of the strongest proponents of simple regression models.

    • Latimer Alder

      You mean, shock horror, that there might be some good things to come from a warmer world? That it’s not all bad news?

      Wash your heretical mouth out with soap!

    • “I’d expect…probably…”

      And you’re referencing this as evidence of what exactly?

      • me,

        OK, leave out that sentence, if it is distracting from the main point. It was intended as an aside, and I accept it can distract from the main point. My bad.

      • Easy, me, evidence of overweening fear and ignorance.
        =======

      • And you accept that “The uncertainty is enormous” can apply in both directions?

      • Nope, the last 2 degrees C of warming have been demonstrably vastly beneficial, and so will the next 2 degrees.

        If we get them. To demonstrate and all, you know. I suspect warming is just a vain hope, though.
        ==========

      • And you’re referencing this as evidence of what exactly?

        You’re asking if Peter has a consistent approach to uncertainty? You’re asking if he’s a skeptic, not a “skeptic?”

        Here, allow me to quote from him above:

        “…policies that will cost the world dearly and deliver next to no change in the climate – i.e. no benefits.”

        I’d say that should answer your question, eh?

      • Joshua

        Wouldn't you agree that the comment "…policies that will cost the world dearly and deliver next to no change in the climate – i.e. no benefits" is generally correct about climate mitigation actions?

        How would you or anyone be able to demonstrate the specific benefits of those actions?

      • Hey Rob –

        Wouldn't you agree that the comment "…policies that will cost the world dearly and deliver next to no change in the climate – i.e. no benefits" is generally correct about climate mitigation actions?

        Nope. I think that there is too much uncertainty to say for sure, and so what is needed is a careful approach to cost/benefit analysis (including external costs) in the face of uncertainty, with a consideration of perhaps improbable but significantly dramatic impact.

        I think that "skeptics" have it right when they talk about the importance of quantifying uncertainty. Unfortunately, many of them adopt a broadly selective approach to uncertainty.

        How would you or anyone be able to demonstrate the specific benefits of those actions?

        IMO, the best that can be done is a careful approach to cost/benefit analysis (including external costs) in the face of uncertainty, with a consideration of perhaps improbable but significantly dramatic impact.

      • Steven Mosher

        don't forget the external benefits.

        one of the main contributors to the decrease in infant mortality is electricity, which, when you were born, probably came from dirty coal.

        such that your being alive is due in part to the existence of electricity
        which was more likely than not due to dirty coal.

        how do you propose to put a number on your life?

      • Steven Mosher, “how do you propose to put a number on your life?”

        Try a little tenderness :)

        http://slackhalla.org/~demise/test/socialattitude.php

        don't forget the external benefits.

        No doubt. An analysis that fails to account for external benefits would not represent a careful approach to cost/benefit analysis (including external costs) in the face of uncertainty, with a consideration of perhaps improbable but significantly dramatic impact.

        how do you propose to put a number on your life?

        A good question.

        As is the question of why people assume that they know how much “electricity” contribut[es] to the decrease in infant mortality” as compared to ” freedom of opportunity, including freedom to access credit; and economic protection from abject poverty, including through income supplements and unemployment relief.”

        http://books.google.co.uk/books/about/Development_as_Freedom.html?id=Qm8HtpFHYecC

      • Steven Mosher

        Joshua

        ‘As is the question of why people assume that they know how much “electricity” contribut[es] to the decrease in infant mortality”

        huh, I’ve never seen anyone say they know how much it contributed.
        I’ve seen studies showing how much it does contribute in urban versus rural studies of the contributing factors to infant mortality. But urban rural studies are just a hobby of mine.

        My argument isn't about how much it did or did not contribute. The point is rather this: to achieve what YOU ask for, a fair accounting of all this, the onus is on you to propose something. And it's my observation that you routinely (it's your motivated reasoning) fail to acknowledge the possibility that there are benefits. Maybe you just forget. But you might ask yourself why people constantly remind you that you forget the benefits, or even forget to mention the possibility of benefits, when you have good reason to believe that there are external benefits.

      • steven –

        you routinely (it's your motivated reasoning) fail to acknowledge the possibility that there are benefits.

        I think you are mistaken. I often mention external benefits along with external costs.

        I never fail to “acknowledge the possibility that there are benefits” if asked about it.

        Of course there are external benefits.

        As I said above, I am in favor of a careful approach to cost/benefit analysis (including external costs) in the face of uncertainty, with a consideration of perhaps improbable but significantly dramatic impact.

      • And Steven –

        “… the onus is on you to propose something. ”

        At the point where I begin to care about your assessment of what I'm responsible for, I'll let you know. Here's a hint – engaging in good faith would be a prerequisite.

      • Joshua

        What means do you use to quantify the benefits of a climate mitigation action? If actions are taken today that will result in CO2 levels being at 450 ppm vs 453 ppm in 2050 how do you determine or even theorize that there will be any benefit?

      • Rob –

        What means do you use to quantify the benefits of a climate mitigation action? If actions are taken today that will result in CO2 levels being at 450 ppm vs 453 ppm in 2050 how do you determine or even theorize that there will be any benefit?

        I don't know that there "will" be any benefit. I think that it is a possibility, and that is why I pointed to the complete certainty in Peter's statement. Such certainty, it seems to me, is not consistent with due skeptical diligence.

        There have been a number of studies done which have varying results w/r/t benefits and/or costs. And of course, those studies, then, need to be placed into the best efforts at quantifying uncertainty w/r/t the potential impact of ACO2 on the climate – with due consideration of potentially improbable but significant impact.

        But as far as I know, even those studies are not contextualized by a thorough approach to a full cost accounting that includes negative externalities (and because Steven is so obsessed with me and feels that harping on the obvious comprises an argument, I'll add, positive externalities also). I don't see how anyone can argue that they have a full grasp of the relative benefits and costs without a good faith and comprehensive effort to quantify externalities. For example, when I think of our current economic state in this country, I think of the enormous costs of keeping the fossil fuels flowing, in terms of more direct costs but also in terms of costs represented by healthcare costs, premature deaths, unnecessary deaths, environmental costs, etc.

      • Rob Starkey – Your question gets to the heart of the deception in alarmist theory. The proposed remedies do not come close to matching the proposed problem. And because most people are innumerate, they can be swayed by qualitative arguments that make no quantitative sense. Joshua is, either wittingly or unwittingly, in this camp.

      • Joshua

        Imo, there is a test of reasonableness, and the majority of climate mitigation actions fail the test of having a reasonable chance of producing worthwhile results for those incurring the expense (or for those living much later). Assume my example is in the ballpark (actions taken today result in CO2 levels of 450 ppm vs. 453 ppm in 2050). You can quantify the costs being spent today, but there appears to be no claimed benefit for incurring the expense. In order to justify today's expense, isn't it the obligation of those desiring others to incur the expense to show that it will accomplish something worthy of the expenditure? Now someone might like to claim that there will be fewer storms in 2050 or lower sea level as a result of the change, but we both know that is a deeply flawed analysis.

      • Rob –

        “… isn’t it the obligation of those desiring others to incur the expense…”

        First, as a kind of aside, I am not a big believer in “burden of proof” arguments. I don’t see “obligation” as residing on either side in these debates, nor do I see the goal of determining “obligation” to be consistent with constructive dialog (although it seems to be effective for self-justifying victimhood). But that isn’t what I consider to be the main point of interest in your comment, so let’s not get stuck debating that issue (I’ll just tell you my perspective and you can take it or leave it).

        "You can quantify the costs being spent today …. In order to justify today's expense, isn't it the obligation of those desiring others to incur the expense to show that it will accomplish something worthy of the expenditure?"

        IMO, if you haven’t quantified the negative (and positive) externalities, then your discussion about “expense” is based on poorly quantified certainty. My perspective is that in reality, without a more comprehensive approach, you can’t quantify the costs being spent today.

        What is the “cost” of the trillions spent – money that largely winds up in the hands of despotic governments that don’t give developmentally crucial freedoms to their populations – for the purpose of keeping oil flowing? What is the “cost” of particulate matter from burning fossil fuels? What is the “opportunity cost” of not building mass transportation infrastructure that would be powered by alternative fuels? (And because steven thinks that the argument is all about me, what is the benefit of reduced infant mortality resulting from increased access to electricity – as compared to the impact of civil rights or access to education or access to healthcare on infant mortality rates?).

        And as if those questions weren’t difficult enough, what is the “cost” of potentially delaying, at least to some degree, for at least some period of time, significant climate impact that might be improbable but that is not impossible, and during that time devoting resources to the development of new technologies?

        Obviously, a direct quantification of “benefits” from the kind of mitigation that you are referring to is very difficult. The question as to whether anything can (realistically) be done at this point in time that would significantly reduce climate change that might be “in the pipeline” is a very legitimate question, IMO. “Skeptics” are absolutely correct in asking those questions. Once again, however, I don’t think that a selective approach to uncertainty serves any purpose towards providing answers, and it doesn’t particularly matter which group of climate warriors is ignoring uncertainty. All it does is perpetuate the Jell-O flinging.

      • Steven Mosher

        Joshua

        "I think you are mistaken. I often mention external benefits along with external costs."

        I'm open minded. Here is a test.

        Go back through threads. Look at every comment where you used the word ‘externality’ or external

        Count the number of times you mentioned benefits BEFORE somebody reminded you to include it.

        Let's see if your count matches mine.

        google grep

      • Steven –

        You’re entitled to think the number is whatever you think it is. Knock yourself out.

        I sometimes mention positive externalities before being asked about them, and I sometimes don’t.

        I will always acknowledge that accounting for positive externalities is a crucial part of a careful approach to cost/benefit analysis.

        Of course it is.

        That is the point of the argument, IMO, although you certainly have a right to maintain your obsession with making the argument about me.

      • Who will toll the externalities?
        =========

      • Who will ignore the externalities?

        And be oh-so-confident when doing so?

      • Discerning the inherent subjectivities is hardly ignoring externalities, blithely or not. A swing and a miss.
        ===============

      • Uncertainty about the problem (man-made climate change) is a given; but uncertainty about the chosen solution is inexcusable. This is to say, we should be confident that our solutions are going to be effective, and the more expensive the solution the more confident we should be. In short, big responses require high levels of confidence that they will work. There seems to be a lack of credible evidence to demonstrate carbon pricing passes this test.

        http://jennifermarohasy.com/2013/08/why-the-ets-will-not-succeed-peter-lang/

      • Quite a valid way to characterize science. It's when people begin claiming certainty and full knowledge that I reach for my revolver, and double-secure my wallet pocket.

  5. Latimer Alder

    Nice gabfest.

    But what will have changed after the conference?

    Who/what will be doing things differently?

    • Perhaps a grad student or postdoc attending the workshop will be inspired to head off on some road less traveled, and 30 years later will, as a result, bump into something useful. This has happened before.

    • Latimer makes a good point. Why should experts share their insight with each other when they could just ask Latimer to tell them what to think?

      • @Joshua

        I’m always open for consultation. Very reasonable fees, waived for the deserving. I fear that, in your case, no reduction is envisaged.

        But the substantive point remains.

        Our academic colleagues can get together for three days in a nice part of the Netherlands and ‘share their insights with each other’ till they’re all blue in the face, overcome by too much food or drunk as skunks on cheap white wine.

        And if nothing changes at the end of it, it will all have been a complete waste of their time and our money. Good for the Air Miles, but bad for the fossil fuel consumption.

        Above a contributor suggested that the eventual results might not be obvious for 30 years…and may never occur at all. Though this might be 'the way things are done in climatology' (copyright P. Jones UEA), you'll forgive me for wondering if it's the best way for taxpayers' money to be spent.

        And without some definite objectives beyond ‘sharing insights’ it looks very like just a pre-Xmas shopping jolly to Europe.

        Has videoconferencing not arrived in academe?

      • Latimer –

        Given that you’re open to providing consultation services, I’m shocked that you haven’t had your door beaten down in the stampede of prospective customers. What could explain why it hasn’t happened? Must be some kind of conspiracy, eh?

        Our society, as a whole, has decided that it is worthwhile to support the development of academics. I’d say that all-in-all, it has paid off. You might think that you should be deciding precisely what activities do or don’t make sense for them to engage in. Who knows, maybe you would be a better “decider” than they. But while I’m sure that some mistakes will be made, I’d say that professionals like Judith have a pretty good sense of what kinds of activities will further their research trajectory.

        As for the rest of your comment… What value is there to be gained from your moralizing, finger wagging, and scolding here on this blog? Certainly it will have no effect on other people’s behavior. Is the benefit that somehow it makes you feel better?

      • Latimer Alder

        @joshua.

        I'm very happy for people to get together to 'share their insights'. In the Dog and Duck at the end of my street it happens between 5:30 and 8:00 pm most nights of the week. In other establishments it's called 'Happy Hour'. And no doubt academics, just like the rest of us, enjoy convivial company and a bit of office gossip. An occasional few pounds or dollars or Euros of their own money spent on a couple of drinks and a good old natter is probably a decent investment.

        But when it comes to an expensive week-long trip for dozens of participants, I’d hope to see something better in the purposes of the meeting than hand-waving ‘sharing insights’.

        And without some concrete objectives, any such gathering is likely to have no more lasting value than the Friday night cocktails. Attendance probably looks good on one’s CV – giving a paper even more so, but in terms of achieving anything for the taxpayer’s money invested, it seems to be very unlikely.

    • Baby steps.

  6. Morley Sutter

    Is there nothing on model skill or model testing? Are these not the "elephants" in the room?

  7. Looking forward to the follow-up post.

  8. Tell ’em there has been a ‘systematic neglect of relevant alternatives’.

    H/t Gee, I looked for the author and couldn’t find one.
    =================

    • Or, maybe start out with something other than "it's CO2 and we are going to prove it!"

  9. Ooh, probably Tetsuji Iseda.
    ===========

  10. So the conference is on today and tomorrow, and Gavin Schmidt is yet to write his presentation.

    Perhaps he wants to see Judith’s one first ;)

    I am sorry. For making forecasts 30+ years into the future, climate models can never be validated. This process simply takes too long. We know climate models have not been validated for time periods of 10 years; Smith et al., Science, August 2007. So we know such models are completely useless for giving advice to policy makers. In fact they are worse than useless. They give the false impression that the results mean something, when they are nothing more than guesses.

    Unfortunately this applies to the advice that has already been given, using these models, so immense damage has been done already. As Latimer has so rightly observed, this conference is not going to accomplish anything. "All sound and fury, signifying nothing" – Shakespeare.

    Someone who matters needs to state, in words of one syllable, that science, physics, cannot tell us what happens as we add more CO2 to the atmosphere from current levels. If that could be the outcome of this conference, then it might do some good. But with the content of your speech, and Gavin Schmidt as one of the speakers, such an outcome is almost certainly impossible.

    • Callendar created a simple first-order model in 1938.
      It validates.
      It is useful for policy.

      A zeroth-order model was created in the 1890s. It's also valid.
      It's also useful for policy.

      Hint: models don't have to be right to be valid, and they don't have to be correct to be useful.

      • What’s amusing is that the nth order, and expensive, GCMs aren’t useful for wise policy.
        =========

      • useful, adj: a description of an article which means pretty much whatever the speaker wants it to mean.

        synonyms: “fairness”; “for the children”

      • What policy?

      • Steven, you write "Hint: models don't have to be right to be valid, and they don't have to be correct to be useful."

        You seem to be being obtuse. I have no idea what you are getting at. Until a model has been validated, it is useless for making predictions about what happens in the future. Period. Once a model has been fully validated, it can make accurate predictions about the future.

        Everything else is just nonsense.

      • Steven Mosher

        Jim

        predictions don't have to be accurate to be useful.

        The weatherman predicted a 50% chance of rain today.

        That prediction was useful. I made use of it. I brought an umbrella to work.

        It’s sunny outside. Looks like his prediction was wrong. But it was useful.

        Models are not validated against reality. They are validated against a specification.

        You know operations research. We build models all the time that are never right, and sometimes never measured against reality, but they are useful.

        Think of any war simulation. Consider all the theatre simulations run before Desert Storm. You know about those, right? You know about all the theatre simulations run before the attack? Not a single one of those models had ever been measured (successfully) against real war data. Not a one. Yet those models were useful.

        In some cases we used commercial GAMES to understand things.
        Games that were never designed for this purpose.

        http://strategypage.com/wargames-handbook/chapter/9-wargam.aspx

        http://strategypage.com/wargames-handbook/chapter/Contents.aspx

        Since you don't make decisions, you don't get to say what is or is not useful.

      • Actually, a 50% prediction is not predicting rain. It is basically giving no prediction. A 60 or 70% prediction would be a prediction of rain.

      • Steven, you write "Models are not validated against reality. They are validated against a specification."

        I disagree. If the warmists are advising our politicians that we need to spend billions of dollars, and decarbonize our economy, then the predictions on which this is based MUST be validated against reality. If they are not, then the warmists are grossly misleading our politicians. Which they clearly are. That is the issue.

        You should see my post below on the statement that Baroness Verma made in the House of Lords. Presumably this statement was based on advice from the UK Met. Office and the predictions made by the climate models. I would be prepared to go against any scientist, and claim that the statement is just plain wrong. And it is supposed to be illegal for any Minister of the Crown to lie to Parliament.

        But that is the low level to which climate science has brought us, when the warmists think that it is worthwhile lying to Parliament in defence of The Cause.

      • Just to add to my post. Your statement that models are validated against specifications is in the same category as the claim that estimates are not categorically different from measurements. You make the statement with authority, presumably in the hope that no-one will challenge it.

        Both statements are just plain wrong.

      • Jim Cripwell,
        "Just to add to my post. Your statement that models are validated against specifications is in the same category as the claim that estimates are not categorically different from measurements. You make the statement with authority, presumably in the hope that no-one will challenge it."

        Nope, Steven is right. That just illustrates the problem with the models and the "measurements". Neither had a specified standard prior to design. We are battling free-form pseudo-science. Not that there is no validity to CO2-enhanced warming, just that the guys footing the bills never specified any design criteria, so there is no goal other than indicating CO2 has some impact on climate. They can call anything a "measurement" as long as there are no design tolerances. With tolerances, you tell them to store their estimates and models where the sun don't shine.

      • A pig in a poke. Cloud source an estimate of the pig’s weight and eyeball the volume of the poke.
        ================

      • Ted Carmichael

        Hi, Mosh. You said, “Models are not validated against reality. They are validated against a specification.” I’ve seen you state something similar before. You should take more care to clarify that, as an absolute, your statement only applies very narrowly, such as with software V&V. In a more general sense it is perfectly acceptable to validate models against reality – even software models (if the creator is interested in conducting simulation experiments rather than delivering a product).

        Simply put, verification is “are you building it right,” and validation is “are you building the right thing.” It is perfectly acceptable for “are you building the right thing” to be measured against real-world data, to see if all the theories and assumptions built into the model are sufficient to explain or predict a real-world phenomenon.

        I would go so far as to say that *most* models are designed to be measured against reality. They are, after all, simplified versions of something real. And so the validation process is used to see how well the model replicates reality (did you build the right thing?) relative to the model’s purpose.

      • > I would go so far as to say that *most* models are designed to be measured against reality.

        In that case we should talk of verification. A model is verified in reality. A model is validated by a theoretical apparatus.

    • Curious George

      “… climate models can never be validated.” While CAM 5 model has never been just validated, it has been “scientifically validated”. For what ends, I don’t know; I don’t intend to drag answers from NCAR any longer.

  12. Have a lovely time, kick butt and take names :)

    https://lh5.googleusercontent.com/-eC-JCT_YuTI/UnI3Gsocu7I/AAAAAAAAKR8/k86gbJ3rWpI/s645/land%2520use%2520and%2520best.png

    Then perhaps we can revisit land use and the hydrology cycle :)

  13. A fan of *MORE* discourse

    Judith Curry advocates “Increased focus on lower order models and creativity in experimental design using models that are less computationally extensive.”

    James Hansen et al practice Judith Curry's recommendation  "We use an efficient climate model to expand our estimated climate sensitivities beyond the Cenozoic climate range to snowball Earth and runaway greenhouse conditions … [namely] the simplified atmosphere–ocean model of Russell et al. which solves the same fundamental equations (conservation of energy, momentum, mass and water substance, and the ideal gas law) as in more elaborate global models. […] We use [this] global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapor elevates the tropopause."

    Judith Curry, aren’t James Hansen and his collaborators already following your modeling recommendations?

    Conclusion  The success of simplified Hansen-style climate models verifies and validates Curry-style model-design principles.

    • Yes, I found that a bit perplexing, given that Judith has been telling us for years that the models need to be much more comprehensive and that they render certain parameters too simplistically.

      • Nope, I have never argued for complex models. They get the fundamental atmospheric and ocean dynamics wrong; given that, adding complexity is pointless.

      • That is why the elementary mean value forcing function model works so well. Simple and yet to be invalidated.
        It accounts for the pause as well.

      • If I’m not too far off this sub-thread, “fundamental”, “comprehensive” and “complex” have relationships that can be compared to those of the more-commonly seen pair, “accuracy” and “precision”.

        A traditional example is the rifle that shoots high & to the right, into a small group; it is precise, but inaccurate. The rifle that clusters all its shots symmetrically around the bullseye, but into a larger group, is performing more accurately, but less precisely.

        Models can be comprehensive, without complexity. Sometimes we choose a non-comprehensive model, under the same kind of thinking & goals behind “isolating” a muscle or limb, in gym-work and exercise. Considerations of fundamentals, likewise.

        Sometimes, we find ourselves ‘obliged’ to ‘load’ a modeling-project with ad hoc, empirical factors …. like when we are groping our way into a phenomenon. Indeed, accommodation/use of complexities in a model is a sign that either the project is still immature, or that the subject is not amenable to (scientific) modeling.
        ===

        The use of the computer & programming resources to ‘man-handle’ complexities has contributed largely to our current troubles, in climate. Workers at all levels have accepted the presence of undesirable complexity, under the fallacy that since the computer can ‘do the bookkeeping’, the liabilities that complexity introduces into the model have been resolved, which is not true.
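
        A toy numerical version of the rifle analogy (purely illustrative numbers): bias captures accuracy, spread captures precision, and the two vary independently.

          import numpy as np

          rng = np.random.default_rng(1)
          bullseye = np.array([0.0, 0.0])

          # Rifle A: tight group offset high and right -> precise but inaccurate.
          shots_a = rng.normal(loc=[2.0, 2.0], scale=0.2, size=(20, 2))
          # Rifle B: centred on the bullseye but scattered -> accurate but imprecise.
          shots_b = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(20, 2))

          for name, shots in (("A", shots_a), ("B", shots_b)):
              bias = np.linalg.norm(shots.mean(axis=0) - bullseye)  # accuracy error
              spread = shots.std(axis=0).mean()                     # precision (smaller = tighter)
              print(name, "bias:", round(bias, 2), "spread:", round(spread, 2))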

      • http://map.nasa.gov/documents/3_07_Meeting_presentations/curry.pdf

        P.13 sounded like a call to improve the details and more accurately reflect the complexities.

    • A fan of *MORE* discourse

      Judith Curry asserts  “I have never argued for complex models. They get the fundamental atmospheric and ocean dynamics wrong; given that, adding complexity is pointless.”

      Judith Curry’s assertion notably lacks reason, citation, and verifiability; fortunately these deficits are repaired in this week’s Ars Technica survey Why trust climate models? It’s a matter of simple science

      There is a place within climate-science for multiple modes of understanding:

      • the simplest radiative balance arguments affirm global warming, and

      • paleo calibration affirms large global warming, and

      • simple Hansen-style dynamic models affirm global warming, and

      • large-scale dynamic models affirm global warming, and

      • thermometric, altimetric, and gravimetric observations affirm global warming, and

      • global-scale energy balance analysis affirms global warming, and

      • the ‘pause’ in the (large noise!) surface-temperature record apparently is ending!

      Conclusion  The era of rational climate-change skepticism is ending, so that sustainability will become the chief focus of public discourse in coming decades.

      This scientific evolution is pretty simple, Climate Etc readers!

      • “• the ‘pause’ in the (large noise!) surface-temperature record apparently is ending!”

        And of the 5 temperature indicators you only choose 1 to back up that claim. I wonder why… Could it have anything to do with the fact that the one, GISS, is the only one that is even close to showing September 2013 as the warmest? Fits your paradigm?

      • • the ‘pause’ in the (large noise!) surface-temperature record apparently is ending!

        Self-referential, circular citation.

        If we actually had a decent sign that the pause is ending, there would be nothing else in the news.

        ENSO is predicted to be neutral, through the spring of 2014.

        Solar cycle 24 just showed evidence of a “double-peak”, further-improving its curve-matching with previous solar-related severe cold-snaps.

        No … the AGW & CAGW communities are clearly holding their breath, hoping the pause simply continues as-is, and doesn’t turn into an overt cooling pattern.

      • Ya gotta love fan’s touching zeal. 14 and a half years of “pause” was denied. But one month of temp increase and “The pause is over! Yay! We’re all going to die!!!”

        15 years of “pause”? No big deal.

        A warmer than average September? Priceless.

      • The last year has been the most puzzling. According to the CSALT model, we should be seeing slightly elevated temperatures. This last GISS datapoint puts it back on track, but this has to be sustained as well.

        http://ContextEarth.com

        A good model like this, which includes a 6-month lead indicator such as SOI, makes it fun to do near-term projections. It has worked for 130+ years and I see no reason for it to stop working.

        See my latest blog post for more on this, linked above.

      • WHUT,

        I asked you on another thread, but it was busy so maybe you missed.

        Has your model been looked at by any of your fellow consensus types? If so, any links to their reactions?

        And have you run the model on paleo data?

      • WHUT,

        I’m an old WordPress guy, too. You installed WP Dot Org on your own Remote Host, or server? If so … you know, you’re ahead of WUWT, and Climate Etc. Oh! you’re UMN … so it’s on the campus network? Nice perk.

        But …. but – you got all your posts on the Front Page!? You should use More, and link to pages for each post. It’s right there in the Editor….

        And, your Introduction is your first Post, which is currently dead-last on a humongous single page. That’s a little Illogical. ;)

        Implementation & Execution counts … as always. “Ideas are a dime a dozen … Invention is 90% perspiration and 10% inspiration”. Letting the Site be a challenge for your visitor isn’t going to improve the prospects for your Model. Harsh, maybe; true, absolutely.

        If the current Big Models swan-dive into the scientific dry cement swimming pool, ‘alternative’ models (including your own) will see an intense burst of at least transient interest & cursory inspection.

        You should be ready … more-ready.

        Ted

      • Clayton,
        Everything you said was wrong, starting with the fact that you have no clue as to where the server is hosted. University? Too funny.

      • WHUT,

        … [Y]ou have no clue as to where the server is hosted. University? Too funny.

        Only you have used the word, “University”. My comment does not contain it.

        You're the one trying to promote a privately-developed Model. It's up to you to decide how much effort will go into making a success of its presentation.

        People who write papers for publication, seek & greatly value input & guidance from others, before they plop it down on an Editor’s desk. Putting forward a new Model, is an even bigger project, and has a greater need for vetting & proofing, than a mere paper.

        I could not see from a visit to your site, that you are even doing something specific, much less that it is supposed to promote a Model.

        You may think little of me … but my “experience” at your site is likely to be typical.

        Ted

      • Clayton, You are not very observant. I have a blog here, http://ContextEarth.com, which is used for discussion, but my research site is a semantic web server hosted at http://entroplet.com (which is actually an EC2 instance on the Amazon cloud). The latter contains all the models, data, algorithms, knowledge, etc.

      • WHUT,

        I am working outside & away, through the weekend, but I will visit Entroplet (nice name – as is ContextEarth!) as I can, and get the outline of your model sketched in my head.

        There really should be more of these kinds of efforts, and those who are making them, are the ones working in the right direction.

        More later. – Ted

      • This morning, I used my smidgeon of free/personal time to visit both of WHUT’s domains. My goal was to identify posts or pages there, that ‘introduce’ the Model that WHUT is working on.

        What I believe I saw, is that WHUT is working on a model-abstraction, or as it is sometimes otherwise known in programming & academic circles, an abstraction-layer. This abstraction is being called “Context Modeling”.

        In an effort to get a different ‘take’ on this Context Modeling idea, I searched for the term … and sure enough, there’s an article for it, on Wikipedia. Unfortunately, in terms of finding a ‘different’ perspective on the term/idea, it turns out that WHUT is also the author of the Wiki piece … using text from the websites.
        =====

        “Abstraction” and abstractions can be very powerful. They can also be “arbitrarily” opaque & arcane. Some of our most penetrating scientific advances, have been thanks to exercises in “pure” abstraction.

        The downside to abstraction, is that it can require a high level of previous preparation, before it is either comprehensible by, or of any utility to the student.

        A favorite example of the joys & tragedies of abstraction, is the realm of “pure” mathematics. Hairy-brain mathematicians are fond of reminding us, that math is NOT about “numbers”. ‘Say what?’, is the response of many a student.

        Aficionados of abstraction have been known to casually respond to complaints, that one will of course need to get a good education, before it is possible to follow what they are talking about. Or that one must buy a specialty text, which itself is arcane & abstract, and acquire in-depth command of it.

        I'm not going to be able to take those steps.
        =====

        What I need, and what I think other members of the audience would also like to see/have, is a 'pragmatic' or 'practical' description of the model that WHUT is working on. An 'example', or instantiation as it is known in the trade.

        In particular, the practice we have seen in these comments of pointing others to the base domain of the websites that WHUT has up, without a direct link to a post or explanatory page, leaves one facing the prospect of absorbing the entire body of research & activity before an inkling of what WHUT is up to can be had.

        Again, I don’t have the time to do that. WHUT could be the most brilliant climate-modeler on earth, but in the absence of effective communication, it’s possible that few if any others would ever know it.

        That, in a nutshell, is the occupational hazard of abstraction.

        Point me to a succinct & direct & ‘realistic’ introductory treatment of your project/Model, WHUT, and I will look at it.

        Ted


      • Point me to a succinct & direct & ‘realistic’ introductory treatment of your project/Model, WHUT, and I will look at it.

        Ted

        Context modeling of environmental models is a continuation of the famed Autonomous Vehicle Grand Challenge competitions that have occurred over the last decade. I didn’t come up with the name but my team was the only one involved in the funded infrastructure research so far.

        Google C2M2L

        The basic gist is that all future technology designs will take place in a virtual world and so the “context” of that virtual world (terrain, weather, etc) will have to be of sufficient fidelity to accommodate the verification of those designs.

        Long-term climate models weren’t in the original scope of context modeling but the research software is all open source so I have recently incorporated climate and fossil fuels into the mix.

        Anyone is free to come on board if they have something to add.

  14. In other news, Douglas Keenan tries to push another non-stationary model.

    Review by Richard Telford:

    > Replacing an oversimplified but informative model with a physically meaningless model is not progress.

    http://quantpalaeo.wordpress.com/2013/10/31/statistics-are-not-a-substitute-for-physics/

    One does not simply bash models while invoking statistical arguments, even if a few days lapse between the two hobbies.

    • Especially amusing is the second half of Doug’s little article. By Golly, now I’ll have to go read it.
      ==============

      • The Bish has it.
        ======

      • Amusing and revelatory the travails of Lord Donoughue. Audit that, willard.
        ==========

      • Here, Koldie:

        > While I enjoyed Parker’s talk, in the end I was not convinced. To begin with, we are left with the seemingly contradictory conclusion that assimilation models are not observation instruments, and yet produce observations. Second, van Fraassen’s idea was meant to apply to measurement, not observation. Parker acknowledged that van Fraassen distinguishes the two, but she treated them as effectively the same. Lastly, it is not clear what hinges on making the distinction that Parker is pushing, and indeed quite a bit of confusion may arise from blurring too much the distinction between actual (that is, empirical) observations and simulation outcomes. Still, the underlying issue of the status of simulations (and their quantitative outputs) as theoretical tools in science remains interesting.

        http://rationallyspeaking.blogspot.ca/2012/12/philosophers-and-climate-change.html

      • If one can remove other time series components from the one under study, that’s what you want to do.

        So if we remove the SOI, aerosol, LOD/StadiumWave, and TSI, we have a reduced time series which shows much less noise and needs another process to explain. Not too many options left, and the obvious one, growth of GHGs, fits the data mighty fine.

        Curry’s stadium wave concept was a breakthrough in pinning down the contribution of CO2 to the warming trend.

      • Hah, Donoughue ran into egregious obfuscation @ the Met Office, too.
        ===========

      • Indeed, but is Gavin honest?

  15. Hi Judith

    Hope you are having a good time in The Netherlands.

    Don’t forget to try some proper (not mass-produced) croquettes and bitterballen while you are there!

    As for the conference, while it lacks the pizzazz of resorts such as Cancun and Rio, it does seem to be an honest attempt to discuss the issues. Just a note – the Dutch are culturally inclined to a bluntness that your more diplomatic American manners might find a bit disconcerting.

    I was born there, raised in Australia (not a bad fit, culturally), but even I am sometimes abashed by the directness of a real Dutchie – not unlike the Scots. It’s nothing personal.

    Look forward to your news.

    • Johanna,

      I couple your cautionary words about Dutch bluntness with a recent article that said the average Dutchman is 6′4″ tall and growing, while the average American male is stagnating at around 6′. I will have to remember these facts the next time I find myself buying bread in Brussels.

  16. Climate models are tools of climate science. The world outside of climate science should be presented with the results of climate science, not specifically those of climate models. Sometimes the models can produce output that accurately represents the present scientific understanding, but it's more likely that raw model results are not fully representative of that understanding.

    From the Workshop website:

    As a consequence, climate models are of significant interest to scholars in philosophy, history of science, and science and technology studies.

    I would like to split that in two separate issues:

    1) The interest of all the listed parties in climate science.

    2) The role of large models in climate science or more generally in the analysis of complex systems.

    I’m afraid that handling both issues at the same time leads only to confusion.

    • That is a sage analysis Pekka. Always good to take into consideration the reality of the problem. +1

  17. David L. Hagen

    Congratulations on lead billing.
    On "lower order models", suggest raising the question of why skeptic models appear to be giving better predictions, with natural variations dominant.
    e.g. Nicola Scafetta
    Global warming prediction project
    Joseph d’Aleo and Don Easterbrook
    etc. etc.
    How do we restore the scientific method of weighing models and falsifying those whose predictions are poor compared to those that give better predictions?

    • You cannot compare a non-physical, dimensionally incorrect formula
      with a physics model.

      Well, you can, so let's compare them.

      Please provide the Taylor diagrams for Scafetta for the following metrics:

      1. Sea surface temperature
      2. Surface air temperature
      3. TLT
      4. Stratospheric temperatures

      Then compare those Taylor diagrams with the Taylor diagrams for
      the worst GCM you can find.

      Tell me who scores better.

      Why these variables?

      1. SST is a diagnostic that Bob Tisdale thinks is important and has criticized the GCMs for
      2. Surface air temperature is a metric I have criticized
      3. TLT is one that McIntyre has criticized
      4. Stratosphere is one that Santer has criticized

      So there we pick some metrics that skeptics and critics have looked at
      and we compare two models: Scafetta's and the worst GCM.

      Winner take all.

      Oh, and please provide Scafetta's data as used and code as run.
      You know his co-author Loehle has distanced himself from Scafetta's Mannian refusal to be transparent.
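
      For reference, the three statistics a Taylor diagram summarizes are straightforward to compute: the correlation with observations, the ratio of standard deviations, and the centred RMS difference. A minimal sketch (the series below are placeholders, not actual Scafetta or GCM output):

        import numpy as np

        def taylor_stats(model, obs):
            """Correlation, normalized standard deviation, and centred RMS difference."""
            m = model - model.mean()
            o = obs - obs.mean()
            corr = np.corrcoef(model, obs)[0, 1]
            sigma_ratio = model.std() / obs.std()
            crmsd = np.sqrt(np.mean((m - o) ** 2)) / obs.std()
            return corr, sigma_ratio, crmsd

        # Placeholder series standing in for, e.g., observed and modelled SST anomalies.
        rng = np.random.default_rng(2)
        obs = rng.normal(size=500)
        model = 0.7 * obs + rng.normal(scale=0.5, size=500)
        print(taylor_stats(model, obs))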

      • Matthew R Marler

        Steven Mosher: 1. Sea surface temperature
        2. Surface air temperature
        3. TLT
        4. Stratospheric temperatures

        That’s a start. We need regional variation in temp and rainfall, and we need the variation through all levels of the atmosphere. Rainfall is important because of the warnings of Steven Chu and James Hansen (and uncounted others) of permanent drought. Also important is cloud cover, because increased cloud cover occurs with increased temperature, so it damps and may reverse temperature increase.

        I support your insistence that all data as used and code as used be made available to the interested public. This past year there were two publications about climate in Science that interested me, where the authors did not post (links to) their data and code as used, and have not (at least not yet) responded to my request. I personally am not that important, but the violation of the written Science policy should be noted, and I am not the only person to have noted it.

        I also support your idea of systematically comparing model predictions to data. At minimum, to be considered seriously, a model should accurately forecast the spatio-temporal mean Earth surface or lower troposphere temperature during the next 20 years, don’t you think? We could do something like a cusum chart for each model (each GCM, or the GCM mean only; Scafetta’s model, webhub’s and Vaughan Pratt’s model, other models) and see which model has the smallest accumulated sum of squared prediction errors. Where models make contingent predictions (such as WebHub’s model that incorporates aerosols), each year’s model prediction will use the best available measurements of the variables they are contingent upon.

        For models that have nearly equal mean square prediction errors, we can look at other measurements, such as geographic distribution (especially sea surface vs land surface.)

        My main points are: (1) ranking the criteria in terms of importance (mean temp, variation in temp, mean rainfall, variation in rainfall, etc); (2) specifying the criteria in advance (instead of picking and choosing post-hoc, as you have done here) and (3) using all the data consistently across all models.
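
        A minimal Python sketch of the ranking exercise proposed above, with entirely synthetic numbers: it accumulates squared prediction errors year by year for several hypothetical forecasts of the same observed series and ranks the totals, in the spirit of a cusum chart. The model names and values are placeholders, not real forecasts.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(2014, 2034)
        # Synthetic "observations" for the next 20 years (stand-in only).
        obs = 0.02 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)

        # Hypothetical pre-registered forecasts, one value per year per model.
        forecasts = {
            "model_1": 0.03 * (years - years[0]),
            "model_2": 0.01 * (years - years[0]) + 0.05 * np.sin((years - years[0]) / 3),
            "model_3": np.zeros(years.size),          # "no change" baseline
        }

        # Cumulative sum of squared prediction errors, year by year.
        cusum = {name: np.cumsum((f - obs) ** 2) for name, f in forecasts.items()}

        # Rank models by accumulated squared error at the end of the period.
        for name, track in sorted(cusum.items(), key=lambda kv: kv[1][-1]):
            print(f"{name}: cumulative squared error = {track[-1]:.3f}")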

  18. Slide #10 of the presentation:

    Are GCMs the best tool?
    – Explore scientific understanding of the climate system

    GCM disadvantages:
    – Computationally expensive; many problems don’t require high resolution, complex physical parameterizations
    – Resources spent primarily on climate model development and IPCC production runs; little time and $$ left over for understanding
    – Diminishing returns on understanding from GCMs

    Other approaches:
    – Lower order models, larger ensemble size, parametric sensitivities
    – Plurality in climate model structural form
    – Semi-empirical methods
    – Theoretical advances are needed

    The lists of GCM disadvantages and other approaches seem to be well justified. One valuable property of a large comprehensive model is, however, that the same model may be used in a large variety of tasks. In each of those tasks a smaller model could be as good and computationally much more efficient, but preparing a separate model for each task may cost more than the inefficiencies and non-optimality of the large model do. In research on a very complex system some model is usually needed, and the difference between the various applicable models often concerns mainly the cost rather than the ultimate results.

    There are certainly many other respects in which plurality is important, and the greater transparency of simpler models very valuable.
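
    A minimal Python sketch of the “lower order models, larger ensemble size, parametric sensitivities” idea on the slide: a zero-dimensional energy-balance model, C dT/dt = F(t) - lambda*T, integrated over an ensemble of feedback parameters. Every number is illustrative, chosen only to show the shape of the exercise, not taken from any published study.

    import numpy as np

    def ebm(forcing, lam, heat_capacity=8.0, dt=1.0):
        """Forward-Euler integration of C dT/dt = F - lam*T.

        Units: forcing in W/m^2, lam in W/m^2/K, heat_capacity in W yr/m^2/K,
        dt in years; returns a temperature anomaly in K.
        """
        temps = np.zeros(forcing.size)
        for i in range(1, forcing.size):
            temps[i] = temps[i - 1] + dt * (forcing[i - 1] - lam * temps[i - 1]) / heat_capacity
        return temps

    years = np.arange(1900, 2101)
    forcing = np.clip(0.03 * (years - 1950), 0.0, None)   # idealized forcing ramp

    # One model structure, many parameter values: cheap enough to run by the thousand.
    lambdas = np.linspace(0.8, 2.0, 50)   # assumed range of feedback parameters
    ensemble = np.array([ebm(forcing, lam) for lam in lambdas])

    print("Warming at 2100 across the ensemble: "
          f"{ensemble[:, -1].min():.2f} K to {ensemble[:, -1].max():.2f} K")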

  19. But Dr. Curry, if computer simulations are deprecated in policy planning, what is left is precedent. Antarctic ice cores and geology provide the evidence for the last seven most similar precedents: seven times in the 800,000 year ice core record, CO2 levels rose 100 ppmv from 180 ppmv to 280 ppmv, and seven times sea levels sharply rose over 100 ft within a single millennium correlated with the CO2 rise.

    Now, correlation isn’t causation, but if you have abandoned models, you are left bereft of causative logic and can at best rely on precedent: a CO2 rise of about half a doubling equals rapid sea level rise of over 10’/century.

    Pick your poison. Are GCMs worse than precedent?

    • False dichotomy.

      And false precedent, BTW.

      Twofer!

      • Precedent, simple energy balance models, and more complex GCM and other models.

        That’s a three-fer.

        Plus we have empirical observations and a complete vacuum amongst the skeptics and deniers for a valid alternative model.

        Clean sweep.

      • WHUT,

        “Precedent, simple energy balance models, and more complex GCM and other models.

        That’s a three-fer.”

        Three different ways of arriving at failed predictions isn’t really anything to crow about. Two wrongs don’t make a right, and three is simply gratuitous.

        “Plus we have empirical observations and a complete vacuum amongst the skeptics and deniers for a valid alternative model..”

        The empirical observations scuttle your models, and tu quoque fallacies cannot save them.

      • The empirical observations agree with the CSALT model

      • Chief Hydrologist

        It is like talking to a goldfish.

    • Let’s go all the way back to the last 15 years (or so) for our precedent.

    • Bart, you write “Pick your poison. Are GCMs worse than precedent?”

      You have left out other options. One is to conclude that science, physics, cannot tell us what happens when more CO2 is added to the atmosphere. That, to me, is the most reasonable conclusion we can come to. But the warmists can NEVER agree that it is even a possibility, when it is, in fact, the truth.

      • Bart R

        “Simulation by equations is …”

        Equivalent to your example of “precedent”.

        You invent a nonsensical binary categorization, and demand that we operate by it. No thanks.

        “If you cannot tolerate the very tiny “wrong” in the equations, …”

        The wrong in the GCMs is not tiny. The errors are already known to be too large for the purposes to which the GCMs are being put, and it remains to be seen how large the error will ultimately prove to be.

        “… for whatever reason, then you are left with all the inferior methods, of which precedent is the obvious leading contender in making policy decisions.”

        No.

        You are a dope barty. We have had a recent rapid increase of CO2 of 120 ppm, with a significant amount of that occurring in the last 15 years. Where is the accelerated sea level rise that is going to get us to a rise of 100 freaking feet? Oh, I forgot about the volcanoes, the pipeline and all the Hiroshima bombs going off in the briny deep cold abysses. When all that sorts out, we will all be underwater.

        The Aussies have wisely kicked out the carbon tax clowns. Let that be a lesson to pols everywhere.

    • Pick your poison. Are GCMs worse than precedent?

      Certainly, potentially, “worse”.

      The utility or value of a flawed (bogus) model can actually be more adverse than merely “wrong”.

      • Ted Clayton | October 31, 2013 at 11:58 am |

        Chewfer on this.

        All methods are to some degree “wrong” by some criterion. Simulation by equations is the generally preferred least “wrong” method where the equations can be shown to be valid approximations that are most simply, parsimoniously and universally accurate or very nearly true based on evidence and inference.

        If you cannot tolerate the very tiny “wrong” in the equations, for whatever reason, then you are left with all the inferior methods, of which precedent is the obvious leading contender in making policy decisions.

        Certainly, “simple energy balance models, and more complex GCM and other models,” as WHT points out, are also available, but it appears these all fall within the equation-based methods Dr. Curry and her fandom are rejecting. Hence, precedent is the next obvious contender, and the dichotomy holds true.

        What else but “empirical observations” is precedent? What else is “a complete vacuum,” but no alternative at all? Hence we sweep aside WHT’s objection as false duplication of cases.

        Don Monfort’s simple-minded “last 15 years” is not the rescue he imagines, because then we must concentrate on the last 15 years very, very hard and ask all the rational questions of inference: why stop at 15 years, what exactly is special about the last 15 years, and what do we know about volcano influences and all other contributions to all 50 essential climate variables? We then find ourselves in a realm of greater uncertainty that obliges us to allow for more extreme possibilities, a more expensive range of preparations, and hence more reason to mitigate. I’m pretty sure no one — least of all Don Monfort — wants to increase our expenses by choosing inferior foundations for decision-making, to which I add coin-flipping, dice-rolling, religious visions, drug-induced hallucination, bribery by interested parties and pure guess.

        And let’s look at Jim Cripwell’s argument like grown-ups: “One is to conclude that science, physics, cannot tell us what happens when more CO2 is added to the atmosphere. That, to me, is the most reasonable conclusion we can come to.

        If we are left in an utter vacuum to found our decisions upon, then we are to make rather no decisions, and agree to nothing. We have no basis to agree to any taxes or any tax spending, any government programs or any government, any business decisions or any business. Jim Cripwell’s utter nihilism paralyzes all reason, and leaves us with no reason to move or breathe or think or do. How does this qualify as an “other option?”

        No. The dichotomy holds, among rational policy decision makers: GCMs — the logical extension of equation-based models, or precedent — the logical extension of historical models.

        And the precedent most like the issue at hand is furnished from the last seven events most like the current situation. Seven rises of 100 ppmv CO2. Seven simultaneous correlated 100’+ sea level rises.

        You can equivocate, moan, declaim or deny all you wish, but equations and precedent are the two choices rational policy makers have; all the rest are the very definition of irrationality.

        If you prefer irrational policy making, that’s fine. But don’t pretend you’re anything but crazy when you do.

      • Bart R,

        Isn’t the least wrong of wrong still wrong?

      • M. Hastings | October 31, 2013 at 12:51 pm |

        Is the least wrong of wrong still wrong?

        All of Science must have at least a little wrong left in it, else there would be fewer questions left to answer than we now have. We would know the weight of ‘Weakly Interacting Massive Particles’ making up ‘Dark Matter’ if either of those exist, or how to account for the gravity we observe holding galaxies tightly together that have so little mass from visible matter they ought to fly apart.

        We would know what genes are responsible for what traits in every species and how best to alter them to obtain the outcomes we desire. We’d know a great many knowable things we do not now have any good explanation for, so we know now that we are wrong in some ways in Science.

        Is Science more wrong than abandoning Science entirely? Than living as we did in 1600 AD, or 1066 AD, or 33 AD?

        Things aren’t all black and white reasoning, to be sure, but you’re looking for the parsing of a subatomic degree of wrongness to argue for throwing out the baby with the bathwater.

        Far better to proceed with the best of the wrong explanations we have — the simple, parsimonious, universal explanation that is most accurate — as very nearly true, and along the way should new data or new observation require us to amend that position, then at that point take up that next slightly less wrong explanation.

        We must have a basis for policy makers. You can resort to superstition, delusion or corruption if you really want, but I find those proceedings disgusting and repellent, a descent into madness, and the only alternatives historical precedent — which the ice cores tell us is that a 100 ppmv rise in CO2 risks a 100’+ rise in sea level as a consequence — or the inferred equations that tell us that with a doubling of CO2 level we risk substantial alteration of 50 essential climate variables and consequent costly and damaging outcomes.

        Do you prefer weak precedent over strong equation? That’s fine; there’s some rational foundation for precedent. But if you abandon reason for madness, why should we care what you have to say?

      • Bart, you write “If we are left in an utter vacuum to found our decisions upon, then we are to make rather no decisions, and agree to nothing. ”

        Complete and utter garbage. What I am talking about is what the scientists should say to the politician; not what the politician does. It is completely dishonest of a scientist to pretend that climate models predict what is going to happen in the future. That is the issue. Politicians are then left with the usual inadequacies of any situation to make their decisions. As Oliver Cromwell remarked “Politics is the art of the possible”. So don’t confuse political decision making with scientific advice. I am only talking about scientific advice, and I claim that all true scientists must be honest in giving this advice.

        Pretending that climate models give any idea of what is going to happen in the future is being scientifically dishonest. That is the issue.

      • Bart R,

        If you prefer irrational policy making, that’s fine. But don’t pretend you’re anything but crazy when you do.

        Politics, and policy, is and always has been, essentially irrational.

        If we could eliminate the irrationality, we would have Scientocracy.

        It is compelling, how immaculately free the history of the last few centuries is, of anything resembling movement toward Scientocracy.

        Rationality, in the societies & cultures that exist, is just a tool like a hammer. Nobody has made any real move to elevate mere tools to positions of power & leadership …. and I’m not going to even pretend to hold my breath that Science has any prospect of attaining Scientocracy.

        Much as I like it, respect it, have an affinity for it, and thrive within it; Science sucks hind teat.

        Ted

      • Bart R,
        “All of Science must have at least a little wrong left in it, else there would be fewer questions left to answer than we now have.”
        Obviously – the more we learn, the more we learn how much we don’t know.
        “Far better to proceed with the best of the wrong explanations we have.”
        I disagree – we need more information yet before making a decision that may do more harm than good. Why are you so convinced that we should hurry into a decision that may be wrong?
        “but you’re looking for the parsing of a subatomic degree of wrongness to argue for throwing out the baby with the bathwater.”
        Nothing could be further from the truth.
        “We must have a basis for policy makers. You can resort to superstition, delusion or corruption if you really want, but I find those proceedings disgusting and repellent, a descent into madness,”
        That’s an absurd statement and shows your maturity.
        Your tone suggests that you think you are absolutely and unequivocally correct; I suggest some humbleness, because you’ll soon find out you are not. It always happens; that’s science, don’t you know.

      • Steven Mosher

        Bart

        ‘Jim Cripwell’s utter nihilism paralyzes all reason,”

        The really shocking thing is that Cripwell used to work in Operations Research. So he should know better.

      • Steven Mosher | October 31, 2013 at 4:51 pm |

        Yeah, but Canadian Operations Research.

        I believe you’ve dealt with the problem of getting data out of Canada. It’s different up there. They’re kinda retentive.

      • Don’t forget that it was a Canadian, Harold Larnder, who “invented” Operations Research. I stand by my statement. It was my experience in OR that gives me the background to know that I am right.

      • Steven Mosher

        Bart, when time permits I may do a piece on the similarity between OR and climate science.

        I have a few examples from desert storm that I have to check through.

        Of course the analysis that Jim’s hero did was pure speculation.

        Thankfully the policy deciders listened. But if they had been Cripwell they would have done nothing.

      • Steven Mosher, “thankfully the policy deciders listened. but if they had been cripwell they would have done nothing.”

        Funny how climate science modelers’ not owning up to issues with unspecified, unvalidatable and unfalsifiable models might change that. Since them “good ‘ole” days, there have been some “model” issues in the economics field too, plus considerable revelation of less-than-“quality” or, if you prefer, “sub-optimal” professionalism in medical fields. Science is no better than the ethics of the scientists.

        Just today there was a paleo study of the Indo-Pacific region that noted that there have been “pulses” of warming around the Roman Optimum and the Medieval Warm Period. Michael Mann appears to be a bit taken aback for some reason. These “pulses” appear to have periods of roughly 5000 years. Imagine that?

      • captdallas wrote:
        “Just today there was a paleo study of the indo-pacific region that noted that there have been “pulses” of warming around the Roman Optimum and the Medieval Warm Period.”
        If one were to shuffle over to WUWT, they could watch a nice interview on the past 10,000 years and temps according to the study.

      • M. Hastings | October 31, 2013 at 2:49 pm |

        Actually, I think everyone on the planet has had 300 years to prove Isaac Newton incorrect on how to proceed in policy from scientific evidence, and no one has yet succeeded, though a few have, like Einstein did, extended Newton’s thinking.

        You don’t think there’s enough evidence yet for decision-making on this issue? So you’re averse to uncertainty; that’s fair enough, you can abstain from a role in deciding if you resort to the Precautionary Principle.

        I myself find the Precautionary Principle a better last resort than first reflex, and at best, once committed to it, every plausible effort should be made to obtain enough certainty, enough evidence, to proceed from indecision to the most prudent non-precautionary alternative.

        Those with tolerance for dealing head-on with decisions at lower levels of certainty are the ones we generally admire. We would never have launched into space if we found the levels of certainty we have about AGW insufficient. Columbus would never have launched into the New World. Heck, today science is more certain, and for better reason, of AGW than of the Higgs Boson, or the existence of extrasolar planets.

        So when you throw words like maturity and humbleness around, I believe you should look into them for yourself, and maybe also investigate the word ‘hypocrisy’.

      • Bart R
        We would never have launched into space if we found the levels of certainty we have about AGW insufficient …

        Fatuous drivel. Was there even the faintest of chances the space program would result in plunging the entire world into increased poverty, taxes and government?

    • Matthew R Marler

      Bart R: Pick your poison. Are GCMs worse than precedent?

      Good post. I would add that there are many precedents, over diverse time spans, and with non-comparable measurements in many time series. The GCMs attempt to incorporate all relevant ideas from all the precedents, and from laboratory results. If at some time they predict the spatio-temporal distribution of measurements accurately enough, then the model construction will be a great achievement.

      In the mean time, their inaccuracy over the last 15 years shows that the goal has not been achieved.

      To illustrate that there are many precedents, consider WebHubTelescope’s lnCO2 model, which incorporates the recent history (100+ years) of temp, CO2, SOI and others. It produces a dramatically different forecast for the effect of doubling CO2 than does the extrapolation of the precedents that you cited.

      I do think that some day humans will reap benefits from the developments and refinements of the GCMs. I just don’t expect it any time soon, say in the next 20 years.

      • Matthew R Marler | October 31, 2013 at 1:51 pm |

        Relatively, how inaccurate are GCM’s?

        We know they cannot be accurate about global temperature in any one-year, five-year, eleven-year or twenty-two-year span: because we know climate is not weather, and seasonal effects under the influence of short-term sub-annual events are truly beyond prediction past even a few days or weeks; because we know five-year spans depend on the influence of volcanic aerosols in the stratosphere, an utterly unpredictable incident; and because we know the Sun has slight variations that are also truly unpredictable outside of the Hale Cycle, and the Hale Cycle itself stopped correlating with the global temperature signal in the 1950s, though once it did correspond well with the ups and downs in global temperature.

        So on a more than 22 year span, how accurate are GCM’s, relative to all other predictions?

        Name the prediction that was closer than the GCMs. When was it made? By whom? How?

        Considering the purpose of GCMs is NOT predictive, how is their predictive skill a competent validation of their power?

        I, too, am concerned about what politicians are told in prelude to their evidence-based decision-making. If they’re told GCMs are predicting temperature rise, they’re being badly advised, clearly.

        If they’re being told GCMs prove that there is no better explanation for observed changes in fifty essential climate variables on the multidecadal scale than one where human industry is the major factor and perhaps accounts for up to 110% of the changes (that is, it likely reversed the natural tendency and then went up to ten times farther in the opposite direction from Nature) — then they’re being well-advised, however.

        Because then they can assess the Risk of continuing to take bribes from coal kings and oil barons to subsidize fossil fuel burning. Then they can assess the chance of high costs to the Economy of continuing any plan other than a low carbon economy and low carbon technology, globally and immediately. Because we know a low carbon economy and low carbon technology lack those Risks, and can be executed at lower cost than oil — face it, oil burning for stationary electricity was last financially viable in the 1970s — and at lower cost than coal by 2020, and lower cost than natural gas by 2025.

        Why prefer a riskier, more expensive set of policy options?

      • Matthew R Marler

        BartR: Considering the purpose of GCMs is NOT predictive, how is their predictive skill a competent validation of their power?

        I can live with that. As long as we agree that GCMs are not predictive, we have no need to compare them to future observations or other out of sample data. Or to compare their non-predictions to the predictions and non-predictions of other models. But now and again someone wants to use GCM output to justify a claim that the future will be bad if we as a species do not reduce CO2 emissions–those people need to be adequately informed that the GCMs are not predictive.

      • Matthew R Marler | October 31, 2013 at 5:22 pm |

        Right. Not that the future is predicted to be bad, but that the risk of the future being more costly and full of harms increases, if our governments continue to create affordances in infrastructure and tax policies for we as a species to continue our industrial CO2 emissions.

        Very important distinction. Quibble. Hairsplitting. Whatnot.

      • Matthew R Marler

        Bart R: for we as a species to continue our industrial CO2 emissions.

        The expectation that CO2 emissions might make things worse is not based on much better or more science than the expectation that CO2 emissions might make things better.

      • Well clearly warmer is better than colder, there is that. And if plants are enfranchised, it’s all cheroot.
        =============

      • Matthew R Marler | October 31, 2013 at 7:01 pm |

        Right, because flipping the coin of an incredible longshot that the Universe has suddenly developed a plan where the product of the gluttony, sloth and greed of a few means benefit for all is such a responsible way of going about policy?

      • > The expectation that CO2 emissions might make things worse is not based on much better or more science than the expectation that CO2 emissions might make things better.

        Citations needed.

      • Cut out the hysterics, barty. It’s really not working for you. The pause is killing the cause.

      • Matthew Marler “The expectation that CO2 emissions might make things worse is not based on much better or more science than the expectation that CO2 emissions might make things better.”

        Good original quote thanks Matthew.

        Willard asks for a citation but he is unlikely to find anything. The question is usually discussed in the literature from one aspect only and in both cases, the underlying science seems not settled.

      • Matthew R Marler

        BartR: Right, because flipping the coin of an incredible longshot that the Universe has suddenly developed a plan where the product of the gluttony, sloth and greed of a few means benefit for all is such a responsible way of going about policy?

        That comment is totally independent of all science.

      • Matthew R Marler

        willard(@nevaudit): Citations needed.

        For what? That plants grow better with higher levels of CO2? That warmer is better so far in human history? About the epistemic cavities in climate science?

        It is hard to provide a reference for something that is clearly not known, such as the fact that the effects of the less than 1% of downwelling IR in the sea surface are unknown. Yet there is no published analysis of the relative amounts of energy transferred to warming vs H2O vaporization. That’s a known unknown that a bunch of people here do not recognize as important.

      • > For what?

        For

        The expectation that CO2 emissions might make things worse is not based on much better or more science than the expectation that CO2 emissions might make things better.

        I don’t think that “yes, but plant food” substantiates this expectation very well.

      • Well, willard, warmer substantially sustains more total life and more diversity of life, so there you are.
        ==============

      • Our Venus, our fire
        At our desire

    • Correlation isn’t causation, but if it obeys the basic laws of physics it can be right.

      CO2 does go up and down because the huge oceans kick out CO2 as they warm and they suck it back in as they cool.

      For a trace gas to drive the temperature is really not something I can believe.

      For the oceans to drive the CO2 levels as they warm and cool is something that I can believe.

      Just like in a carbonated drink. A hot carbonated drink spews a lot of CO2 and a cold carbonated drink spews much less CO2.

      • Herman Alexander Pope | October 31, 2013 at 3:44 pm |

        You know, your hypothesis can be tested.

        We only have to look at the concentration of CO2 in the oceans and predict what ratio would hold under the temperature increase seen if it were indeed CO2 being released by the oceans causing CO2 to rise, instead of CO2 rise causing oceans to warm.

        Guess which of these two alternatives (your made-up one spawned by futile hopes reality isn’t harsh and unforgiving of greed, gluttony and sloth vs. there are consequences for actions) the data supports?
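
        A minimal back-of-envelope Python sketch of the kind of check proposed above, using only the commonly tabulated Henry’s-law constant for CO2 (about 3.4e-2 mol/(L atm) at 25 C) and its van ’t Hoff temperature coefficient (about 2400 K). This captures only the solubility piece, not full seawater carbonate chemistry, so it illustrates the size of the effect rather than settling the question.

        import math

        KH_298 = 3.4e-2      # mol / (L atm), Henry's-law constant for CO2 at 298.15 K
        VANT_HOFF = 2400.0   # K, van 't Hoff coefficient d(ln kH)/d(1/T) for CO2

        def henry_constant(temp_k):
            """CO2 solubility constant at temperature temp_k (van 't Hoff form)."""
            return KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))

        t0, t1 = 288.15, 289.15   # a 1 K warming of surface water, for illustration
        frac_change = henry_constant(t1) / henry_constant(t0) - 1.0
        print(f"Dissolved CO2 per unit partial pressure changes by {frac_change:+.1%} per K")
        # Roughly -3% per K: warming alone does push CO2 out of solution, and the
        # question is whether an effect of that size can account for the observed
        # atmospheric rise once the full carbon budget is tallied.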

      • “CO2 does go up and down because the huge oceans kick out CO2 as they warm and they suck it back in as they cool.”

        It seems warm oceans hold less CO2. Why? I wonder if CO2 in the oceans might affect the ocean’s ability to release heat to the atmosphere. Step one of moving heat away from Earth.

        If CO2 in water affects its heat retention, that would fit with a defensive Earth: hoarding heat in the oceans during a glacial and emitting it during an interglacial.

        Now, standing at the end of an interglacial, the oceans may have run out of enough CO2 to emit.

      • No Ragnarr,
        You can’t have our loony left wing radical GIAA hypothesis, that belongs to us progressive nutcakes.

        Anyway, there is plenty of CO2 in the oceans, try the infinite buffer capacity theory, it sometimes pops up at WTFUWD. Infinite in the raising pH direction anyway. Lots of carbonate rocks that can dissolve under the right conditions.

      • bob droege:

        I found some support for more CO2 from the oceans to the atmosphere with the current situation. I look for a resilient Earth. A climate romantic (of, characterized by, or suggestive of an idealized view of reality)?

        Perhaps with the CO2 response with warming oceans, emergent behavior. This is all so sketchy on my part.

      • Herman Alexander Pope,

        CO2 does go up and down because the huge oceans kick out CO2 as they warm and they suck it back in as they cool.

        “Carbon dioxide leads temperature”.
        “Temperature lags carbon dioxide”. (‘lag(s)’ is the better search-term)

        If this is actually how it works with climate & CO2, then climate-models are on-track to be one of the most spectacular scientific train-wrecks in history.

        Certainly, this is not only Chemistry 101 (“equilibrium shift”) … it is all-too-obvious to carbonated beverage users.

        Tracing just how CO2 leaves the atmosphere, and enters the water-cycle, really-and-truly, has not been easy. A breakthrough that shows convincingly how this really happens, could be the linchpin. We have CO2 satellites going up now or soon, that might move us toward this outcome. Or we might have to wait for another generation of them, for them to be able to cinch the CO2-sink question (better resolution) … and other approaches might beat the satellites to the answer.

        Then again, one can suspect that current CO2 satellite projects understand perfectly well that they might be able to do this, and are quietly preparing & expecting to do so. Doing so, after all, will be a slam-dunk Nobel Prize.

        Ted

      • Herman Alexander Pope has an excellent visual demonstration of outgassing and subsequent reabsorption of CO2 gas. He secured a latex glove over the top of a beverage bottle, alternately warming it (inflating the glove), and then shriveling it up by chilling it.

        I am imagining the fun that might be had, using a condom instead of a glove. The animated GIF could be a real winner.
        ===

        Pressure also affects the solubility of CO2 in water. Solubility increases dramatically as pressure rises to 500-1000 psi, and then less dramatically thereafter.

        As CO2 dissolved into the sea makes its way to greater depths, the amount of gas that the water can accept increases rapidly at relatively shallow depths, and then continues to increase more slowly, at ever greater depths. Temperature falls with depth, too.

        The combined increasing pressure & falling temps of the ocean depths make them a true & potent ‘sink’ for CO2.

        When a container is lowered to depths in the sea, closed tightly while at the bottom, and then brought back to the surface, the water in the container will fizz, when the ‘cap’ is taken off.

        Ted – [I will be away for the rest of the day, shortly.]

      • Ted Clayton | November 1, 2013 at 9:53 am |

        Hate to burst your little bubble, but what you’re describing is one of the problems with throwing all this extra CO2 into the atmosphere-ocean system.

        http://natgeotv.com/uk/killer-fog/videos/tragedy-at-lake-nyos

        Drive enough CO2 and heat at the same time into a large enough reservoir of water that it forms a relatively still pool of supersaturated CO2 solution under high enough pressure — and no water on Earth is under so much pressure as the deep oceans — and at some tipping point (in the correct sense of the phrase) some forcing external to the supersaturated equilibrium will disturb it. It needn’t be a large forcing, either, as little as the fall of a single bit of plankton into the interface will do, and BOOM.

        An ocean-scale Lake Nyos explosion.

        What are the odds of this? Lake Nyos sat calm with no sign it would erupt in a cloud of lethal CO2 to blanket the region for centuries or millennia. So maybe the chances of any one of the candidate deep CO2 supersaturated reservoirs creating a near-pure CO2 bubble the size of Greenland are small. How would you model that?

      • Bart R | November 3, 2013 at 10:02 am |, asked;

        How would you model [a deep oceanic CO2 supersaturated reservoir, and its destabilization]?

        Preferably, from orbit. Another planet? ;)

        Firstly, afaik, we don’t have the kind of saturation in ocean reservoirs to create a spontaneous, in situ event of this sort (and aren’t creating it by burning fossil fuel). Even relatively isolated ocean basins are not nearly as still and free of disturbances as Lake Nyos (and don’t have nearly the CO2-injection … Nyos sits on an active volcano).

        Pretty much, we would need a large disturbance to destabilize unsaturated gases … such as, say, the injection of heat from the bottom by a monster fissure-volcano. And even then, it would not be the positive-feedback, ‘cascade-failure’ type of event seen at Lake Nyos.

        The bottom line though, is that evidently no such catastrophic oceanic CO2 eruption has ever occurred. If it had, we would be able to read the story in geological formations, from 10s & 100s of millions of years ago.

        At Lake Nyos, water gushed hundreds of feet into the air, and a tidal wave over 80 feet tall scoured the surrounding slopes. In a similar ocean-event, waters easily running to 1,000s of feet deep, would slosh long distances across continental terrain. The scarring & sediment-deposits would be unique, conspicuous, and awe-inspiring.

        In the Northern Rockies, at the end of the last Ice Age, and onset of the Holocene, the glaciers & icesheets impounded large lakes, and then occasionally collapsed, emptying them suddenly. The scars in the flood-channels from these events are clearly visible, today, even from space. Pioneers arriving in these districts looked at the prehistoric devastation, and took it as proof of the Biblical Flood.

        Geologists would have noticed formations indicative of oceanic incursions onto the continents … but when this possibility occurred to folks, earth-scientists quietly went around and reviewed the landscape, looking for signs that were somehow not previously noticed (it would be pretty hard to miss this kind of stuff). And with a discreet sigh of relief, they shook their heads. “Nope. Never happened”.

        Not in the last few hundred million years, anyway.

        And of course, we drop probes into those deep & quiet ocean basins, and know what the CO2 concentration is in them. There’s nothing hair-raising happening out there.
        =====

        It’s also a military/Navy matter. High gas-bubble content in water reduces its density dramatically, and vessels will just … drop. If you could destabilize the CO2 under a certain area (say by setting off depth-charges, or other explosives), even well short of having to worry about suffocating or causing waves, you could destroy enemy vessels, and whole fleets. This possibility has been studied, investigated & experiments run. “Nope”.
        =====

        There are models for this kind of thing, Bart, if that is really what you’re looking for. This a well-developed part of Chemistry curricula … there are safety issues at stake in labs, and Rules in place, to forestall flying hot liquids & acids, etc. You would start with Chemistry Lab Class, and then account for the ways an ocean isn’t a beaker, and vice-versa.

        Ted

      • Ted Clayton | November 3, 2013 at 10:58 pm |

        Nicely said, sir!

        I will now have to drop the Lake Nyos IQ test from my armory, though it has withstood the Denizenry here from almost the inception of Climate Etc.

        You’re the first to challenge it with fact. While I can’t say you win the Internet, you certainly beat Climate Etc.

        So what about the other, less extreme, influences of CO2 on the WMO’s 50 Essential Climate Variables, plus on biology?

        Do those all (somewhat miraculously) end up benign or beneficial to everyone by pure coincidence?

      • Bart R | November 4, 2013 at 4:29 pm;

        Thanks! I’ll try to live up to the kind words. :)

        So [are] the other, less extreme, influences of CO2 … [all] benign or beneficial to everyone by pure coincidence?

        Roughly speaking, yes, probably.

        The greenhouse-effect of CO2 does not appear to initiate any robust positive feedback paths. If this was really in the cards, then lurid outcomes would always be lurking in the wings. Early in earth’s (living) history, a pulse of CO2 would have set off an irreversible runaway greenhouse feedback, and earth today would be totally unrecognizable.

        So no: positive CO2-mediated feedbacks do not become dominant. Any that exist, are reliably negated or reversed by ‘trump’ effects.
        ===

        In the reality that I occupy, the wild climate-fluctuations of the Ice Age, over the last 2-3 million years, show us that climate slam-bangs off the walls very dramatically, without any contribution from Industrial CO2. “Naturally”.

        The Ice Age consists of a couple dozen ‘rough’ quasi-cycles, each cycle running about 100,000 years cold, and 10,000 years warm. Peppered (thickly) across that record, are large numbers of more or less violent climate-excursions, both warming & cooling in nature. Many of those climate-changes were essentially “Day After Tomorrow”-style “abrupt climate change”, to within the finest resolution that we can attain. Totally naturally.

        No; I don’t think ‘humans are doing it’, or that CO2 is the mechanism behind it.

        The Chit Will Hit The Fan, shortly after we enter the next minor cooling-spell. “… Five, Four, Three …”

        Ted

      • Ted Clayton | November 4, 2013 at 6:48 pm |

        A good answer, but not an answer to the question asked.

        Runaway GHE on Earth due to human activities? That’s incredibly unlikely to happen, requiring so much warming that water would stop precipitating out of the atmosphere in liquid form, and/or that the rate at which water vapor in the upper atmosphere loses hydrogen ions to space through UV splitting would shift up by orders of magnitude. Were such things to come about, we’d have other problems to deal with.

        If you define all outcomes as either “everyone dies everywhere runaways”, or “beneficial”, your answer does address what I asked.

        But my question wasn’t about runaway effects, as runaway effects are the least likely of the negative, non-benign, non-beneficial outcomes.

        What are the odds that by coincidence the cost to me of your industrial CO2 emission is lower than the benefit I get from your burning of carbon?

        Probable?

        That isn’t even remotely plain.

        Also, “Nature does it, so why can’t I?” doesn’t cut it. If Nature jumped off a bridge, would you do it too?

        The dominant factor in the ~100,000 year glacial cycles appears to be the tilt of the Earth (and other astronomical co-factors), linked to the North-South asymmetry of the continents. The Vostok ice cores show that the past seven or eight times this happened we did have 10,000-40,000 year phases of mostly warm alternating with mostly cold, and yes, there were some violent downward temperature events, apparently linked to volcanic activity for the most part.

        But we haven’t had a capstone event on record since the Eocene that sent CO2 up to 400 ppmv, and in those days, millions of years ago, Baffin Island was hot enough that camels evolved there.

        What do we know of CO2 rises on the scale of over 100 ppmv in a short span of time? The seven most recent such rises corresponded with sea level rise in excess of 100′ in as little as a century or two, according to Vostok. How is that beneficial? How is that benign?

        Don’t think it’ll be as bad as 100′? Will it be only 50′? Twenty feet? Twenty feet displaces a billion people, and makes coastal development depend on building and engineering techniques that don’t exist yet. How is that beneficial? How is that benign? How is any beneficial or benign scenario more probable than this outcome?

      • Bart R | November 4, 2013 at 4:29 pm objected;

        Runaway GHE on Earth due human activities? That’s incredibly unlikely to happen, requiring so much warming that water would stop precipitating out of the atmosphere in liquid form …

        The stock phrase is, I believe, “the oceans will boil off”.

        Like the Lake Nyos ‘story’, I didn’t invent the positive-feedback of CO2-driven warming. It’s an integral part of the AGW ‘story’; it’s what “drives” climbing temps … with clear logical consequences that are as patently-outlandish as the Nyos-exploding-oceans scenario.

        Ted

      • Fine discussion even with Bart’s Monopoly Money and the bridge he has to push off on you.

        Whoa, camels on Greenland! What would the Vikings think?

        Sure Bart, changes; more likely beneficial than not, by far.
        =================

  20. DuWayne Smith

    Judith, you are on a good path, but progress is excruciatingly slow. The 60-year cycle, or Wyatt/Curry wave, makes all the difference in the world with respect to global temperature forecasts. The GCM’s cannot get the right answer if they do not replicate the cycle. The aerosol “plug factor” used to replicate (explain) the 1947-1977 flat period has no scientific (or logical) basis. As a result of this ignorance of the cycle, the GCM’s are useless.
    Akasofu’s very simple model gives a better forecast than the GCM’s because it incorporates the cycle. In my opinion, his forecast could be improved by recognizing the increase in warming of the last cycle due to CO2. But these simple models are better because they incorporate a major feature of the climate unrecognized by the GCM modelers.
    I know none of this is news to you, but will it take 5 precious years to replace the faulty GCM’s with models that work much better?
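
    A minimal Python sketch (an illustration, not Akasofu’s actual code) of fitting the kind of semi-empirical model described above: a linear recovery trend plus a roughly 60-year oscillation, estimated by ordinary least squares. The temperature series here is synthetic; with a real record one would substitute observed annual anomalies.

    import numpy as np

    years = np.arange(1880, 2014)
    rng = np.random.default_rng(2)
    # Synthetic stand-in anomalies: weak linear trend + 60-yr cycle + noise.
    temps = (0.005 * (years - 1880)
             + 0.1 * np.sin(2 * np.pi * (years - 1940) / 60.0)
             + 0.05 * rng.standard_normal(years.size))

    period = 60.0   # assumed cycle length in years
    X = np.column_stack([np.ones(years.size),
                         years - years[0],
                         np.sin(2 * np.pi * years / period),
                         np.cos(2 * np.pi * years / period)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

    trend_per_century = coef[1] * 100
    amplitude = np.hypot(coef[2], coef[3])
    print(f"fitted linear trend: {trend_per_century:.2f} K/century, "
          f"60-yr cycle amplitude: {amplitude:.2f} K")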

    • The “aerosol plug factor” barely affects the midcentury pause.

      It has more to do with the fluctuations in the SOI and the stadium wave.

      The Agung eruption in 1963 is about the only natural source of aerosols, and the man-made ones have not shown any modulation, so they are not good candidates for replicating a pause.

      You just have to do it for yourself.
      http://entroplet.com/context_salt_model/navigate

    • [W]ill it take 5 precious years to replace the faulty GCM’s with models that work much better?

      Given sufficient incentive, current models could be abandoned in a year, maybe less. Strong cooling, or other ‘wild’ deviations, could do the trick. It isn’t necessary, though, that we *replace* current climate-models.

      We may see, as with Biology, a fragmentation of modeling topics. We have many ‘little’ bio-models … and when we go to large-scale bio-eco treatments, they are very simple.

      • “Given sufficient incentive, current models could be abandoned in a year, maybe less.”

        Not. A. Chance.

        The models are performing perfectly, as far as their intended purpose.

        They are useless when it comes to predicting long term global temps.

        They are useful as hell at maintaining and increasing progressive government control over the energy economy; and in generating funds for both the politicians and the modelers.

        The GCMs won’t go, and the IPCC won’t go, until there is a change in political leadership, in the US at least.

      • Not. A. Chance.

        Yes, but … however!

        What’s a “sufficient incentive”?

        The GCMs won’t go … until there is a change in political leadership, in the US at least.

        OK … Major mid-term leadership-struggle gets serious, late next summer. The voting public determines the outcome. Given spectacular but feasible weather-events and ice-growth, etc, voters could well rebel against Government-sponsored climate-activism.

        Politicians will respond to voters. The quickest, surest, and most-likely route to GCM reform or abandonment, is via exactly the same tool by which it now lives. Politics.

        He who lives by the gun, dies by the gun.

      • Ted Clayton,

        Oh I agree politics is the best way to force abandonment or reform of GCMs, it’s the only way, as my comment indicates.

        But public opinion and mid-term elections can only take you so far. Public opinion was enough to stop Copenhagen. It was enough to stop the first run at socializing healthcare, Hillarycare.

        But it is one thing to stop the progressives from doing something. It is quite another to force them to retrench once they achieve a tactical victory. Not one of the failed “social” programs they have enacted since Roosevelt has ever been undone, to date. Welfare was reformed over much wailing and gnashing of teeth, but Obama is quickly undoing all that already.

        That does not mean they can’t be, just that it takes more than a change of a few congressional seats, particularly if those seats are held by progressive Republicans.

        A lesson it looks like the Aussies may learn this time around.

      • Dio. II Syr. Sword.

        But tell me more about the ‘very simple’ bio-eco treatments. I’m strangely attracted.
        ============

      • Dio. Deux. Syr. Swor. is prolly bettah.

        Hey, attracted enough to go look for myself. Thumb through my thumber’s index, finger out that simple stuff.
        ====================

      • Good gracious, how revolting, how repelling….er how revolutionary, how rebellious; something more complicated than climate.
        =====================

      • The chaos theory of politics? Helter skelter? Oy.

      • Kim,

        But tell me more about the ‘very simple’ bio-eco treatments. [Egad! … runs for door.]

        Biology modeling ‘degenerates’ into ecology modeling. Ecology degenerates into human demographics, which is then subsumed by politics.

        But the models themselves are (logically, scientifically) simple, or are based on versions that were (often gnarly – but simple – math, or maybe not). Variables are small in number. The inputs & outputs are limited (even Standardized). Hairy versions often take pains to trace their pedigree (math-proof fashion) to napkin-grade constructs.

        [If you know about database history, these models arose somewhat like the “relational algebra” of modern (relational) databases. Very simple, at the base. Fully provable, step-by-predicate.]

        Bio & eco modeling predate the climate-modeling we are now familiar with. It is very much to their advantage, that they happened to have their heyday, when BASIC was the means to implement models in a Thesis.

        It is one of those grisly little facts that we sometimes wish weren’t so … that software is inherently somewhere between very and profoundly “conservative”. The field of ecology remains organized closely around programming-concepts that can be, and originally were, written from the ground up, by one person, or a small team, in a rather short time period. Mainly because they were.

        [That software is intensely conservative, bodes poorly for the “reform” of GCMs, etc. Such work is called “refactoring”, today. It used to be called “porting”, or just “rewriting”. It is not real popular, and pointedly enough, is far less practicable, with codes that are ill-formed, ill-mannered, ill-conceived … spaghetti, ca-ca, doo-doo, etc. (Climate software coding is over-the-top infamous, notorious, Untouchable, within the general self-respecting programming community.)]

        Thus, ecological concepts & methods … used the Divide & Conquer strategy. The problem is broken down into pieces that are by definition within the grasp of a person. You tend to end up with a daunting proliferation of models, but they all are, or reference down to, bite-sized projects.

        There are ‘classics’ which are the key to much of the later & derived work, which can get a little overwhelming (cough).

        [Ewww …] something more complicated than climate.

        Yeah, I wouldn’t jump up to argue that … that “life” is more complex than “climate”.

        But the software situation of bio-eco versus climate, despite the different historic settings of their development, certainly strikes me as dominated by the different political roles they fell into, rather than the complexity of the subject matter. Frankly, the political exploitation of ecology pointed too-directly & too-rudely to outcomes that too-overtly resembled sneaky forms of genocide, etc [Eugenics on massive steroids] … the whole scene became ‘too hot to handle’, long ago. Even today, the Administration takes heat for its inclusion of a man who once dabbled in/espoused the ideas/models of politicized ecology.

        Ted

      • Thanks for the illuminating backstory, Ted; ten minutes after lift-off the vehicle was veering off course, and I bailed.
        ===============================

  21. I refer to a story on GWPF this morning. http://www.thegwpf.org/uk-government-no-global-cooling-centuries/

    I quote “The Parliamentary Under-Secretary of State, Department of Energy and Climate Change (Baroness Verma) (Con): The UK government has made substantial investment in research that concerns the likelihood and timing of future changes in global and regional climate.
    All of the climate models and policy-relevant pathways of future greenhouse gas and aerosol emissions considered in the Intergovernmental Panel on Climate Change’s (IPCC) recent Fifth Assessment Report show a long-term global increase in temperature during the 21st century is expected. In all cases, the warming from increasing greenhouse gases significantly exceeds any cooling from atmospheric aerosols. Other effects such as solar changes and volcanic activity are likely to have only a minor impact over this timescale.”

    This is an official statement made by a Peer of the Realm in the House of Lords. It vividly illustrates the immense damage caused by the improper use of climate models. The statement by Baroness Verma is wrong in so many ways that I don’t propose listing them. They should be obvious to our hostess and readers of CE.

    Surely we need to stop this nonsense before even more extensive damage is done. But no-one who matters has the gonads to do it.

  22. One of the main reasons for the use of large models of complex systems is that they allow for including a large amount of diverse information in a single framework. When the empirical data is as sparse and of as many different types as the data on the present and past Earth system, a reasonable hypothesis is that a large model could be constrained sufficiently by the data to result in a model that has predictive skill. It’s clear that this approach is not an easy one. Initial failures do not prove that additional work cannot lead to a better outcome, but it’s also possible that progress remains weak.

    A fundamental difficulty of the approach is that testing the resulting model is difficult at every step of the development as long as the model has not reached such maturity that it’s locked. Only when that’s done and when a large amount of additional data, not used directly or indirectly in its development, is available, is interpreting the statistical significance of test results more straightforward.

    • “When the empirical data is… sparse… a large model could be constrained sufficiently by the data to result in a model that has predictive skill.”

      I am not sure what you mean by “sparse data,” but suppose I take this to mean that many estimates that “constrain the large model” have relatively large standard errors. In that case, the claim you make here is demonstrably false. For instance, you do not necessarily improve the predictive skill of a linear model by adding “true predictors” to it: For that to occur, you have to have a sufficiently precise and sufficiently unbiased estimate of the parameter that multiplies that predictor. Given that the causality arrows are pointing every which way in complex systems, the claim that all of the parameter estimates are unbiased is (to me) a priori implausible; and if the data that constrains those estimates really is sparse, the idea that the parameter estimates are sufficiently precise is also dubious.
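
      A minimal Python simulation of the point just made: adding a “true” predictor whose coefficient is small and poorly estimated can degrade out-of-sample predictive skill relative to the smaller model. The numbers are arbitrary; only the qualitative comparison matters.

      import numpy as np

      rng = np.random.default_rng(3)
      n_train, n_test, n_sims = 30, 1000, 500
      beta = np.array([1.0, 0.05])          # the second (true) effect is small
      mse_small, mse_big = [], []

      for _ in range(n_sims):
          X_tr = rng.standard_normal((n_train, 2))
          X_te = rng.standard_normal((n_test, 2))
          y_tr = X_tr @ beta + rng.standard_normal(n_train)
          y_te = X_te @ beta + rng.standard_normal(n_test)

          b_small, *_ = np.linalg.lstsq(X_tr[:, :1], y_tr, rcond=None)  # omits predictor 2
          b_big, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)           # includes it
          mse_small.append(np.mean((y_te - X_te[:, :1] @ b_small) ** 2))
          mse_big.append(np.mean((y_te - X_te @ b_big) ** 2))

      print(f"mean out-of-sample MSE, small model: {np.mean(mse_small):.3f}")
      print(f"mean out-of-sample MSE, with the extra true predictor: {np.mean(mse_big):.3f}")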

      • I mean that there’s a lot of data, but it comes from so many different sources that it cannot be used in a linear fashion, where one part is used to deduce some conclusions, the next related data allows one to extend those, and that approach is continued until a full picture is formed.

        With sparse data the things that can be deduced directly are disjoint, and do not allow stepwise extension to form a coherent picture. Assuming for the sake of argument that a basically good model is available, but its parameters are unknown, such sparse data may suffice to fix those parameters well enough to result in a model with predictive skill. Whether that works operationally with Earth system models is not obvious, but that’s a possibility.

        When a model is developed on this basis, its testing is difficult, if not impossible, until a significant amount of such data becomes available that was not known during the development of the model.

      • Thanks for staying with me on this.

        “[1] Assuming for the sake of argument that a basically good model is available, but its parameters are unknown, such sparse data may suffice to fix those parameters well enough to result in a model with predictive skill… [2] When a model is developed on this basis its testing is difficult, if not impossible until a significant amount of such data becomes available that was not known during the development of the model.”

        I would appreciate it if you would elaborate on [1]. I can understand situations where, say, a simultaneous equations system has a vector of unknown parameters that needs to be instantiated in some way, and the available data can only constrain the parameter estimates to some subset of the logically possible space for that vector. In econometrics that is called a partial identification situation. This creates significant statistical complications in talking about the covariance matrix of the estimated parameter vector, but one can still speak of that in a heuristic way, understanding that deriving it is a harder problem than when we have full identification (the estimated parameter vector is a point, not a set).

        What about [2]? A lot of the denizens say as much, skeptic and warmist alike. But surely there is some sense in which the uncertainty of the estimated parameter vector (whether a set or a point) can be formally propagated to the forecast or projection, so that we can say what uncertainty we attach to the forecast or projection for that reason (as opposed to the Lorenz type reasons).

        Does anyone try to do this?

      • NW,

        My belief is that large Earth system models (GCMs of atmosphere and ocean, and possibly more) are too complex, and built in such a way, that formally propagating parameter uncertainty to the outcome is not possible. It’s not even possible to give a unique, well-defined list of the parameters, as much of the physics is built into the model structure.

        What I write below is not based on firm knowledge but represents only the impression that I have developed reading various sources.

        The present models are an outcome of a long evaluative process. The first models were rather simple and based on fundamental physical equations, but that approach alone cannot describe everything important, or even a significant part of it. The most obvious missing part is the detailed description of the atmospheric processes the grid cannot resolve locally (local weather phenomena); the grid size is far too coarse for that. The basic dynamics of condensation and cloud formation is also still insufficiently understood. To make the model complete, the missing parts are described by parametrizations that are supposed to describe the influence of these phenomena within each grid cell. Empirical research of these phenomena, and detailed models that concentrate on these issues alone, help in determining suitable parametrizations, but much is also fixed based on the results that the model gives. This is tuning of a subsystem that may then be kept fixed, and thus not a part of the further tuning of the full model.

        There are also many other points where the model builders must make choices between possible alternatives being influenced by what they know about the Earth system and how they expect the model to react to their choices. Much trial and error is also involved in that.

        If the amount of data that is used explicitly or implicitly in model development and tuning is extensive enough, the resulting model must have much in common with the real Earth system, and it’s likely to make valid predictions for many situations that were not considered during the model development. Success becomes less likely the further the application goes outside the range of cases taken into account during the development.

        The outcome of all this is a model that may be reasonably good, but whose skill in making predictions/projections remains unknown, because there are no formal ways of determining what that skill is.
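        A toy illustration of the tuning point (an entirely made-up “truth” and model, nothing to do with an actual GCM): a free parameter tuned against data from one regime can look skillful there while telling us little about skill outside that regime.

```python
# Minimal illustration (hypothetical data and model): a free parameter is
# tuned to match observations in one range, then the tuned model is applied
# to a different range where the neglected physics matters more.
import numpy as np

rng = np.random.default_rng(1)

def truth(x):
    # Pretend "real system": mildly nonlinear.
    return 0.8 * x + 0.02 * x**2

def model(x, k):
    # Simple model with one tunable parameter, linear by construction.
    return k * x

# "Observations" available only over a limited range (the tuning domain).
x_cal = np.linspace(0, 10, 50)
obs_cal = truth(x_cal) + rng.normal(0, 0.1, x_cal.size)

# Tune k by least squares against the calibration data.
k_hat = np.sum(x_cal * obs_cal) / np.sum(x_cal**2)

# In-sample error looks small; error outside the tuning range grows.
x_far = np.linspace(20, 30, 50)
err_cal = np.mean(np.abs(model(x_cal, k_hat) - truth(x_cal)))
err_far = np.mean(np.abs(model(x_far, k_hat) - truth(x_far)))
print(f"tuned k = {k_hat:.3f}")
print(f"mean error inside tuning range:  {err_cal:.3f}")
print(f"mean error outside tuning range: {err_far:.3f}")
```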

      • I have to go teach now, but I’ll say more and ask more questions later (or another time). Thank you so much for staying with me and attempting to answer my questions or address my concerns.

  23. David L. Hagen

    Abuse of models
    When 95% of global climate models are wrong (TOO HOT), appealing to global warming to stop coal power plants in developing countries is wrong. It harms the poor, forcing them to pay more for power and hindering their economic development. See:
    U.S. Says It Won’t Back New International Coal-Fired Power Plants

  24. I hope Judith saw some light bulbs click on in the eyes of her audience.

  25. Yes, a brilliant move. To quote from a commenter on another blog, “obama so loved the poor, he made a billion more”.

    It’s amazing how liberals in general who want to save the planet, the poor, or solve all social ills advocate for policies that will do the exact opposite of what they intend. To raise people out of poverty, they need affordable, abundant, and reliable energy, and for now, that means fossil fuels, that is, until our messianic leaders burden the fossil fuel industry with unnecessary and costly regulations so that “energy costs will necessarily skyrocket”.

    See the attached for a perspective on how to change the narrative re: fossil fuels:

    https://uy137.infusionsoft.com/app/linkClick/2703/7c7f3c72f739163a/129191/c5ad575fa4013a2c

    • I saw the pumpjack on the cover page and figured the writer was living in the past.
      Crude oil extraction is not sustainable and that has nothing to do with climate science.

      • Web, you keep spouting this crap, but here is some other info to chew on:

        In the last 100 years, America’s population has tripled. Life expectancy has increased by 70 percent. The productivity of the American people, measured in terms of real per-capita Gross Domestic Product (GDP), has increased by 600 percent. At the same time, we have consumed more than 340 billion barrels of oil, almost 60 billion short tons of coal, and more than 1,090 trillion cubic feet of natural gas.
        These things are linked. Affordable and reliable energy is a crucial factor in making these and many other significant human, social and technological achievements possible.
        Yet even with steadily increasing rates of economic and population growth, as well as increasing energy consumption, the United States today possesses greater recoverable supplies of oil, natural gas and coal than at any point in its recorded history. How can that be? Have vast new sources of hydrocarbon fuels magically materialized beneath our feet over the past 100 years? Or is it possible that, despite what you’ve read, heard and have been told, our continent has always had a lot more energy available to it than some would have us believe?
        The answers lie in the data. In 1980, official estimates of proved oil reserves in the United States stood at roughly 30 billion barrels. Yet over the past 30 years, more than 77 billion barrels of oil have been produced here. In other words, over the last 30 years, the United States produced more than two and a half times the proved reserves we thought we had available in 1980. Thanks to new and continuing innovations in exploration and production technology, there’s every reason to believe that today’s estimates of reserves are only a fraction of what will be produced and delivered tomorrow—not only here in the United States, but across the entire North American continent.

        OIL
        Total Recoverable Resources: 1.79 trillion barrels.
        • Enough oil to fuel every passenger car in the United States for 430 years
        • Almost twice as much as the combined proved reserves of all OPEC nations
        • More than six times the proved reserves of Saudi Arabia

        COAL
        Total Recoverable Resources: 497 billion short tons.
        • Provide enough electricity for approximately 500 years at coal’s current level of consumption for electricity generation
        • More coal than any other country in the world
        • More than the combined total of the top five non-North American countries’ reserves. (Russia, China, Australia, India, and Ukraine)
        • Almost three times as much coal as Russia, which has the world’s second largest reserves.

        NATURAL GAS
        Total Recoverable Resources: 4.244 quadrillion cubic feet.
        • Enough natural gas to provide the United States with electricity for 575 years at current natural gas generation levels
        • Enough natural gas to fuel homes heated by natural gas in the United States for 857 years
        • More natural gas than all of the next five largest national proved reserves (more than Russia, Iran, Qatar, Saudi Arabia, and Turkmenistan)

        People like you have been screaming PEAK OIL for going on 40 years now, yet we have more recoverable oil today than we had 40 years ago, even while consuming at an accelerated rate. I have a lot more “faith” in our ability to develop new technologies to extract the fuels we need, as we continue to develop rational and workable alternatives, e.g., not wind and solar, but possibly nuclear or some bio-engineered fuels. There is more than sufficient fuel supply, and we will continue to discover more.

      • Forgot to include the citation – the info came from the Institute for Energy Research

  26. Issues concerning the integrity and credibility of Climatists have reached the point where the public should no longer be forced to underwrite a view that long-term projection is ‘do-able’ and that climate change is a real issue for high-level policy agendas. Government-funded Climatology has become nothing more than a tool to increase the size and scope of government at the expense of traditional American principles of individual liberty and personal responsibility.

  27. It is sort of like the current political argument surrounding Obamacare. A question has arisen concerning what basic honesty requires in the way of providing the necessary caveats that allow voters to make intelligent decisions about various matters that concern them. Some believe it was dishonest — especially knowing it wasn’t true at the time — to say –e.g., “If you like your doctor, you will be able to keep your doctor, period.”

    The alternative view is that it is the fault of the people for being so stupid as to believe what they are told by a politician. In Climatology, the necessary caveat (or are we just stupid to believe anything Climatists say) should be–e.g., “even if we believe long term projections are ‘do-able’ — say, 50 years out — we’ll have to wait 50 years to find out if we are right.”

    …even if a consensus of climate scientists on alleged anthropogenic climate change actually did exist, it would prove nothing. Frank J. Tipler

  28. The General Circulation Model (GCM) does not include the potential energy of the atmosphere in the calculations. It is 50% of the total energy exchanged; as a result, the GCM as we know it is worthless and needs fixing.

  29. Public money should no longer be used to fund the creation of fantasy GCMs when — as we learned early on with the example of Michael Mann’s ‘hockey stick’ — they are more religious than scientific.

    • Today, modellers, we have naming of
      parts, complex – interactions of whether,
      uncertain – initial – conditions, biome –
      undreamed – feedbacks, cascades,
      clouds, convection and evaporation,
      ebb and flow, rise and fall, gyre and
      gymbol, perturbations, permutations,
      thermal – radiation, oscillations undreamed
      of, Horatio, in your universe.

      Today, modellers, we have naming of
      parts, scratchings on the chalkboard,
      E=MC2 overlayed with children’s
      scribblings.

      with apologies to Henry Reed and kim.

      • We have wobbles and magnetos and fields of influence with immeasurable affect and runaway effect, convolutions and radioactive oscillators all righty from hellish to almightiest, events, particles with frequencies and optical illusions, impacting and influencing, photo, kinetic, diffusion and diffractive, convoluted and colliding, vectors with novel spectroscopy, spinning in waves of pulsing microscopy, affluent inflows and influential up-flows and outflows and no-shows and congruent no-goes, with regular predictions and prognostications, forecasts and projections of perturbations forcing additional contributions to current observations outside past paradigms into alternate spheres of utmost atmos of partial irradiance aplenty anomalously increasing while decreasing oscillators in synchronous ablution hasten to decline.

      • “affluent inflows” – my favourite kind.

  30. “It does not matter who you are, or how smart you are, or what title you
    have, or how many of you there are, and certainly not how many papers your side has published, if your prediction is wrong then your hypothesis is
    wrong. Period.” ~Richard Feynman

    • Wagathon, Totally agree with the Feynman statement but not that GCMs are fantasy. Need to keep working at some % of climate science on the models until they can reflect observations. Lots of complex models work well, but they have to be bounced against observations and their predictions allowed to play out. Chemistry models of reactions have produced major improvements in predicting biological and chemical results. Physics and plasma models inform experimental design in high-density physics and fusion research. The discipline of testing and revision is the key.

      Scott

      • “Need to keep working at some % of climate science on the models until they can reflect observations.”

        The hundred million (billion?) dollar question no one in the climate science/political/industrial complex wants to ask is:

        What if the climate is too complex and chaotic to model?

      • The only way to have real success in science … is to describe the evidence very carefully without regard to the way you feel it should be. If you have a theory, you must try to explain what’s good about it and what’s bad about it equally. In science you learn a kind of standard integrity and honesty. ~Richard Feynman

      • The horse is ailing, and not responding to vets’ nostrums. “Tbsp of turpentine a day,” prescribes a neighbor. Horse fails to recover, “More turpentine,” says the neighbour. As the reeking corpse is hauled away, the neighbour frowns disapprovingly, “Not enough turpentine.”

        The full shaggy dog story is more fun, but the ‘fixes’ to the models reek of ‘more turpentine’. Wasn’t sure if that interjection would have been appreciated without at least a précis!

  31. “Over the past two decades, the climate modeling community has increasingly interlinked the dual objectives of advancing scientific understanding of the climate system and providing actionable projections for decision makers.”

    It’s the fact that modelers have used the latter(CAGW projections) to obtain funding for the former (understanding climate) that has destroyed their credibility.

    It’s like being offered family counseling by a hooker. You can’t ignore how she funds her activities when deciding whether to take her advice on other matters.

  32. Reminds me of “all models are wrong, some are useful” (who said that?)

    • Tamsin Edwards used it, but I am not sure if that was original
      Scott

    • Tamsin Edwards,
      young modeler at U of Bristol.
      Has a web page.
      Scott

    • We all say it. Most of us pay attention to it. George Box used it before Tamsin Edwards first said “Mama”.

    • Steven Mosher

      just about everyone who has ever built or used a model.

      your visual system is a model. your hearing system is a model.
      you are a living breathing walking model.

      • So who you gonna believe? The climate models or your lying eyes?

        btw, if the climate model most accurately reproduces erroneous data, is that any indication of modeler bias? :)

      • Steven Mosher

        neither.

        it’s not a matter of belief.

      • Steven, “it’s not a matter of belief.” Everything is a matter of belief, trust or faith. Webster believes he has a valid model because it matches data, within a margin of error, against 5 other sources of data with their own margins of error, after a little scaling and tweaking. You believe that land use doesn’t have a significant impact on GMT (+/- 0.05 C maybe), yet you look at the shift in DTR and the amplification as if that is just the way it should be.

  33. Group discussion session 3: The socio-political roles of climate models and the roles of values in climate models [Facilitator: Wendy Parker]

    Dr. Curry, what is meant by the word “values” in this description?

    • Perhaps something like this:

      “But the standard ways of using probabilities to separate ethical and social values from scientific practice cannot be applied in much of climate modeling, because the roles of values in creating the models cannot be discerned after the fact—the models are too complex and the result of too much distributed epistemic labor. I argue, therefore, that typical approaches for handling ethical/social values in science do not work well here.”

      “What value does one assign to economic growth compared to the degree to which we would like to avoid various environmental risks? In the language of decision theory, by social values we mean the various marginal utilities one assigns to events and outcomes.” – Eric Winsberg,

      Values and Uncertainties in the Predictions of Global Climate Models
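      For readers unfamiliar with the decision-theory wording, here is a toy illustration (all probabilities and utilities invented) of how the chosen “marginal utilities”, i.e. the social values, can flip which option an analysis recommends even when the physical probabilities are held fixed.

```python
# Toy decision-theory illustration (all numbers invented): the same outcome
# probabilities can favor different policies depending on the utilities
# (the "social values") assigned to economic growth vs. avoided risk.
p_bad_with_policy = 0.05      # hypothetical probability of severe impact, with policy
p_bad_without = 0.20          # hypothetical probability of severe impact, without

def expected_utility(p_bad, u_good, u_bad, growth_bonus):
    return (1 - p_bad) * u_good + p_bad * u_bad + growth_bonus

for label, (u_good, u_bad) in {
    "growth-weighted values": (1.0, -2.0),
    "risk-averse values": (1.0, -20.0),
}.items():
    eu_policy = expected_utility(p_bad_with_policy, u_good, u_bad, growth_bonus=-0.5)
    eu_no_policy = expected_utility(p_bad_without, u_good, u_bad, growth_bonus=0.0)
    choice = "adopt policy" if eu_policy > eu_no_policy else "no policy"
    print(f"{label}: EU(policy)={eu_policy:.2f}, EU(none)={eu_no_policy:.2f} -> {choice}")
```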

  34. Meghan McArdle has a very interesting article. Her focus is Obamacare, but it applies to climate models well. Basically, her premise is that wonks often find themselves in “Expertopia” – a place where “everyone knows” the difference between the public statement and the reality. Everyone in the wonk community knew that under Obamacare you couldn’t really keep the health insurance policy you had or that everyone would really see premium prices drop and it would cover the uninsured. The math was obvious. But the experts forgot about the general public, a place where people only hear about an issue in sound bites during an election, or from the media. This group didn’t analyze the proposal, so they actually believed what they were told- you can keep the plan you have now, you will see premium prices drop.
    That real world is angry right now- partly because they were ill-served by the assumptions of the experts.
    http://www.bloomberg.com/news/2013-10-30/what-everyone-knew-about-obamacare-and-wouldn-t-say.html

    I think climate campaigners are running into this now. There is a whole world of people whose only knowledge of climate models is a colle

    • jeffn,

      “But the experts forgot about the general public, a place where people only hear about an issue in sound bites during an election, or from the media.”

      The experts didn’t forget about the general public. The “experts” in favor of Obamacare counted on the ignorance of the public in making the false claims. They also counted on the fact that the “experts” in the media, who also knew full well that the claims were lies, would never call them on it.

      This recent spate of MSM articles wondering whether the administration “knew” its claims were false, is just their way of avoiding the real question: Why didn’t they, the media, who also knew the claims were false, inform the public of the facts when there was still time for the voters to do something about it?

      Everybody politically aware knew Obama was lying. Everybody also knew the public in general was lapping it up. Conservatives fought to tell the truth and were called racists for their efforts (and still are). Progressives, “moderates” and “independents” all stayed silent, because the one thing they can all agree on is that the public is too stupid to make such an important decision for itself.

      This is just one area of blatant lies by Obama and the other progressives running the government. And the short-term interest in reporting the issue, because of the crash of the Obamacare website, will pass. Don’t worry, all you drones, you can go back to pretending that Obama is an honest “moderate” like yourselves shortly.

      • And the answer is simple. The people losing their individual high-deductible major medical policies are all small business people who vote GOP anyway. Who cares about the Indian guy running the 7-11?

      • No need for web drones:
        Obamacare, Obamacare, Obamacare.

      • Harold (and Willard),

        The real shock comes when the guy running the 7-11 finds his premium may have spiked a bit, but his deductible has gone from $1500 to $6000.

        The wonders of centrally planned healthcare that nobody tells you about:

        “A recent World Bank study finds that in the United States, only 20 percent of health-care spending comes in the form of out-of-pocket expenses paid by consumers. In Singapore, it is 88 percent and in Switzerland 72 percent. But even the single-payer systems of Canada and the United Kingdom feature more out-of-pocket spending by consumers, 49 percent and 53 percent respectively. How is it that in countries with “free” universal health care consumers pay more out of pocket than they do in the United States? The short answer is that treatment in single-payer systems tends to be kind of terrible, which is why a tenth of British subjects use private plans rather than the NHS. And a significant share of Britons who use the NHS must be turning to private care fairly often, since it is estimated that the typical medical specialist in the U.K. supplements his income by 50 percent moonlighting in private practice. In Canada, about 75 percent of people carry supplementary private insurance, and about 28 percent of all health-care expenditures happen in the private sector.”

        http://www.nationalreview.com/article/362443/obamacare-worst-case-scenario-kevin-d-williamson

      • And that’s going to be the next shoe. Just like AARP and others offer “Medicare supplements”, there’s going to be a new market for “Ocare supplements”.

        And the dirty little secret about Medicare supplements is that they actually don’t pay out very much, they’re just very good at billing Medicare in a way that gets more reimbursement.

        This is not going to end well.

      • sorry my original got cut off. Probably by me. But, yeah, the politicians knew what they were saying was bunk. But I think Meghan’s premise was that the wonks compounded the problem, even the ones who opposed Obamacare. They thought it was so obvious that it wasn’t worth discussing, so they focused on the minutia.
        That let slimy politicians get away with a fast one.
        I’ll be even more blunt: the wonky back and forth about the models obscures the fact that they were used to support a narrative and that narrative wasn’t true. It doesn’t matter that models have actual value, if you aren’t able to admit that. I think Judith gets crap from the team because she’s a credible climate scientist willing to say “this wasn’t true” instead of simply engaging in the diversionary wonkery where “everyone knows models are wrong, but have value”
        Don’t forget there are millions of people who only hear about this issue from the local paper and they’re all saying “dude, you said it would get warmer every year and you said there’d be more hurricanes every year.”

      • Harold:
        “The people losing their individual high-deductible major medical policies are all small business people who vote GOP anyway. Who cares about the Indian guy running the 7-11?”

        Good comment. Winners and losers. Add to the list family owned restaurants, many small businesses in your local downtown, farmers, independent truck drivers.

        I would’ve asked that my high-deductible health plan (HDHP) remain unaffected, as well as those of my clients. We had perhaps one star in all of the healthcare problems arena, but we’re waiting to find out how much HDHPs will be affected.

        I think the best area for hope is to try new things, open up more options. The IRS tax code can be looked at again. Does anyone know why we take roughly 10% of your income before deductions and exclude out-of-pocket medical costs below that from being deductible? << A generalization that is subject to exceptions.

        Why can some people have Health Spending Accounts and others cannot? Why do some get a direct (every time) write-off of their health insurance premiums while others are subject to the 10% rule mentioned above?

        Of my clients, the ones most likely to have medical costs exceeding the 10% number are old retired ones. Our effective target ends up costing old retired people more tax money in some cases.

      • The biggest losers:

        NerdWallet estimates for 2013:

        1. 56M Americans under age 65 will have trouble paying medical bills:

        – Over 35M American adults (ages 19-64) will be contacted by collections agencies for unpaid medical bills

        – Nearly 17M American adults (ages 19-64) will receive a lower credit rating on account of their high medical bills

        – Over 15M American adults (ages 19-64) will use up all their savings to pay medical bills

        – Over 11M American adults (ages 19-64) will take on credit card debt to pay off their hospital bills

        – Nearly 10M American adults (ages 19-64) will be unable to pay for basic necessities like rent, food, and heat due to their medical bills

        2. Over 16M children live in households struggling with medical bills

        3. Despite having year-round insurance coverage, 10M insured Americans ages 19-64 will face bills they are unable to pay

        4. 1.7M Americans live in households that will declare bankruptcy due to their inability to pay their medical bills

        – Three states will account for over one-quarter of those living in medical-related bankruptcy: California (248,002), Illinois (113,524), and Florida (99,780)

        5. To save costs, over 25M adults (ages 19-64) will not take their prescription drugs as indicated, including skipping doses, taking less medicine than prescribed or delaying a refill

        http://www.nerdwallet.com/blog/health/2013/06/19/nerdwallet-health-study-estimates-56-million-americans-65-struggle-medical-bills-2013/

        Yes, but Joe the Plumber.

      • In 2013 over 20% of American adults are struggling to pay their medical bills, and three in five bankruptcies will be due to medical bills. While we are quick to blame debt on poor savings and bad spending habits, our study emphasizes the burden of health costs causing widespread indebtedness.

        http://www.nerdwallet.com/blog/health/2013/06/19/nerdwallet-health-study-estimates-56-million-americans-65-struggle-medical-bills-2013/

        Yes, but Joe the Plumber.

      • http://finance.yahoo.com/blogs/daily-ticker/obamacare-unintended-losers-152316725.html

        “These “losers” are primarily people who buy their own health insurance rather than have it provided by an employer. Many now find their plans will be canceled because they fail to meet the minimum coverage requirements under Obamacare. These plans offer “bare bones insurance…usually catastrophic care…and beginning Jan. 1 insurance companies will not be allowed to offer these very plans,””

        As I said in my prior post, an HDHP combined with an HSA was a Star of the current problem situation. They appear to be receiving unintentional fallout in some cases. Perhaps corrections will now be made. Sorry to stray so far off topic.

      • “These “losers” are primarily people who buy their own health insurance rather than have it provided by an employer. Many now find their plans will be canceled because they fail to meet the minimum coverage requirements under Obamacare. ”

        Wait till next year. The delay given to businesses is hiding the real dirty secret. Plans that people have through their employer will also be voided as the grandfathering will be up in 2013. Expect roughly 93 million people to be forced into the exchange.

        Think of all the gay males who will have to buy insurance with maternity benefits.

    • Matthew R Marler

      jeffn: Everyone in the wonk community knew that under Obamacare you couldn’t really keep the health insurance policy you had or that everyone would really see premium prices drop and it would cover the uninsured. The math was obvious.

      Wonks and politicos who opposed Obamacare warned publicly of its liabilities, e.g. who paid the costs. Wonks and politicos who supported Obamacare simply lied about it, describing it as benefits without costs.

      The anguish we read about now comes principally from people who only now realize that the costs have been on them all along.

    • A fan of *MORE* discourse

      jeffn asserts Experts understood [the healthcare] math was obvious.

      Statement by jeffn, literature-link by FOMD.

      Summary  The evolution of Hillary/Romney/Obama-Care is irreversible, for reasons that computer folks appreciate, that originate in the sobering 21st century technological reality of Big Data Healthcare too.

      Conclusion  20th century political ideologies are inadequate to 21st century medical realities. Conservatism in particular must concretely address the new medical realities, or perish as a viable political movement.


  35. Tomas Milanovic

    Reminds me of “all models are wrong, some are useful” (who said that?)
    .
    I wish that people would stop quoting this stupidity as if there was some hidden wisdom in it.
    If all models were wrong then all would be useless, because the basis for action is, and has always been, a correct prediction.
    A wrong model gives wrong answers, and wrong answers can guide neither science nor policies.
    In reality there are correct (i.e. not wrong) models, like QED or GR, which allow one to formulate predictions validated by observation with unsurpassed accuracy.
    These models are by definition also useful.
    .
    Then there are domain-bounded models (Newtonian mechanics or turbulence models are examples) which are wrong in general but approximately correct in a bounded domain.
    Of them one can say that they are sometimes wrong, and then useless, and sometimes approximately correct, and then and only then useful.
    .
    The category of wrong and useful doesn’t exist.

    • I think that in that quote “wrong” means “not rigorous”, but useful means “close enough”. IOW, Newtonian mechanics is wrong, but close enough to design airplanes by. Trying to use quantum mechanics or relativity to design airplanes is slightly more accurate in principle, but pointless and excessively difficult in practice.

      The relativistic correction for the mass of an airplane flying at 600 mph is something like 10^-12. Basically immeasurable. But it is more correct to use it. In a case like that, as long as we have a good idea how “wrong” it is, the statement is technically true. Newtonian mechanics is wrong but useful.

      If only climate models were this straightforward and well behaved.
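      For what it’s worth, the relativistic figure above is easy to check. A quick back-of-envelope calculation (standard special relativity, nothing climate-specific) gives roughly 4 x 10^-13 at 600 mph, the same ballpark as the 10^-12 quoted:

```python
# Back-of-envelope check: fractional relativistic mass increase (gamma - 1)
# for an airliner at 600 mph.
v = 600 * 0.44704          # 600 mph in m/s (~268 m/s)
c = 299_792_458.0          # speed of light, m/s
beta2 = (v / c) ** 2
gamma = 1.0 / (1.0 - beta2) ** 0.5
print(f"gamma - 1 = {gamma - 1:.2e}")   # on the order of 4e-13
```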

    • Steven Mosher

      The category of wrong and useful doesn’t exist.

      It sure does.

      My car has a model of how far it can travel on the remaining gas in the tank.

      1. The sensor that detects the amount of gas remaining is wrong almost
      100% of the time unless the car is perfectly flat

      2. The model of rate of burn is wrong.

      3. The prediction of miles to empty is wrong and does not account for

      a) slope of the road ahead
      b) windspeed and direction ahead
      c) changes in rolling friction

      Yet this model is useful and works. As long as I follow its advice I have never run out of gas.
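      A sketch of the kind of estimate such a fuel computer makes (the numbers and the safety margin below are made up; this is not how any particular car actually computes it):

```python
# Sketch of a "wrong but useful" estimate: it ignores slope, wind and
# friction changes, so it is never exactly right, yet following it keeps
# you from running out of gas.
def miles_to_empty(gallons_remaining, recent_mpg, safety_margin=0.9):
    # Assume the recent average economy persists; keep a margin for the
    # effects the model does not represent (hills, headwinds, sensor error).
    return gallons_remaining * recent_mpg * safety_margin

print(miles_to_empty(gallons_remaining=3.2, recent_mpg=31.0))  # ~89 miles
```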

      • Tomas Milanovic

        The category of wrong and useful doesn’t exist.

        It sure does.

        My car has a model of how far it can travel on the remaining gas in the tank.

        No, wrong and useful doesn’t exist.
        You also confuse wrong with correct in a bounded domain, and that’s why this quote is one of the stupidest things there ever were.
        If your car “model” was wrong, then you would be out of gas all the time.
        The simple fact that this is rarely the case shows that the model is approximately correct, and that neglecting negligible things, and assuming that many other things average out, is a reasonable expectation in many cases.
        It works like the simplest weather prediction model: “the weather tomorrow will be the same as today”.
        You would be surprised to see the complexity and correctness of the proof that shows that this “model” is actually reasonable.
        Things get much hairier when the time scales increase.
        .
        My car has the same “model” and I make the same observation that the model is mostly right (i.e. not wrong).
        However it is only correct for a limited domain. If you make a trip that goes only uphill for several hours, you will understand what limited validity means.
        Wrong models by definition make wrong predictions, and wrong predictions can never be useful.

      • Too bad climate models aren’t as simple as a gas sensor. You can’t show that climate models are useful for anything.

      • SM – if you will just unplug that gas gauge for a year, estimate how much gas is left in your tank as you go about your business – then MAYBE you will finally appreciate the difference between an estimate and a measurement. In your case, I’m not sure, but you never know.

      • And to think us old timers had to get by with just an E and F and a curve between them to guess when to fill up. ;-)

    • it is a mistake to argue semantic absolutes with CAGW obscurantists. That is why they use them. They obscure the issue for the hoi polloi, and divert the more knowledgeable skeptics into semantic dead ends.

      Who cares if models are “useful”? In the context of the climate debate, the question is – useful for what? It’s the qualifiers the obscurantists always leave out.

      it’s like arguing the definition of “climate change” or “fairness”. They are all content free terms, until they are delimited.

      A Humvee is useful for driving from Philadelphia to New York. It is useless for traveling from New York to Paris.

      Put another way, an honest policy maker needs a GCM to make climate policy like a fish needs a bicycle.

      • Steven Mosher

        Policy makers decide what is useful for them. not you.

        I love people who think they get to tell policy makers what they can and cannot consider.

      • Gary,

        My thoughts exactly. I’ve noted for a long time now the semantic games on the part of the warmists. Global warming becomes the unfalsifiable “climate change.” Storms become “extreme weather” and “dirty weather.” Carbon dioxide becomes the dire-sounding “carbon,” with its attendant images of nasty black pollution.

        Brilliant in its way, I must say. They can no longer win on the science, so it becomes a propaganda campaign. Joseph Goebbels had nothing on some of these folks.

      • Godwin!!

      • “Godwin”

        I thought hard about that last part, and left it in anyway. I think it’s apt. Instead, Joshua, why not respond to the substance of my remarks? You see no semantic gamesmanship in those examples?

      • Chief Hydrologist

        “A lie told often enough becomes the truth.”

      • I think that comparing people to Nazis is rather over-the-top, PG.

        Lysenko!

      • “Policy makers decide”

        Spoken like a true statist.

        Andrew

      • Joshua, I don’t like the comparison either.

      • Steven Mosher

        Andrew wants policy makers who dont decide.

      • “Andrew wants policy makers who dont decide.”

        I want individual citizens to decide. I don’t want inbred statist power grabbers to decide.

        Andrew

      • M. Hastings –

        Just wanted to acknowledge your comment.

      • kim’s corollary to Godwin’s Law is that the first one to cry ‘Godwin’ on a thread about tyranny is a useful idiot.
        ==================

      • kim’s corollary to Godwin’s Law is that the first one to cry ‘Godwin’ on a thread about tyranny is a useful idiot.

        kim, with his/her usual keen insight, makes an excellent point. Because comparing people to Nazis is so non-idiotically useful. Look at how much has been accomplished, non-idiotically, in the hour since PG compared millions of people to Nazis.

        Think of how much non-idiotically useful might be accomplished if in every thread every comment had people comparing millions to Nazis.

        Yes, indeed, the real problem here is that my comment of Godwin! just slowed down the degree of non-idiotic usefulness.

      • > it is a mistake to argue semantic absolutes with CAGW obscurantists.

        Way better to wait for anti-Obamacare non-drones to argue semantic absolutes.

      • “Over the top.”

        Yes Josh, you’ve said that twice. Perhaps you have a point. Now why not answer my question? Do you or do you not agree that warmists are playing semantic games?

      • Under the Big Top lion charmers crack quips, and the lions just grin.
        =============

      • Ooh, that was supposed to be ‘lyin’ charmers’.
        ======

      • R. Gates aka Skeptical Warmist

        Steven Mosher said:

        “Policy makers decide what is useful for them.”

        —–
        True. And we (theoretically) get to decide if policymakers are useful for us.

      • Chief Hydrologist

        Not to be confused with – in the shell game of semantics – useless idiots.

      • “Joshua, I don’t like the comparison either.”

        Not fair, is it? They’re not really Nazis after all, despite their authoritarian impulses.

        “Denier” of course is just fine with the warmists, not just despite its obvious appeal to holocaust denial, but because of it… I’m sure Josh objects to that term every chance he gets.

      • Steven Mosher,

        You just can’t help yourself, can you?

        Your comment: “Policy makers decide what is useful for them. not you.

        I love people who think they get to tell policy makers what they can and cannot consider.”

        has absolutely nothing to do with my comment: “…an honest policy maker needs a GCM to make climate policy like a fish needs a bicycle.”

        I say the policy makers don’t need useless (for the advertised purpose) GCMs, and you transform that into my saying they can’t use them.

        (You also neatly dodge the point of my comment, that you never describe what the GCMs are supposedly “useful” for when making your obscurantist pitch.)

        But more to the point let me put it this way:

        “I love elitists who think voters have no right to tell their elected policy makers what they should and should not consider.”

        Fortunately, we do not yet live in the kind of society you clearly desire where self appointed elitists like yourself decide for the stupid voters what public policy should be, without any of that annoying speech and voting stuff.

        I, in fact, have the right to tell “policy makers” any damn thing I like.

        It is so much fun when the progressives’ polite facade slips and you see the statist demagogue that lurks beneath.

      • http://www.youtube.com/watch?v=fXpmkHrTwtA

        Mosh a-ah
        Savior of the universe
        Mosh
        He save everyone of us
        Mosh
        He’s a miracle
        Mosh
        King of the climate model

        He’s for only one of us
        Speaks for every one of us
        He save with a mighty mod’l
        Every man, every woman
        Every child, with a mighty
        Mosh

        Just a man
        With a G C M
        You know he’s
        Nothing but a man
        But mods can never fail
        No one but the most elite
        May find the Climate Grail
        …Oh..Oh……..Oh..Oh….

        Mosh

      • Very clever! I wonder how many will get the Flash reference.

      • GaryM,

        + many

        You’ve summed it up very well. It needs to be said.

  36. Tomas Milanovic

    I think that in that quote “wrong” means “not rigorous”, but useful means “close enough”.
    .
    This is exactly the point. If you make these substitutions then you obtain a statement that makes sense (and corresponds to my second category of models, whose predictions are only valid in a bounded domain).
    That’s why the original quote is stupid – one needs to transform it first to obtain something that is acceptable.
    Unfortunately not everybody will make that transformation and then you’ll get only confused people.

    • Exactly. Only the absolute simplest of models (i.e. simple conservation laws) are unlimited in domain. Everything else is limited in domain. And modelers have a way of talking about models as unlimited oracles rather than limited approximations.

      This is somewhat analogous to the way some people talk about data without talking about error bars at the same time. If you talk about the one without talking about the other, you’re presenting incomplete (and probably misleading) information.

  37. Dr. Curry:
    Your advice about GCMs reminds me of this diagram:

    http://wattsupwiththat.files.wordpress.com/2011/06/chaos_training.png

    Full article:
    http://wattsupwiththat.com/2011/06/13/the-chaos-theoretic-argument-that-undermines-climate-change-modelling/

    Tamino argues against Edmonds here:
    http://tamino.wordpress.com/2011/06/14/chaos/

    “The “weather” in this system is unpredictable, but the climate is not: it’s stable.” – Tamino.

    Weather/Climate. Does nature care about semantics and follow our descriptions of time?

    • Tomas Milanovic

      Thanks for the link Ragnaar
      I have several comments.
      .
      1) Edmonds makes the same historical error of attributing the discovery of chaos to Lorenz. In reality it was Poincaré, 60 years before Lorenz, and chaos was not discovered in fluid dynamics (or weather) but in astronomy (the 3-body problem). Yes, gravitationally interacting systems are chaotic.
      .
      2) Edmonds develops the temporal chaos theory, i.e. the behavior of systems where the single variable is time. Even though temporal chaos has been rather well understood and studied since Poincaré 100 years ago, it doesn’t apply to weather and climate. Weather and climate are examples of spatio-temporal chaos, where the variability happens both in space and in time.
      In a very simplified way, temporal chaos are curves with time on the horizontal axis.
      Spatio-temporal chaos are spatial patterns that vary with time, e.g every point of the space is a chaotic oscillator. This can’t be represented by curves but by surfaces or volumes.
      .
      3) The “opponents” came again with the idiotic example that one can predict that temperature in winter will probably be lower than in summer, which is right. But then they say that the chaotic behavior doesn’t matter, which is idiotic.
      For instance the logistic equation X(n+1) = 4·X(n)·(1 − X(n)) is chaotic.
      By analogy with weather, I can’t predict what the temperature will be for a given n (a given day).
      Yet I am able to predict that the system will be found much more often near 0 than near 0.5, which is equivalent to predicting that temperatures in winter will mostly be lower than in summer (see the numerical sketch after this comment).
      How is that possible if the system is unpredictable?
      Well, in this particular case it is because the system is ergodic and has an invariant probability distribution function (PDF). It can even be explicitly computed, and it indeed shows that the probability of x = epsilon is greater than the probability of x = 0.5.
      So the fact that one can predict some probabilities of some particular inequalities (like average T winter < average T summer) proves nothing, and certainly not that chaotic systems are predictable (even on average).
      Besides, this particular example applies only to temporal chaos and not to spatio-temporal chaos.
      An equivalent for spatio-temporal chaos would be to be able to predict the probability that I will have 2 El Ninos in a row, or 2 La Ninas in a row. Or 3. Or N.
      Of course we cannot predict that, and it is even possible that such PDFs don't exist at all.
      .
      4) Even more idiotic is the babbling of some "opponents" about "initial value problem" and "boundary value problem".
      To avoid writing 2 pages, I will just say that if a system is chaotic then it is chaotic for all and any boundary values. There is no verbal magic that makes a chaotic behavior disappear just by calling it a "boundary value problem".
      And as far as energy conservation constraints go, it is really a red herring. Of course all chaotic systems conserve energy, like anything else, and this "constraint" is irrelevant to their dynamics and predictability.
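      A short numerical sketch of the logistic-map example in point 3 (illustrative only; it says nothing about GCMs): individual iterates are unpredictable, but the long-run distribution is not, and it piles up near 0 and 1 exactly as described. For x(n+1) = 4·x(n)·(1 − x(n)) the invariant density is 1/(pi·sqrt(x(1−x))), which diverges near 0 and 1 and equals 2/pi (about 0.64) at x = 0.5.

```python
# Numerical check of the logistic-map example: iterate the map from a
# generic starting point and compare the empirical density near 0 and
# near 0.5 with the analytic invariant density.
import numpy as np

x = 0.123456
samples = []
for _ in range(200_000):
    x = 4.0 * x * (1.0 - x)
    samples.append(x)
samples = np.array(samples)

hist, edges = np.histogram(samples, bins=[0.0, 0.05, 0.45, 0.55, 0.95, 1.0], density=True)
print(f"empirical density near 0:   {hist[0]:.2f}")   # large (analytic density diverges)
print(f"empirical density near 0.5: {hist[2]:.2f}")   # close to 2/pi ~ 0.64
```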

  38. The government could burn through a billion more dollars filling more filing cabinets with worthless AGW junk science, but the real harm global warming fearmongering is doing is cutting the growth of GDP. That amounts to hundreds of billions of dollars and millions of jobs lost, the futures of the next generation destroyed, and poorer nations deprived of the benefits of modernity — and that KILLS (how many have died from malaria because of the global ban on DDT? A million?).

  39. “They are a central technology of climate science”

    They certainly should be. I can say that with some authority, as one who pioneered modelling in the aerospace field. Computer simulation has made possible the integration of all knowledge of a complex system like climate. What would I expect to see in the landscape of simulation in climate science? I would expect to see vigorous discussion among the various groups behind the 20 or so IPCC models. Does such vigorous discussion occur? The IPCC does not expose those transactions, so we don’t know. We don’t know what, if anything, is going on. Whether there is contention over particular equations? Whether they agree on a particular symbolic notation for their work?

    As far as I am aware, there has never been a public dissection of any model. So how can anyone outside the priesthood know what is going on? Modellers have to build up confidence in their work, otherwise it will have little impact.

    • The metric of the peculiar privacy of this self-destructive feedback chamber is the degree(intended) to which it has gone wrong before being noticed, or being called upon the error.
      ================

    • Curious George

      Why are modelers concentrating on a 100-year climate forecast instead of a 100-hour weather forecast?

      • +1 That’s what I have been saying for a long time now.

      • R. Gates aka Skeptical Warmist

        Apparently you two have no understanding of the different purposes for each and would probably use a screwdriver to drive a nail in thinking that one tool “fits all”.

      • The point R Gates is that a 100 year global forecast serves no useful purpose that I can see and such forecasts are crapola in any case.

      • Curious George

        Dear Sceptical Warmist, please supply me with a list of equations which have to be different in weather vs. climate models. I partly agree with you in the sense that climate models have to be severely simplified to run in today’s supercomputers. That does not make them correct. Do you agree that a possible way would be to develop a reliable weather model first, and only then to simplify it?

    • Dr. Biggs,

      In An Alternative Theory of Climate Change (pdf), you highlight:

      The averaged mean surface temperature increased uniformly from 1910 until 1940 – a total of 0.45 C.

      This passage is below a dramatic chart of the Annual Global Temperature Anomaly, from Australia’s BOM. A bold trendline traces the 11-year running-average of the data … the ‘star’ of the chart, actually.

      It ‘bothers’ me some, that CO2 contributions from industrial activity and internal combustion engines increased strongly through the 1910-40 era, yet the temperature increase response was quite linear. Indeed, the anomaly rises a little steeper, in the earlier part of this era.

      More generally, the entire running-average has the appearance of linear pieces jointed together. This whole trendline is long on relatively straight sections, and short on curves or sweeps.

      Can you comment on these seemingly-unusual features?

      Ted

      • Thank you for quoting my own paper back to me. The 11-year moving average was a good choice because it clearly showed the trend of temperature change. But such clarity is not without a price. Part of the price is that you lose possibly real information, particularly higher-order derivatives of the function. Yes, you have noticed this and I thank you. But remember, the purpose of my paper was to discover and explain climate change. I was struck by both the linearity of the change and its abrupt halt in 1940. This led me to the conclusion of the on/off nature of climate change, which cannot be replicated by normal continuous differential equations. Is this why the IPCC models don’t replicate past climate? The on/off nature is a separate problem. Why? I have seen similar behavior in quantum mechanics. Temperatures rise and fall in steps and stairs. It seems crazy to apply this to the troposphere, because we normally see this as a continuous process. But the evidence is compelling. In the case of CO2, it has many degrees of freedom and can vibrate in several IR modes. Its specific heat is not much higher than that of the other gases, N2 and O2. Specific heat is usually measured in the 20 C to 25 C range, which also covers troposphere temperatures. What if most CO2 in the troposphere is already on the lowest step? How can it absorb more heat?

      • Dr. Biggs replied;

        The on/off nature [of climate change] is a separate problem. Why? I have seen similar behavior in Quantum mechanics.

        Because CO2 is a trace component of the atmosphere, I often wonder whether there could be semiconductor doping effects going on.

        The semiconductor part of semiconductor electronics is actually just the ‘straight-man’ in the act. The script is actually written for the dopant, and the action is created & controlled by it.

        Dopants respond differently, to slight variations of their environment (the semiconductor matrix). Slight variations of dopant itself, also produce different effects.

        One of the key effects sought & exploited in electronics is, of course, switching behavior – which semiconductors are inclined toward, naturally. Could CO2 be acting as a dopant?

        Ted

      • It’s an odd sort of switch that flips only after the main current changes direction … hardly useful. Or influential. Or controlling.

      • Brian H,

        A switch is (and models as) an oscillator with certain characteristics.

        Oscillators & switches, amplification & feedback … that pretty much covers the global conversation.

        Those four are all either forms of each other, or their attributes & effects.

        Climate is just a big ball o’ scraps o’ string …. oscillations & switches, feedbacks & amplifications. Pick a loose end – any of thousands – and pull on it. ;)

        Ted

  40. We can look at the data instead if you want to throw away the models. The last 40 years (yes, including the whole pause) exhibit a transient climate response equivalent to 2.5 C per doubling. While this supports the models (by the way), it can be used alone to gauge potential global impacts of increasing CO2 further, also noting that the transient response is a lower estimate of the equilibrium response. Do we also throw out the data or admit it tells us something about what to expect?
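    For readers who want to see the arithmetic behind that kind of estimate, here is a back-of-envelope version with illustrative numbers (a CO2-only scaling that ignores other forcings, so it is only a rough proxy for a formal transient-climate-response calculation, and the numbers below are round figures, not the commenter’s actual calculation).

```python
# Back-of-envelope "C per doubling" estimate from assumed 40-year changes.
import math

delta_T = 0.7                        # assumed warming over ~40 years, deg C
co2_start, co2_end = 330.0, 400.0    # approximate ppm, ~1973 vs ~2013
doublings = math.log2(co2_end / co2_start)
tcr_like = delta_T / doublings
print(f"{doublings:.3f} doublings -> {tcr_like:.1f} C per doubling")
```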

  41. ” As a consequence, climate models are of significant interest to scholars in philosophy, history of science, and science and technology studies.”

    Seriously? Science is committing hara-kiri because of its inability to produce answers bounded by uncertainty. How about, just say you have no clue, and wait until you do.

    I swear, if it weren’t for AGW, they would have to invent killer Asteroids as the death of humankind. Oh, Wait.

    • The GCMs seem to fall into the category of being of academic interest only, as their outputs have failed verification and validation under most forms of sound scientific practice.

    • Ed,

      “…..because of its inability to produce answers bounded by uncertainty. How about, just say you have no clue, and wait until you do.”

      I think you probably mean “unbounded by uncertainty”. But shhh, don’t give the game away! Judith doesn’t want to be seen as a ‘merchant of doubt’!!

  42. Consensus Climate Science has Proved that you cannot fool all the people all the time, but you can fool most of the people most of the time.

  43. Dr. Strangelove

    Are GCMs the best tools? Yes, the best tools for propaganda especially if real data do not support your hypothesis. But if you’re trying to do science, models are only as good or as bad as the modelers. If the modelers are dumb or biased, throw the GCMs in the trash can. It’s just wasting money to fool the public.

  44. As the ‘workshop’ draws to its close, I’m looking forward to seeing the published plans from the various participants about what they will be doing differently as a consequence of their attendance. And a checkpoint on how they have got on with those plans in 3, 6 and 12 months’ time.

    Can somebody remind me of where I should be looking?

    • Can somebody remind me of where I should be looking?

      I could make a suggestion if you’d like.

      • Joshua,

        Is there really room for two heads there?

        Especially if one seems to be very big.

        Live well and prosper,

        Mike Flynn.

  45. Judith, you might suggest replacing the “not fit for purpose” modeling approach with the one outlined in various posts on my blog at
    http://climatesense-norpag.blogspot.com
    Here’s a quote from the latest post
    “In summary the projections of the IPCC – Met office models and all the impact studies which derive from them are based on specifically structurally flawed and inherently useless models. They deserve no place in any serious discussion of future climate trends and represent an enormous waste of time and money. As a basis for public policy their forecasts are grossly in error and therefore worse than useless.

    2. A Simple Rational Approach to Climate Forecasting based on Common Sense and Quasi Repetitive- Quasi Cyclic Patterns.

    How then can we predict the future of a constantly changing climate? A new forecasting paradigm is required.

    It is important to note that in order to make transparent and likely skillful forecasts it is not necessary to understand or quantify the interactions of the large number of interacting and quasi-independent physical processes and variables which produce the state of the climate system as a whole as represented by the temperature metric.”
    This post can also be found at
    http://wattsupwiththat.com/2013/10/29/commonsense-climate-science-and-forecasting-after-ar5/
    The blogosphere is well ahead of the Academics in all this- as I think you realize. The academic community have been more interested in defending and propagandizing the establishment AGW meme than in objective science.

  46. M Mann:
    “…but also sub-surface ocean temperatures) is not as high as during Medieval times, i.e. during what they term the “Medieval Warm Period” (this is a somewhat outdated term; The term “Medieval Climate Anomaly” is generally favored by climate scientists because of the regionally variable pattern of surface temperatures changes in past centuries–more on this later).”

    In the lab we used to call it flattening the top:
    http://autumndoucet.files.wordpress.com/2012/06/12a-flattening-the-top4.jpg

  47. How about diverting most of the funds from models into technology that can accurately measure the radiation budget?

    With that settled, we wouldn’t need*** to know exactly how the earth’s complex climate system responds to rising CO2. But we would be able to see how much the earth’s energy balance is affected.

    Surely that will bring far more bang per buck? Certainly as far as policy is concerned.


    *** It would of course be preferable if we did know.

  48. Climate models are only as good as the data inputted into them. An important part of input data is choosing the part of the temperature curve to extrapolate into the future. It is not too much to ask that such a starting point should be the current temperature record, not some other point ten years back. Unfortunately this is something that the architects of CMIP5 completely ignore. Their future projections of temperature that look like a duster are an extrapolation of the nineties temperature record, poorly reproduced and inaccurate. It is actually hard to understand why their starting position has an upward slope. One would guess they have faith that the world is still warming, never mind that pesky twenty-first century pause. That is either stupid or dishonest or both. The actual temperature record since 2002 has been a horizontal straight line, and that is the latest world temperature you have to use for extrapolation. But instead of this they create a temperature record of their own from the nineties on that bears no relation to reality. Among other things, the super El Nino of 1998 is squashed down and its true height is hidden. It is not the result of any greenhouse warming and does not belong in the dataset whose purpose is to extrapolate greenhouse warming into the future. There is a further problem with using the nineties as part of this starting platform, namely the fact that there was no warming in the eighties and nineties. I pointed that out in my book and now the big three of ground-based temperature, GISTEMP, HadCRUT, and NCDC, all agree with me. They secretly changed their data for this period last fall and did not tell anyone. This of course was too late for AR5 and as a result CMIP5 uses the old version. It is impossible to see from their “observations,” supposedly of four datasets, that any standstill at all has occurred at any time since the beginning of their temperature chart. And that makes the entire CMIP5 enterprise a fraud whose purpose is nothing less than to convince the world of a non-existent global warming ahead. After all, the world’s best climate scientists, using the world’s best supercomputers, predict that this is so. But here is the true temperature history for this period. The current temperature standstill begins with 2002, the end point of the step warming of 1998. The space between 2002 and the standstill in the eighties and nineties is taken up by the super El Nino and its step warming, neither one a source of the greenhouse effect. The step warming brought to us by the super El Nino raised global temperature by a third of a degree Celsius and then stopped. This is almost half the total warming for the last century. It has observable consequences in the far north of Siberia and the Canadian Arctic as well as ecological effects involving species migrations. But from official temperature curves you would never know that the step warming even exists because it was covered up by a fake warming called the “late twentieth century warming” until last fall. If you now align the starting position for CMIP5 with the current real temperature curve that is horizontal, their bundle of duster feathers will not change and a very strange prediction is obtained: all predicted strands float up at an angle. To make them nearly horizontal you would have to reduce their sensitivity, a parameter that IPCC claims is responsible for the spread of results they show.
To correct that, sensitivity should be reduced, and to obtain a completely horizontal projection it should be reduced to zero. That means no increase of global temperature when atmospheric carbon dioxide is doubled, and no greenhouse warming whatsoever. That, of course, is not what the IPCC is about, and they decided to do their best to hide this stark fact. But it so happens that the theory of Ferenc Miskolczi actually requires this. He published it in 2007 and it was rejected by the global warming science establishment. His theory applies to a situation where more than one greenhouse gas simultaneously absorbs OLR. In the earth’s atmosphere the gases that count are water vapor and carbon dioxide. Their combined absorption in the IR is not the arithmetic sum of their individual absorptions but is determined by an optimum absorption window they jointly maintain. The optical thickness of this window for a CO2 and water vapor combination is 1.87. This corresponds to a transmittance of 15 percent or an absorbance of 85 percent in the IR. If we now add more carbon dioxide to the atmosphere it will start to absorb and the optical thickness will increase. But as soon as this happens water vapor will start to diminish, rain out, and optical thickness is again restored to its optimum value. Since there is twenty-five times more water vapor than carbon dioxide in the atmosphere, it will only take a small fraction of it to redress the balance. As I said, the global warming establishment rejected this, but by 2010 he had experimental proof. Using NOAA weather balloon observations that go back to 1948, he studied the absorption of infrared radiation by the atmosphere and discovered that absorption had been constant for 61 years. At the same time, carbon dioxide in the air increased by 21.6 percent. This means that the addition of this substantial amount of CO2 to air had no effect whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect, case closed. This is the explanation for the existence of the warming pause, now 15 years old, and also of the 18-year warming pause in the eighties and nineties. That second one was covered up by a fake warming in official temperature records until last fall. Since the big three of global temperature have stopped showing this fake warming we can now add up the total known warming pauses since the beginning of the satellite era in 1979. They add up to a total of 33 years. In between the two there is just enough space to squeeze in the super El Nino and its step warming. This means a total of 34 no-greenhouse years since 1979. Knowing this fact, do you believe that any earlier warming, when carbon dioxide was lower, could have been caused by the greenhouse effect? I vote no for this. As for climate models, with sensitivity zero as required by the absence of greenhouse warming, they are a total waste of money and academic resources. Modelling may have uses in other fields of science, but for predicting future climate it is worthless.
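
For readers who want to check the transmittance figure quoted above: with the Beer-Lambert relation T = exp(-tau), an optical thickness of 1.87 does correspond to roughly 15 percent transmittance and 85 percent absorbance. A minimal Python sketch of just that arithmetic; it takes the 1.87 figure as given in the comment and says nothing about whether the constant-optical-thickness claim itself holds up:

  import math
  tau = 1.87                              # optical thickness quoted in the comment above
  transmittance = math.exp(-tau)          # Beer-Lambert: fraction of IR transmitted, ~0.154
  absorbance = 1.0 - transmittance        # fraction absorbed, ~0.846
  print(f"transmittance ~ {transmittance:.1%}, absorbance ~ {absorbance:.1%}")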

  49. Arno;
    inputted is incorrect. Input is the past tense.

    And that huge run-on paragraph should be at least 8 by my count. Don’t you get it? People won’t read blocks like that. You ensure that you will be ignored by not bothering to isolate sub-topics.

    • Arno

      Can I endorse the helpful comments made by Brian H?

      You do have the habit of running together a huge amount of often quite detailed text. It is difficult to take in, so tends to get skimmed over.

      Please break it into manageable chunks. Your piece at 1.09PM would have been easier to absorb broken up into at least 6 or more paragraphs, each with a line space between them.

      tonyb

  50. Latimer Alder

    Tuesday morning here in Europe.

    You’ve all had your conference. You’ve all done your Xmas shopping and had time to return home. You’ve had time for reflection.

    So, apart from looking good on your CVs, what’s changed? What are you going to do differently as a result of your attendance? Why? When will we see the results?

    • They now realise that there is no CAGW and AGW is not a problem once the USA allows progress towards cheap energy.

      Sea level rise is irrelevant. The cost is negligible according to Tol (2011) Figure 3: http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf

      The only significant negative impact of Global Warming is energy cost.

      ‘Agriculture’ and ‘Health’ impacts are both strongly positive to beyond 4 C temp increase.

      The impacts of ‘Storms’ and ‘Sea level rise’ are about zero net benefit/cost.

      ‘Water’ and ‘Ecosystems’ are small negative impacts, but the positive benefits of agriculture and health greatly exceed the negative impacts of ‘Water’ and ‘Ecosystems’.

      Conclusion: allow cheap energy and the impacts of global warming will be positive to at least a 4 C increase above today’s average global surface temperature and to well beyond the end of this century.

  51. Lauri Heimonen

    Judith Curry,

    I have understood that assessments of the climate sensitivity based on climate models are not only too uncertain but even useless for reaching any workable solution for the potential actions needed on climate problems. I have understood that you, Judith Curry, assess the climate sensitivity at the lowest value in the range of general assessments, whereas e.g. according to Nicola Scafetta ‘the real climate sensitivity is less than 1.5 C’. I myself agree with Jim Cripwell, Arno Arrak, David Wojick etc., who have stated that ‘climate sensitivity is indistinguishable from zero’. In this comment I try to explain why the concept of climate sensitivity in its present form should be abandoned altogether.

    Judith Curry; an excerpt from the summary of her lecture “A 21st century perspective on climate models from a climate scientist” at the Workshop on the Roles of Climate Models, http://judithcurry.com/2013/10/31/workshop-on-the-roles-of-climate-models :

    ”Over the past two decades, the climate modeling community has increasingly interlinked the dual objectives of advancing scientific understanding of the climate system and providing actionable projections for decision makers. Arguments are provided that climate models are inadequate for both of these objectives and that the current path of climate model development is unlikely to significantly improve this situation. It is argued that the power and authority that is accumulating around GCMs and the expended resources, if continued, could be detrimental to both scientific progress and policy applications.”

    You, Judith Curry, state that ‘climate models are inadequate’ for the objectives specified. On the basis of my own scrutiny, that ‘inadequate’ makes me regard the climate models, used in the way adopted by the IPCC, as incompetent tools to properly prove that the recent global warming has been dominated by anthropogenic CO2 emissions. As long as the model simulations are used only to make anthropogenic CO2 emissions seem to be the main cause of the recent global warming, there is no hope of any working solution for decision-makers. Instead, there are plenty of arguments according to which the anthropogenic CO2 emissions have had no empirically distinguishable role in the recent global warming. So far the climate models adopted by the IPCC have expressed only assumptions by which the recent global warming is made to seem to be caused by anthropogenic CO2 emissions; at least some of the main assumptions seem to be based on circular arguments (i.e. inverse calculations) without any proper empirical evidence.

    Since the question of whether or not human CO2 emissions have controlled recent global warming has been the first priority, I focus my attention on that.

    During the last 15+ years there has been no global warming even though the increase of the CO2 content in the atmosphere has been continuing. This alone calls into question the claim that the global warming of recent decades has been dominated by the increase of the CO2 content in the atmosphere. It is even less plausible that the recent warming could have been caused by anthropogenic CO2 emissions.

    According to natural laws the atmospheric CO2 content is controlled together by all CO2 emissions from sources and all CO2 absorptions to sinks. All CO2 absorptions to sinks together determine how much CO2 from the total CO2 emissions stays in the atmosphere to maintain a certain CO2 content in the atmosphere. The continuous strive for dynamic balance makes any change of CO2 emissions or of absorptions cause a change of CO2 content in atmosphere, and even potential changes in other CO2 sinks and/or CO2 sources. The share of a given CO2 emission in the quantity of CO2 remaining in the atmosphere depends on that emission’s share of the total CO2 emissions. As the share of recent anthropogenic emissions has been only about 4 % of the total CO2 emissions, in the recent total atmospheric CO2 increase of about 2 ppm a year the anthropogenic share has been only about 0.08 ppm CO2 a year, which even alone can invalidate the hypothesis of climate warming believed to be dominated by anthropogenic CO2 emissions.

    When all CO2 emissions and all CO2 absorptions are in a dynamic balance the CO2 content of the atmosphere continues to be unchanging. If you add a recent, record-breaking increase of anthropogenic CO2 emission of about 0.5 GtC (CO2 calculated as C) a year, and if all of that were assumed to stay in the atmosphere, you would make the CO2 content in the atmosphere rise by the same 0.5 GtC, i.e. the yearly increase of atmospheric CO2 content would be about 0.25 ppm. Even this alone proves that the anthropogenic CO2 emissions have not dominated the recent yearly increase of CO2 content of about 2 ppm in the atmosphere. In addition, in reality the record-breaking anthropogenic CO2 emission of about 0.5 GtC a year does not remain in the atmosphere as such. In accordance with natural laws, the rising CO2 content in the atmosphere makes the system strive for a new dynamic balance, including synchronously both an increase of absorption of CO2 from the atmosphere to CO2 sinks, and even a potential lowering of CO2 emissions from some other sources (e.g. sea-surface CO2 sources) to the atmosphere. All of this proves that the increase of CO2 content in the atmosphere is lower than the increase of CO2 emission. Consistently with the total CO2 emissions, the recent yearly share of anthropogenic CO2 emissions staying in the atmosphere has been only about 2 %, i.e. of the yearly anthropogenic increase of 0.5 GtC the share remaining in the atmosphere is only about 0.01 GtC, which means an increase of about 0.005 ppm CO2 in the atmosphere. Since above I have calculated that the recent share of anthropogenic CO2 in the atmosphere is 0.08 ppm instead of 0.005 ppm, in addition to the direct influence of anthropogenic CO2 emissions there must be some mechanism (or mechanisms) that makes the CO2 content in the atmosphere rise.
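
    For reference, the unit conversion behind the ppm figures in the paragraph above is roughly 2.13 GtC of atmospheric carbon per 1 ppm of CO2. A minimal Python sketch that simply redoes the comment’s arithmetic with the comment’s own inputs; the 0.5 GtC/yr emission figure and the 4 % and 2 % shares are the commenter’s assumptions, not standard carbon-budget values:

      GTC_PER_PPM = 2.13            # ~2.13 GtC of atmospheric carbon per 1 ppm CO2
      anthro_gtc_per_yr = 0.5       # commenter's figure for anthropogenic emissions, GtC/yr
      total_rise_ppm_per_yr = 2.0   # observed rise of atmospheric CO2, ppm/yr
      anthro_share = 0.04           # commenter's 4 % share of total emissions
      airborne_fraction = 0.02      # commenter's 2 % assumed to stay in the atmosphere
      print(anthro_gtc_per_yr / GTC_PER_PPM)                      # ~0.23 ppm if all of it stayed airborne
      print(total_rise_ppm_per_yr * anthro_share)                 # ~0.08 ppm, the "4 % of 2 ppm" figure
      print(anthro_gtc_per_yr * airborne_fraction / GTC_PER_PPM)  # ~0.005 ppm with only 2 % remaining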

    In earlier comments of mine I have argued that the recent decadal trends of global CO2 increase in the atmosphere have followed warming, and not vice versa. The increase of the CO2 content in the atmosphere during recent decades is dominated by warming of the sea surface in the areas where the CO2 sinks are, which follows natural climate warming with a lag. The warming of the sea surface makes the partial pressure of CO2 dissolved in the water rise exponentially, which in the sea-surface sink areas slows the absorption of CO2 from the atmosphere to the sea-surface sinks, and which thus increases the CO2 content in the atmosphere. This even explains the mildly exponential rise of the CO2 content in the Mauna Loa measurements.

    The concept ‘climate sensitivity’ means how many degrees the climate temperature rises when the CO2 content in atmosphere is doubled by anthropogenic CO2 emissions. Since the trends of CO2 increase in atmosphere are dominated by global warming and not vice versa, and since, in addition, anthropogenic CO2 emissions have recently caused only about 4 % of total increase of CO2 in atmosphere, as a concept the ‘climate sensitivity’ is insignificant.

    (More e.g. in my comment http://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992 )

    • Not bad; but you might like in future to express this more coherently: “The continuous strive for dynamic balance makes any change of CO2 emissions or of absorptions cause a change of CO2 content in atmosphere, and even potential changes in other CO2 sinks and/or CO2 sources.” ‘Strive’ is not a noun, and the rest isn’t quite grammatical; (‘striving’ is perhaps what you want, though it’s pretty much limited to humans and other living agents.)

      • Lauri Heimonen

        Brian H,

        I appreciate your comment for many reasons:

        a) It gives me pleasure that you have understood my complicated thoughts. For instance, in general, the small, anthropogenic share of the recent increase of CO2 in atmosphere seems to be difficult to understand.

        b) You have paid attention to one of the most complicated issues of climate change: any change of CO2 emissions or absorptions makes all CO2 emissions and all CO2 absorptions ‘strive’ for a new dynamic balance, which at the same time determines the CO2 content of the atmosphere. For instance, a mere increase of anthropogenic CO2 emissions to the atmosphere does not determine how much the CO2 content of the atmosphere rises. The increase of the CO2 content in the atmosphere depends also on how much the absorption of CO2 from the atmosphere to CO2 sinks increases, and how much CO2 emissions to the atmosphere from other CO2 sources decrease, in consequence of the increase of CO2 emissions to the atmosphere. Separately, the CO2 content of the atmosphere changes even as CO2 emissions other than anthropogenic ones and/or CO2 emissions from other CO2 sources change. For instance, in the comment above I have expressed how the recent record-breaking yearly anthropogenic increase of CO2 emissions has added to the atmosphere only about 0.005 ppm CO2 a year, whereas warming of the sea surface in the areas of the CO2 sinks has dominated the increase attributed to anthropogenic CO2 of about 0.08 ppm a year.

        c) I thank you for your grammatical advice. Because English is not my mother tongue, my text should be revised. However, for this kind of comment that is seldom possible.

        d) Etc.

  52. “The concept ‘climate sensitivity’ means how many degrees the climate temperature rises when the CO2 content in atmosphere is doubled by anthropogenic CO2 emissions.”

    Wrong. Climate sensitivity is defined as the change in temperature given a change in forcing.

    For example, if the sun increases by 1 watt, what change in temperature will we see?

    This is why climate sensitivity cannot be zero.

    Fundamental physics says the climate sensitivity (absent feedbacks) is
    around 0.4C. That is, 1 watt of change in forcing gets you 0.4C of warming.

    Now, climate sensitivity to a doubling of CO2 adds a complication.

    How much forcing will doubling CO2 bring? The answer: about 3.7 watts.

    To get the sensitivity to doubling CO2, you just multiply.

    Sensitivity to a change of 1 watt = 0.4C
    Sensitivity to a change of 3.7W = about 1.5C

    Many people confuse this, so I’ll do it slowly for you.

    1. The climate sensitivity to changes in forcing is called lambda.
    2. First principles puts lambda at around 0.4C per watt.

    That means if the sun goes up by 1 watt we see 0.4C of warming.

    next,

    How much forcing do we see from doubling CO2? 3.7 watts.
    That’s engineering.

    What’s the sensitivity to doubling CO2?

    easy peasy: lambda * 3.7

    or

    0.4C * 3.7

    no feedbacks case

    In other terms

    Sensitivity to doubling = lambda * watts from doubling CO2

    For sensitivity to doubling to be zero, you have to make the argument
    that either lambda is really small or the watts from doubling CO2 are really small.

    But the watts from doubling CO2 are a known, measured quantity. We use it every day in building weapon systems. It’s 3.7W.

    So, you have to argue, not assert, that lambda is close to zero.

    But if lambda is close to zero, then the sun would have no effect.
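
    A worked version of that multiplication, for anyone who wants to plug in numbers (a minimal Python sketch; the 0.4C per watt value for lambda is the no-feedback figure quoted above, and the 3.7 W/m2 forcing comes from the commonly used simplified expression 5.35 * ln(C/C0)):

      import math
      lam = 0.4                               # no-feedback sensitivity quoted above, C per W/m2
      forcing_2xco2 = 5.35 * math.log(2.0)    # ~3.7 W/m2 for a doubling of CO2
      print(lam * 1.0)                        # ~0.4C for a 1 W/m2 change in forcing (e.g. the sun)
      print(lam * forcing_2xco2)              # ~1.5C no-feedback sensitivity to doubling CO2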

    • easy peasy. Why not explain the assumptions involved with that first principles “no feedback” sensitivity?

      Since you used the “about” 1.5 and the current estimate is closer to about 1, that could imply that some of the first-principles linear assumptions, along with the absolute surface temperature assumption, might have been a tad high. That is not that unusual when the assumed model is ideal. Real-world performance is typically a fraction of “ideal”.

      Since the original assumed surface temperature was 288K with an effective S-B energy of 390 Wm-2, if all of the 3.7 Wm-2 were absorbed, the surface energy would rise from 390 to 393.7 Wm-2, i.e. from 288K to about 288.67K, a ~0.67K increase per 3.7 Wm-2; reaching 1.5C would then require roughly another 0.8C of temperature increase due to amplification of the 3.7 Wm-2 absorbed at the surface. As the average surface temperature increases, the impact of 3.7 Wm-2 of forcing decreases.

      So by first principles, 1.5C per 3.7 Wm-2 requires 1.5/0.68 ≈ 2.2 times amplification. Due to latent and convective heat transfer, the actual “temperature” at the surface has cooling of approximately 120 Wm-2, which will increase as surface energy increases; as Kimoto noted, that requires a little more detail than the first-principles 0.68C times anywhere from 2.2 to infinity per 3.7 Wm-2 of forcing.

      http://edberry.com/SiteDocs/PDF/Climate/KimotoPaperReprint.pdf

      So the first principles 1.5C/3.7Wm-2 is based on an ideal performance at a fictitious ERL which is always perfectly linear provided it can be reasonably assumed to be “orderly” if not isothermal.

      Carnot would spin in his grave :)
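
      The Stefan-Boltzmann step a few paragraphs up can be checked directly: inverting F = sigma*T^4 around a 288 K, ~390 Wm-2 surface gives roughly +0.68 K for an extra 3.7 Wm-2, consistent with the ~0.67-0.68 figures used in the comment. A minimal Python sketch under the same idealized, surface-only, no-feedback assumptions:

        SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m2/K4
        T0 = 288.0                         # assumed mean surface temperature, K
        F0 = SIGMA * T0**4                 # ~390 W/m2 emitted at 288 K
        T1 = ((F0 + 3.7) / SIGMA) ** 0.25  # temperature needed to emit 3.7 W/m2 more
        print(T1 - T0)                     # ~0.68 K per 3.7 W/m2, before any amplification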

  53. “You can endlessly make models of supersymmetry,” says Eugene Commins, an emeritus professor of physics at the University of California, Berkeley, who led the last search for the dipole moment in atoms. “A good theorist can invent a model in half an hour, and it takes an experimentalist 20 years to kill it.”

    http://news.yahoo.com/electron-appears-spherical-squashing-hopes-physics-theories-130000989.html

    “A good theorist can invent a model in half an hour, and it takes an experimentalist 20 years to kill it.”

    That sounds familiar.

  54. Pingback: Confronting the Fundamental Uncertainties of Climate Change | Fabius Maximus