Questioning the robustness of the climate modeling paradigm

by Judith Curry

Are climate models the best tools? A recent Ph.D. thesis from The Netherlands provides strong arguments for ‘no’.

The usefulness of GCM climate models, particularly for projections, attribution studies, and impact assessments, has been the topic of numerous Climate Etc. posts.

A new thesis from The Netherlands (pointed out to me by Jeroen van der Sluijs) provides a very interesting and unique perspective on this topic.

The Robustness of the Climate Modelling Paradigm, Ph.D. thesis by Alexander Bakker from The Netherlands [link]

The context for the thesis is related in the Preface:

In 2006, I joined KNMI to work on a project “Tailoring climate information for impact assessments”. I was involved in many projects often in close cooperation with professional users.

In most of my projects, I explicitly or implicitly relied on General Circulation Models (GCMs) as the most credible tool to assess climate change for impact assessments. Yet, in the course of time, I became concerned about the dominant role of GCMs. During my almost eight-year employment, I have been regularly confronted with large model biases. In virtually all cases, the model bias appeared larger than the projected climate change, even for mean daily temperature. It was my job to make something ’useful’ and ’usable’ from those biased data. More and more, I started to doubt that the ’climate modelling paradigm’ can provide ’useful’ and ’usable’ quantitative estimates of climate change.

After finishing four peer-reviewed articles, I concluded that I could not defend one of the major principles underlying the work anymore. Therefore, my supervisors, Bart van den Hurk and Janette Bessembinder, and I agreed to start again on a thesis that intends to explain the caveats of the ’climate modelling paradigm’ that I have been working in for the last eight years and to give direction to alternative strategies to cope with climate-related risks. This was quite a challenge. After one year of hard work, a manuscript had formed that I was proud of, that I could defend, and that had my supervisors’ approval. Yet, the reading committee thought differently.

According to Bart, he has never supervised a thesis that received so many critical comments. Many of my propositions appeared too bold and needed some nuance and better embedding within the existing literature. On the other hand, working exactly on the data-related intersection between the climate and impact communities may have provided me a unique position where contradictions and non-trivialities of working in the ’climate modelling paradigm’ typically come to light. Also, not being familiar with the complete relevant literature may have been an advantage. In this way, I could authentically focus on the ’scientific adequacy’ of climate assessments and on the ’non-trivialities’ of translating the scientific information to user applications, solely biased by my daily practice.

The thesis is in two parts: Part I addresses the philosophy of climate modeling, while Part II describes specific impact assessment studies. I excerpt here some text from Part I of the thesis:

In the light of this controversy about the dominant role of GCMs, it might be questioned whether the state-of-the-art fully coupled AOGCMs really are the only credible tools in play. Are there credible methods for the quantitative estimation of climate response at all? And more importantly, does the current IPCC approach with large multi-model ensembles of AOGCM simulations guarantee a range of plausible climate change that is relevant for (robust) decisions?

Another important consideration concerns expense. Apart from the very large computation time (and costs), the post-processing and storage of the huge amounts of data demand much of the intellectual capacity of the researchers involved. That capacity is no longer available for interpretation and creativity. This might be at the expense of the framing and communication of uncertainties, and of the quality of some doctoral dissertations.

The ’climate modelling paradigm’ is characterized by two major axioms:

  1. More comprehensive models that incorporate more physics are considered more suitable for climate projections and climate change science than their simpler counterparts, because they are thought to be better able to deal with the many feedbacks in the climate system. With respect to climate change projections, they are also thought to optimally project consistent climate change signals.
  2. Model results that confirm earlier model results are perceived as more reliable than model results that deviate from earlier results. Especially the confirmation of the earlier projected Equilibrium Climate Sensitivity range of 1.5 °C to 4.5 °C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.

This chapter explores the legitimacy and tenability of the ’climate modelling paradigm’. It is not intended to advocate other methods as better, nor to completely disqualify the use of GCMs. Rather, it aims to explore what determines this perception of GCMs as the superior tools and to assess the scientific foundation for this perception. First, section 2.2 explains the origin of the paradigm and illustrates that the paradigm is mainly based on the great prospects of early climate change scientists. Then section 2.4 elaborates on the pitfalls of fully relying on physics. Subsequently, section 2.3 argues that empirical evidence for the perceived GCM superiority is weak. Thereafter, section 2.5 argues that biased models cannot provide an internally consistent (and plausible) climate response, which is especially problematic for local and regional climate projections. Next, the independence of the multiple ’lines of evidence’ is treated in section 2.6. Finally, in section 2.7 it is concluded that the climate modelling paradigm is in crisis.

The state-of-the-art fully coupled AOGCMs do not provide independent evidence for human-induced climate change. GCM-based multi-model ensembles are likely to be (implicitly) tuned to earlier results. The confirmation of earlier results by GCMs is therefore no reason for higher confidence. The confidence in the GCMs originates primarily from the fact that, after extensive tuning of the feedbacks and other processes, a radiative balance is found for the Top-Of-Atmosphere. This is indeed quite an achievement, but the tuning usually provides only one of countless solutions. Multi-model ensembles tuned to a particular response give us only limited insight into the possible range of outcomes. Besides, the GCMs include only a limited selection of potentially important feedbacks, and sometimes artefacts have to be incorporated to close the radiative balance.

The founding assessments of Charney et al. (1979) and Bolin et al. (1986) did see the great potential of future GCMs, but based their likely-range of ECS on expert judgment and simple mechanistic understanding of the climate system. And even today, the IPCC acknowledges that the model spread (notably of multi-model ensembles) is only a crude measure for uncertainty because it does not take model quality and model interdependence into account. Nevertheless, in practice, GCMs are often applied as a ’pseudo-truth’.

The paradigm that GCMs are the superior tools for climate change assessments and that multi-model ensembles are the best way to explore epistemic uncertainty has lasted for many decades and still dominates global, regional and national climate assessments. Studies based on simpler models than the state-of-the-art GCMs, or studies projecting climate response outside the widely accepted range, have always received less credence. In later assessments, the confirmation of old results has been perceived as an additional line of evidence, but the new studies have likely been (implicitly) tuned to match earlier results.

Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported, but a rosy presentation has generally prevailed. Large biases seriously challenge the internal consistency of the projected change, and consequently they challenge the plausibility of the projected climate change.

Most climate change scientists are well aware of this, and a feeling of discomfort is taking hold of them. Expression of the contradictions is often met not with arguments but with annoyance, and is experienced as non-constructive. “What else?” or “Decision makers do need concrete answers” are often-heard phrases. The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.
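
To see concretely what the excerpt means by tuning providing ’only one of countless solutions’, here is a minimal back-of-envelope sketch. Every number in it is an assumed round value (greenhouse forcing, candidate aerosol forcings, observed warming) chosen purely for illustration; nothing below comes from the thesis or from any actual model.

```python
# Minimal sketch (illustrative only, not a GCM): many combinations of an
# uncertain aerosol forcing and an implied feedback parameter reproduce
# the same historical warming equally well. All numbers are assumed
# round values for illustration.

F_GHG = 2.8    # assumed greenhouse-gas forcing since pre-industrial, W/m^2
DT_OBS = 0.8   # assumed observed warming to be matched, K
F_2X = 3.7     # canonical forcing for a doubling of CO2, W/m^2

for f_aer in (-0.4, -0.8, -1.2, -1.6):  # candidate aerosol "knob" settings
    lam = (F_GHG + f_aer) / DT_OBS      # feedback parameter that matches the record, W/m^2/K
    ecs = F_2X / lam                    # implied equilibrium climate sensitivity, K
    print(f"F_aer = {f_aer:+.1f} W/m^2 -> lambda = {lam:.2f}, implied ECS = {ecs:.1f} K")
```

Each candidate aerosol setting ’closes the balance’ for the same historical record, yet the implied equilibrium sensitivities span roughly a factor of two – a toy version of the point that a tuned match to the past does not pin down the future response.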

I was gratified to see three of my recent papers in his reference list.

JC comments

There isn’t too much in Part I of the thesis that is new or hasn’t been discussed elsewhere.  His discussion on model ‘tuning’ – particularly implicit tuning – is very good.  Also I particularly like his section ‘Lines of evidence or circular reasoning’.

However,  the remarkable aspect of this to me is that the ‘philosophy of climate modeling’ essay was written not by a philosopher of science or a climate modeler, but by a scientist working in the area of applied climatology.  His experiences in climate change  impact assessments provide a unique perspective for this topic.  The thesis provides a very strong argument that GCM climate models are not fit for the purpose of regional impact assessments.

I was very impressed by Bakker’s intellectual integrity and courage in tackling this topic in the 11th hour of completing his Ph.D. thesis. I am further impressed by his thesis advisors and committee members for allowing and supporting this. Bakker notes many critical comments from his committee members. I checked the list of committee members, and one name jumped out at me – Arthur Petersen – a philosopher of science who has written about climate models. I suspect that the criticisms were more focused on strengthening the arguments, rather than ‘alarm’ over an essay that criticizes climate models. Kudos to the KNMI.

I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.

Bakker’s closing statement:

The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.

I have made several suggestions re ways forward, contained in previous posts.

652 responses to “Questioning the robustness of the climate modeling paradigm”

  1. Pingback: We must rely on forecasts by computer models. Are they reliable? | The Fabius Maximus website

  2. As you said, Judith, there is nothing new or revolutionary about what Bakker says. The interesting part is that the climate community KNOWS that the models don’t do what they are supposed to do, but it is always possible to get funding to start a new model or continue work on an existing one. It’s as if producing models is an end in itself that will somehow please the God of Climate and persuade him/her/it not to demand human sacrifices.

  3. “The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.”

    I agree. GCMs are based on a conventional earth system theory of a non-living earth. There is another earth system theory, Gaia theory. Some thoughts on a living earth model as a way forward…

    http://www.theecologist.org/blogs_and_comments/commentators/2511300/climate_change_and_the_living_gaia.html

    • Thanks for your link. It provides an interesting perspective on an alternative framework/paradigm.

      The notion of millions of biology/ecology-driven PDEs vs the limited number of physics- and chemistry-driven ones solved via the GCMs makes sense. To take the GCMs as gospel one would have to conclude that the “living” aspects of the system can be ignored except for man’s impact via GHG emissions.

      • I appreciate your comment. For various reasons Gaia theory has been all but ignored by climate scientists on both sides of the debate. Personally, I find the theory plausible and intriguing. Whether or not Gaia theory is valid, it certainly deserves fair consideration and testing.

    • Hi Lee,

      I went to the British Museum over Xmas where they have James Lovelock’s archive and a small exhibition. It was really interesting – particularly the scientists arguing against it.

      It reinforced some earlier ideas I’d had about climate – that Life adapts to the environment, sometimes very quickly, & there are microbes in the atmosphere. Recently I’ve looked at the CERN CLOUD experiments and noticed that molecules associated with trees improve cloud seeding. Definitely room for thought there.

    • Lee, Gaia theory is not the only other paradigm that is possible. In the philosophy of Henri Bergson and Alfred North Whitehead it is argued that mathematics and physics are inherently not able to predict the behaviour of natural systems because they have a wrong perception of time for these kinds of systems. In essence these systems are always evolving, or in a state of ‘becoming’, while physics and mathematics work with states of ‘being’ in which processes can be described by cutting them in time. There is also no need to make a distinction between living and non-living systems in these process philosophies. It means that we will never be able to predict the future in an evolving system, except the very near future, which will more or less be the same as the present. After that it is not chaos either, because that too is a mathematical concept. Predicting the future climate is thus not very different than predicting the weather or the stock market on a longer time-scale. You can write all the software you want; it just will not work out consistently.

      • Predicting nonreproducible results has fed the classic stockbroking industry since its inception and will continue to for a long while. If wrong, there are always unforeseen events to blame. If temperature begins to fall over the next decade, the claim will be that the natural end of the interglacial is knocking. Or perhaps, if an ocean current changes, it will, of course, be man.

  4. For another discussion of this subject – and recommendations – see this article by two ecologists (ecology being, like climate science, a field struggling with the power and limitations of models): “Are Some Scientists Overstating Predictions? Or How Good are Crystal Balls?”, Tom Stohlgren and Dan Binkley, EcoPress, 28 October 2014 – Conclusion:

    Given the … large and unquantifiable uncertainties in many long-term predictions, we think all predictions should be:

    stated as hypotheses,
    accompanied by short-term predictions with acceptance/rejection criteria,
    accompanied by simple monitoring to verify and validate projections,
    carefully communicated with model caveats and estimates of uncertainties.

    Seems like good advice that could be easily adopted – but probably will be only as a result of pressure on climate scientists by people outside their community. Perhaps from the funding agencies, or Congress.

    • FM, thanks very much for this link

    • Matthew R Marler

      Fabius Maximus, thank you for the link.

    • Add my thanks for the link to the list, Fabius Maximus.

      Cheers

      • This is topic drift, but since the subject has been raised: our website was named after Fabius Maximus (280 – 203 BC), who saved Rome from Hannibal by recognizing Rome’s weakness and therefore the need to conserve its strength. He turned from the easy path of macho “boldness” to the long, difficult task of rebuilding Rome’s power and greatness. His life holds profound lessons for 21st Century Americans.

        We advocate a conservative strategy for America. “Conservative” in the strategic sense (we anger Left and Right equally).

        I started writing about climate in 2008, using it to illustrate our inability to see and think clearly about public policy issues (which has not improved since then). My recommendations start from Steven Mosher’s observation “We don’t even plan for the past” (source). That matches a recent comment by Roger Pielke Sr about using models to prepare for future climate:

        @FabiusMaximus01 Using the recent paleorecord, historical data and worst case sequence of observed is much better approach.

        — Roger A. Pielke Sr (@RogerAPielkeSr) February 3, 2015

    • Well, duh. But, yeah, thanks.
      =======

      • Kim

        Can you relay my thanks as well to Fabius Maximus, or FM as I expect he is known to you.
        Tonyb

      • By the way, for those that don’t know, the original Fabius Maximus was a Roman dictator whose chief claim to fame was his campaign to defeat Hannibal.

        I came across this intriguing reference to Fabian battle strategies, which seem to me the perfect tactics to counter the forces that will be massed in Paris later in the year in readiness for their concerted climate offensive.

        http://en.m.wikipedia.org/wiki/Fabian_strategy

        Tonyb

      • From Plutarch, just after the Trebian trouble and the traverse of Tuscany:

        ‘Besides the more common signs of thunder and lightning then happening, the report of several unheard-of and utterly strange portents much increased the popular consternation.’
        ================

      • Lost a lot of elephants along the way, didn’t he?

  5. I’m thinking that we won’t find much discussion of this part among “skeptics”:

    oops.

    Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…

    Because it doesn’t quite fit with the standard “skeptic” narrative. The standard narrative being that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden. Try to find a single thread in the “skept-o-sphere” where that standard “skeptical” narrative isn’t asserted!

    But for the same reason, I’m thinking we will find discussion among “skeptics” of this part:

    “…. but a rosy presentation has generally prevailed.”

    Because it’s a characterization that they can take on faith – even though it amounts to argument by assertion, with no need for skeptical treatment of counterarguments (that a different view has prevailed because the view being presented is analyzed and rejected).

    Too bad reasonable engagement is such a difficult achievement with issues that become so ideologically polarized.

    • ‘I’m thinking that we won’t find much discussion of this part among skeptics”:
      oops.
      Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…’
      An issue that you invented is an “oops”?
      I think we should stick to saying “oops” on our own mistakes, rather than making up ones for others to make.

      Would you call it an “oops” that this paper says forthrightly and repeatedly that the models are implicitly tuned? After all, at ATTP we are currently having a discussion where several commenters insist that the models are not tuned, since that is what realclimate says and they are prominent climate scientists.

      • miker –

        Would you call it an “oops” that this paper says forthrightly and repeatedly that the models are implicitly tuned? After all, at ATTP we are currently having a discussion where several commenters insist that the models are not tuned, since that is what realclimate says and they are prominent climate scientists.

        I followed the discussion a bit. I think that your characterization of their arguments (that they are basically nothing but appeal to authority) is not terribly accurate. But I think the discussion is interesting.

        As Judith says, there’s not really a lot new here. I have seen these discussions about modeling a number of times – and I would imagine that people who are really knowledgeable about the field have seen the discussions quite a bit. I have seen modelers from within the “consensus” community acknowledge these issues.

        Not being someone who can evaluate the technical arguments, I focus on other elements of the discussion. One of the elements that I follow here is the oft-heard refrain from “skeptics” that I see directly contradicted in the excerpt that Judith provided – as I mentioned above.

        Can you address that issue? Was there something inaccurate about my point? Do we not see it stated in practically every thread in the “skept-o-sphere” that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden?

      • Miker613, anyone who says models aren’t tuned does not know what they are talking about. All GCMs are parameterized. Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485–498 (2012), open access online. For parameterization, see the technical documentation for NCAR CAM3, available free online as NCAR/TN-464+STR (2004). Or read the essay “Models all the way Down” in the ebook Blowing Smoke.

      • Steven Mosher

        Rud

        “Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485–498 (2012), open access online”

        wrong.

      • Joshua, I can’t address your point because I don’t know. I expect you’re right – there is a lot of nonsense written by skeptics. And by non-skeptics. Pay no attention.

      • Steven, is this also…wrong?

        “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results.”

      • Matthew R Marler

        Steven Mosher: wrong.

        Details please. I’d take Rud Istvan’s authority over yours any day. So if you think he is wrong, please provide the details. For example, you could quote text from the sources he cited proving that he misinterpreted them.

      • Joshua:
        ” Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…

        Because it doesn’t quite fit with the standard “skeptic” narrative. The standard narrative being that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden. Try to find a single thread in the “skept-o-sphere” where that standard “skeptical” narrative isn’t asserted!

        Hmm, “routinely…reported” and “routinely…ignored” are not mutually exclusive. If “huge biases and ignorance of…mechanisms” is “routinely…reported” in the literature, and the IPCC ARs are “our best summary of the work”, why are the “huge biases” not noted in the ARs? “Ignorance of potentially important mechanisms” could perhaps be defended as present in the ARs, but I don’t think I’ve ever seen anyone acknowledge, let alone defend, this “ignorance of…mechanisms” being missing from the ARs.

      • Steven Mosher

        Matthew

        read the paper

        “Details please. I’d take Rud Istvan’s authority over yours any day.”

        Ruds claim

        “Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 years hindcasts per the experimental design. See Taylor et. al. BAMS 93: 485-498 (2012) open acsess on line”

        http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-11-00094.1

        1. There is NO SUCH THING in the document he references.
        2. Input data sets (forcings) are proscribed for the experiments.
        3. There are no “parameter sets” proscribed.
        4. That document is the overview; the actual technical description is on the CMIP5 page.
        5. The ONLY thing that comes close to mentioning 10, 20, and 30 year periods are the EXPERIMENTAL decadal forecasts. And even there, parameters are not proscribed.
        6. Different models have different tunable parameters. HADGCM for example has 32. In no way does the design of experiment document proscribe the setting of these parameters.

      • Steven Mosher, you perhaps misunderstood my comment. The CMIP5 experimental design required decadal hindcasting back to 3 decades. All GCM models are massively parameterized. They must be. The NCAR CAM3 technical documentation shows what, why, and how. The individual modelers choose parameter sets for their models that give the best hindcasts. Duh. That is all. And that amounts to tuning.
        Explains why high sensitivity models have high aerosols. And low sensitivity models don’t. Got to find a parameter set that hindcasts reasonably well for any model to have street cred.
        This was explained with many footnotes in the Climate chapter of The Arts of Truth.
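
        Operationally, the claim amounts to something like the following toy selection procedure. This is a sketch in which the model, the candidate parameter sets, and the observations are all invented stand-ins; it is not a description of any CMIP5 group’s actual workflow, and whether anything like it happened is exactly what is disputed in this thread.

        ```python
        # Toy sketch of "selecting parameter sets by hindcast skill" -- the
        # (disputed) claim above. The model, parameter sets, and observations
        # are all invented stand-ins; this describes no actual CMIP5 workflow.

        import math

        obs = [0.02 * t for t in range(30)]  # stand-in 30-year observed anomaly series, K

        def hindcast(params, years):
            """Stand-in for running a model over the last `years` years."""
            return [params["trend"] * t for t in range(years)]

        def window_error(params):
            """Total RMSE over 10-, 20-, and 30-year hindcast windows."""
            total = 0.0
            for years in (10, 20, 30):
                sim = hindcast(params, years)
                total += math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / years)
            return total

        candidates = [{"trend": 0.01}, {"trend": 0.02}, {"trend": 0.03}]
        chosen = min(candidates, key=window_error)  # retain the best-hindcasting set
        print("retained parameter set:", chosen)    # -> {'trend': 0.02}
        ```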

      • It is a bit like when SteveMc says keep an eye on the pea; the models are not tuned, because they call tuning something else, parameterization.

      • Matthew R Marler

        Rud Istvan: The individual modelers choose parameter sets for their models that give the best hindcasts

        Where exactly is that described? I did not find it in the document that you linked to, nor in the first few documents that it linked to.

      • Steven Mosher

        Rud
        “Steven Mosher, you perhaps misunderstood my comment. The CMIP5 experimental design required decadal hindcasting back to 3 decades.”

        the DECADAL experiments were not required.
        They were experimental.
        They are started at various time steps. They are initialized with climate state variables.
        “Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 years hindcasts per the experimental design.”
        is what you said.
        it is wrong.

        As you note some models have many parameters. But the design did not proscribe these.

        The models are not tuned in any but the most vague sense of the word.
        Were they tuned, they would hindcast BETTER than they do.
        The only example of REAL tuning I know of is the FAMOUS exercise, which used a very reduced model.

      • Steve, what do they do with all the models that did not hindcast well due to parameterization?
        They went under the bus, and the ones that most closely matched the past and the climate sensitivity they wished for were retained.
        By the way, doing a fit is not an experiment; experiment has a defined meaning, and abusing its meaning is not helpful.

      • Matthew R Marler, it isn’t documented anywhere. That is the point. Mosher points out that HADGCM has 32 parameters. The keepers of that model are free to select any set of those 32 values. They only submit their model results to CMIP5, not the parameterization. So, they fooled around to find a set of their 32 parameters that did a good job in the required hindcasting ‘experiments’ (note the perversion of the idea of an experiment). They froze their parameterization in some set giving their best hindcasts per the ‘experimental design’, then ran all the centennial stuff using that set.
        And that’s why CMIP5 missed the pause and runs hot. All the different (and unreported) parameterizations were chosen to best fit whatever model to the period from about 1975 to 2005 per the CMIP5 experimental design. That only makes sense if you think all the delta T over that period was caused by CO2. Which warmunists believe. Subsequent results say that was not true. The attribution problem in a different guise.
        Either Mosher knows this and was blowing smoke, or he should read more and comment less. It is sometimes hard to take one’s own advice.
        Regards, and thanks for the kind words upthread.

      • Is Mosher simply misspelling or playing one of his games when he writes “proscribed” (i.e., prohibited) where a naive reader would expect to find “prescribed” (i.e., mandated)? I can just see his later “read harder” admonition (in contrast to his usual “charitable reading” admonition) as part of Mosherball.

      • Matthew R Marler

        Rud Istvan: Matthew R Marler, it isnt documented anywhere.

        I think it likely that the modelers have good ideas about what parameter values work well for the models that they have developed for the data that they have, data that have been studied now for decades. However, you made specific claims, like this one: Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485–498 (2012), open access online. For parameterization, see the technical documentation for NCAR CAM3, available free online as NCAR/TN-464+STR (2004).

        and

        The individual modelers choose parameter sets for their models that give the best hindcasts.

        They didn’t “tune” the parameters for the latent heat of vaporization of water or the specific heat of dry air at standard temperature and pressure, for example, but took those and many other “physical constants” from the textbooks and review literature.

        Unless you have better references than what you have provided so far (which you may indeed have), then it looks to me like Steven Mosher is correct on this point.

      • Look! A Cloud!

      • MRM, your above latest comment indicates you did not read the NCAR GCM manual I referenced above. Never mind the differential equations or the algorithmic numerical approximation methods. Just read the chapter outlines to understand my point.
        Parameters are not known physical constants in GCMs. You make the mistake of thinking like a pre-normal scientist rather than a post-normal ‘climate scientist’. Climate model parameters have to do with guessing stuff the GCMs are inherently incapable of simulating. For illustrations from the NCAR CAM3 technical manual referenced above, here are some Chapter 4 (model physics) subsection headings, all direct quotes. 4.5 Prognostic Condensate and Precipitation Parameterization (humidity fail, the missing modeled tropical troposphere hotspot!). 4.7 Parameterization of cloud fraction (ah, the AR5 WG1 chapter seven cloud uncertainty). 4.9.3 Trace gas parameterization. And so forth. And that was just chapter 4 subsection headings! Chapter 7 is a bigger revelation – initial and boundary conditions.
        Please, read either my references or my book essays. Or both.

        Parameterization has little to do with known physical constants like the latent heat of evaporation. That erroneous notion is another example of how warmunists have obscured things for the rest of us – IMO on purpose, like Mosher here, who either knew or should have known.

      • Matthew R Marler

        Rud Istvan: Matthew R Marler, it isnt documented anywhere.

        Rud Istvan: MRM, your above latest comment indicates you did not read the NCAR GCM manual I referenced above. Never mind the differential equations or the algorithmic numerical approximation methods. Just read the chapter outlines to understand my point.

        I wish you would make up your mind on this point. So far, I have not found anything that looks like “tuning” of parameters.

        Climate model parameters have to do with guessing stuff the GCMs are inherently incapable of simulating. For illustrations from the NCAR CAM3 technical manual referenced above, here are some Chapter 4 (model physics) subsection headings, all direct quotes. 4.5 Prognostic Condensate and Precipitation Parameterization (humidity fail, the missing modeled tropical troposphere hotspot!). 4.7 Parameterization of cloud fraction (ah, the AR5 WG1 chapter seven cloud uncertainty). 4.9.3 Trace gas parameterization. And so forth.

        So far, I have not found where those are based on “tuning” instead of physical considerations.

        Here is Appendix A from NCAR/TN-464+STR (2004) (by the way, is that thing available in PDF format?):

        A. Physical Constants
        Following the American Meteorological Society convention, the model uses the International System of Units (SI) (see August 1974 Bulletin of the American Meteorological Society, Vol. 55, No. 8, pp. 926-930).

        {The displaymath table of physical constants does not display here; it lists 23 constants, beginning with the Earth radius a = 6.37122 × 10^6 m and including the density of dry air at STP and the specific heat of dry air at standard pressure/temperature.}

        The model code defines these constants to the stated accuracy. We do not mean to imply that these constants are known to this accuracy nor that the low-order digits are significant to the physical approximations employed.

        section 4.7 includes this: Convective cloud fraction in the model is related to updraft mass flux in the deep and shallow cumulus schemes according to a functional form suggested by Xu and Krueger [192]:

        cfrac_shallow = k_1,shallow × ln(1.0 + k_2 × M_c,shallow)    (4.170)

        cfrac_deep = k_1,deep × ln(1.0 + k_2 × M_c,deep)    (4.171)

        where k_1,shallow and k_1,deep are adjustable parameters given in Appendix C, k_2 = 500, and M_c is the convective mass flux at the given model level.

        Then Appendix C contains in total:
        C. Resolution and dycore-dependent parameters
        The following adjustable parameters differ between various dynamical cores and model resolutions in CAM 3.0.

        Table C.1: Resolution and dycore-dependent parameters

        Parameter    | FV     | T85    | T42    | T31    | Description
        q_ic,warm    | 8.e-4  | 4.e-4  | 4.e-4  | 4.e-4  | threshold for autoconversion of warm ice
        q_ic,cold    | 11.e-6 | 16.e-6 | 5.e-6  | 3.e-6  | threshold for autoconversion of cold ice
        k_e,strat    | 5.e-6  | 5.e-6  | 10.e-6 | 10.e-6 | stratiform precipitation evaporation efficiency parameter
        RH_min^low   | .91    | .91    | .90    | .88    | minimum RH threshold for low stable clouds
        RH_min^high  | .80    | .70    | .80    | .80    | minimum RH threshold for high stable clouds
        k_1,shallow  | 0.04   | 0.07   | 0.07   | 0.07   | parameter for shallow convection cloud fraction
        k_1,deep     | 0.10   | 0.14   | 0.14   | 0.14   | parameter for deep convection cloud fraction
        p_mid        | 750.e2 | 250.e2 | 750.e2 | 750.e2 | top of area defined to be mid-level cloud
        c_0,shallow  | 1.0e-4 | 1.0e-4 | 2.0e-4 | 5.0e-4 | shallow convection precip production efficiency parameter
        c_0,deep     | 3.5e-3 | 4.0e-3 | 3.0e-3 | 2.0e-3 | deep convection precipitation production efficiency parameter
        k_e,conv     | 1.0e-6 | 1.0e-6 | 3.0e-6 | 3.0e-6 | convective precipitation evaporation efficiency parameter
        dif4         | N/A    | 1.0e15 | 1.0e16 | 2.0e16 | horizontal diffusion coefficient

        I can see why you don’t quote from this sucker.

        But all things considered, I do not find “tuning” in the sense most people understand it, but consideration of physics, standard physical constants, and other published literature. With evidence of changes from CAM 3.0, perhaps you could say that some of the parameter values were changed in light of model performance (“tuned” to a degree, at least, in a manner of speaking), but mostly this looks like thinking carefully and at length about the physics.
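
        For concreteness, the cloud-fraction relation quoted above can be evaluated directly with the Appendix C values. The following is a minimal sketch: only k_2 and the k_1 values come from the quoted documentation, while the mass flux M_c is an invented illustrative number (in CAM3 it would come from the convection scheme at a given model level).

        ```python
        # Evaluating the quoted CAM3 convective cloud fraction relation
        # (eqs. 4.170-4.171): cfrac = k1 * ln(1.0 + k2 * M_c).
        # K2 and the k1 values are from the documentation quoted above;
        # the mass flux M_c is an invented illustrative value.

        import math

        K2 = 500.0  # fixed constant from section 4.7

        # Adjustable, resolution/dycore-dependent parameters from Table C.1
        K1 = {
            "FV":  {"shallow": 0.04, "deep": 0.10},
            "T85": {"shallow": 0.07, "deep": 0.14},
        }

        def cloud_fraction(k1, mass_flux):
            """cfrac = k1 * ln(1.0 + k2 * M_c), per Xu and Krueger."""
            return k1 * math.log(1.0 + K2 * mass_flux)

        m_c = 0.01  # hypothetical convective mass flux at one model level
        for core, params in K1.items():
            for regime, k1 in params.items():
                print(f"{core:>3} {regime:<7} cfrac = {cloud_fraction(k1, m_c):.3f}")
        ```

        For the same mass flux, the FV and T85 settings give shallow-convection cloud fractions of about 0.072 and 0.125 respectively – which is the sense in which these are adjustable parameters rather than measured constants, whatever one calls the adjustment.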

      • Matthew R Marler | February 3, 2015 at 2:27 pm |

        I wish you would make up your mind on this point. So far, I have not found anything that looks like “tuning” of parameters.

        Ummm, really. This isn’t defensible.

        http://www.researchgate.net/publication/259539033_Efficient_screening_of_climate_model_sensitivity_to_a_large_number_of_perturbed_input_parameters

        The actual atmosphere doesn’t have a setting:
        “Stratiform precipitation evaporation efficiency”

        That is a model parameter for something that is wildly variable in the actual atmosphere. The chance of it being exactly 5.0e-6 in the atmosphere on a given day at a given point is very poor.

        If you overestimate the CO2 forcing you can compensate by adjusting some of the 27 or more model parameters to get reasonable output during the training period.

        The model parameters are by definition tuning parameters. If you change them the model output is different.

      • Matthew R Marler

        PA: Ummm, really. This isn’t defensible.

        Can you find where Rud Istvan’s specific claims (tuning to get the best 10, 20, and 30 year hindcasts, etc.) are supported in the references that he cites?

        The model parameters are by definition tuning parameters. If you change them the model output is different.

        The claim was that they had been specifically tuned to get the best hindcasts. The actuality seems to be that some were physical constants, others were based on reading of the physics in the published literature. That they might have been “tuned” or could have been “tuned” is not evidence in support of the claim that they were in fact tuned.

      • MRM, to a late and maybe dead thread,
        You cite code from NCAR CAM3. Bravo. Now look at the parameter constants in that cloud code. Not some lab-determined physical constant; rather, stuff like cloud fraction (observationally about 2/3), which still does not adequately express net cloud feedback since that also depends on:
        1. Cloud altitude
        2. Cloud type
        3. Cloud opacity
        QED.
        Dig ever deeper, and the wonders of post-modern climate science become ever clearer. The only issue on this thread was whether GCMs were ‘tuned’ via hindcast parameterization. Well, what say you now, having numerically identified some of those parameters in the code itself?

      • Matthew R Marler | February 3, 2015 at 6:33 pm |
        PA: Ummm, really. This isn’t defensible.

        The claim was that they had been specifically tuned to get the best hindcasts. The actuality seems to be that some were physical constants, others were based on reading of the physics in the published literature. That they might have been “tuned” or could have been “tuned” is not evidence in support of the claim that they were in fact tuned.

        Well gee, lets look at some of these physics based parameters:
        Minimum relative humidity for high stable cloud formation
        Cloud particle density over sea ice
        Initial cloud downdraft mass flux
        Evaporation efficiency parameter
        Minimum overshoot parameter
        Land vegetation roughness scaling factor

        Few of the parameters are based on physics. Some of them are synthetic parameters to make the model work. Some are constants representing atmospheric variables. The acceptable value range for some parameters is over an order of magnitude.

        If you wish to continue the “physical constants” or “based… on physics” claims please identify by name which parameters conform to your description.

      • Matthew R Marler

        Rud Istvan: The only issue on this thead was whether GCMs were ‘tuned’ via hindcast parameterization. Well, what say you know? Having numerically identified some of those parameters in the code itself?

        I think that your claim that those parameter values were tuned to get accurate hindcasts (Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design) is not supported in the references that you cited.

        I’ll repeat something from the overview: Treatment of cloud condensed water using a prognostic treatment (section 4.5): The original formulation is introduced in Rasch and Kristjánsson [144]. Revisions to the parameterization to deal more realistically with the treatment of the condensation and evaporation under forcing by large scale processes and changing cloud fraction are described in Zhang et al. [200].The parameterization has two components: 1) a macroscale component that describes the exchange of water substance between the condensate and the vapor phase and the associated temperature change arising from that phase change [200]; and 2) a bulk microphysical component that controls the conversion from condensate to precipitate [144].

        The references to the published literature support a claim that they were doing something other than “tuning” the parameter estimates to get better fits to extant data.

      • Matthew R Marler

        PA: Few of the parameters are based on physics. Some of them are synthetic parameters to make the model work. Some are constants representing atmospheric variables. The acceptable value range for some parameters is over an order of magnitude.

        I thought I had presented the entire overview, but I do not see it displayed, so I’ll put it here.

        1.2 Overview of CAM 3.0

        The CAM 3.0 is the fifth generation of the NCAR atmospheric GCM. The name of the model series has been changed from Community Climate Model to Community Atmosphere Model to reflect the role of CAM 3.0 in the fully coupled climate system. In contrast to previous generations of the atmospheric model, CAM 3.0 has been designed through a collaborative process with users and developers in the Atmospheric Model Working Group (AMWG). The AMWG includes scientists from NCAR, the university community, and government laboratories. For CAM 3.0, the AMWG proposed testing a variety of dynamical cores and convective parameterizations. The data from these experiments has been freely shared among the AMWG, particularly with member organizations (e.g. PCMDI) with methods for comparing modeled climates against observations. The proposed model configurations have also been extensively evaluated using a new diagnostics package developed by M. Stevens and J. Hack (CMS). The consensus of the AMWG is to retain the spectral Eulerian dynamical core for the first official release of CAM 3.0, although the code includes the option to run with semi-Lagrange dynamics (section 3.2) or with finite-volume dynamics (FV; section 3.3). The addition of FV is a major extension to the model provided through a collaboration between NCAR and NASA Goddard’s Data Assimilation Office (DAO). The AMWG also has decided to retain the Zhang and McFarlane [199] parameterization for deep convection (section 4.1) in CAM 3.0.

        The major changes in the physics include:

        Treatment of cloud condensed water using a prognostic treatment (section 4.5): The original formulation is introduced in Rasch and Kristjánsson [144]. Revisions to the parameterization to deal more realistically with the treatment of the condensation and evaporation under forcing by large scale processes and changing cloud fraction are described in Zhang et al. [200].The parameterization has two components: 1) a macroscale component that describes the exchange of water substance between the condensate and the vapor phase and the associated temperature change arising from that phase change [200]; and 2) a bulk microphysical component that controls the conversion from condensate to precipitate [144].
        A new thermodynamic package for sea ice (chapter 6): The philosophy behind the design of the sea ice formulation of CAM 3.0 is to use the same physics, where possible, as in the sea ice model within CCSM, which is known as CSIM for Community Sea Ice Model. In the absence of an ocean model, uncoupled simulations with CAM 3.0 require sea ice thickness and concentration to be specified. Hence the primary function of the sea ice formulation in CAM 3.0 is to compute surface fluxes. The new sea ice formulation in CAM 3.0 uses parameterizations from CSIM for predicting snow depth, brine pockets, internal shortwave radiative transfer, surface albedo, ice-atmosphere drag, and surface exchange fluxes.
        Explicit representation of fractional land and sea-ice coverage (section 7.2): Earlier versions of the global atmospheric model (the CCM series) included a simple land-ocean-sea ice mask to define the underlying surface of the model. It is well known that fluxes of fresh water, heat, and momentum between the atmosphere and underlying surface are strongly affected by surface type. The CAM 3.0 provides a much more accurate representation of flux exchanges from coastal boundaries, island regions, and ice edges by including a fractional specification for land, ice, and ocean. That is, the area occupied by these surface types is described as a fractional portion of the atmospheric grid box. This fractional specification provides a mechanism to account for flux differences due to sub-grid inhomogeneity of surface types.
        A new, general, and flexible treatment of geometrical cloud overlap in the radiation calculations (section 4.8.5): The new parameterizations compute the shortwave and longwave fluxes and heating rates for random overlap, maximum overlap, or an arbitrary combination of maximum and random overlap. The specification of the type of overlap is identical for the two bands, and it is completely separated from the radiative parameterizations. In CAM 3.0, adjacent cloud layers are maximally overlapped and groups of clouds separated by cloud-free layers are randomly overlapped. The introduction of the generalized overlap assumptions permits more realistic treatments of cloud-radiative interactions. The parameterizations are based upon representations of the radiative transfer equations which are more accurate than previous approximations in the literature. The methodology has been designed and validated against calculations based upon the independent column approximation (ICA).
        A new parameterization for the longwave absorptivity and emissivity of water vapor (section 4.9.2): This updated treatment preserves the formulation of the radiative transfer equations using the absorptivity/emissivity method. However, the components of the absorptivity and emissivity related to water vapor have been replaced with new terms calculated with the General Line-by-line Atmospheric Transmittance and Radiance Model (GENLN3). Mean absolute differences between the cooling rates from the original method and GENLN3 are typically 0.2 K/day. These differences are reduced by at least a factor of 3 using the updated parameterization. The mean absolute errors in the surface and top-of-atmosphere clear-sky longwave fluxes for standard atmospheres are reduced to less than 1 W/m^2. The updated parameterization increases the longwave cooling at 300 mb by 0.3 to 0.6 K/day, and it decreases the cooling near 800 mb by 0.1 to 0.5 K/day. The increased cooling is caused by line absorption and the foreign continuum in the rotation band, and the decreased cooling is caused by the self continuum in the rotation band.
        The near-infrared absorption by water vapor has been updated (section 4.8.2). In the original shortwave parameterization for CAM [27], the absorption by water vapor is derived from the LBL calculations by Ramaswamy and Freidenreich [140]. In turn, these LBL calculations are based upon the 1983 AFGL line data [152]. The original parameterization did not include the effects of the water-vapor continuum in the visible and near-infrared. In the new version of CAM, the parameterization is based upon the HITRAN2k line database [153], and it incorporates the CKD 2.4 prescription for the continuum. The magnitude of errors in flux divergences and heating rates relative to modern LBL calculations have been reduced by approximately seven times compared to the old CAM parameterization.
        The uniform background aerosol has been replaced with a present-day climatology of sulfate, sea-salt, carbonaceous, and soil-dust aerosols (section 4.8.3). The climatology is obtained from a chemical transport model forced with meteorological analysis and constrained by assimilation of satellite aerosol retrievals. These aerosols affect the shortwave energy budget of the atmosphere. CAM 3.0 also includes a mechanism for treating the shortwave and longwave effects of volcanic aerosols. A time history for the mass of stratospheric sulfuric acid for volcanic eruptions in the recent past is included with the standard model.
        Evaporation of convective precipitation (section 4.1) following Sundqvist [169]: The enhancement of atmospheric moisture through this mechanism offsets the drying introduced by changes in the longwave absorptivity and emissivity.
        A careful formulation of vertical diffusion of dry static energy (section 4.11).

        Other major enhancements include:

        A new, extensible sea-surface temperature boundary data set (section 7.2): This dataset prescribes analyzed monthly mid-point mean values of SST and ice concentration for the period 1950 through 2001. The dataset is a blended product, using the global HadISST OI dataset prior to 1981 and the Smith/Reynolds EOF dataset post-1981. In addition to the analyzed time series, a composite of the annual cycle for the period 1981-2001 is also available in the form of a mean “climatological” dataset.
        Clean separation between the physics and dynamics (chapter 2): The dynamical core can be coupled to the parameterization suite in a purely time split manner or in a purely process split one. The distinction is that in the process split approximation the physics and dynamics are both calculated from the same past state, while in the time split approximations the dynamics and physics are calculated sequentially, each based on the state produced by the other.

        I don’t find any support in that document for Rud Istvan’s claim that the parameters had been “tuned” to get the best hindcasts. What I find supports the idea that they were trying to “get the physics right”.

      • Matthew R Marler

        PA: If you wish to continue the “physical constants” or “based… on physics” claims please identify by name which parameters conform to your description.

        In response I quoted the entire Overview, but it is not displaying, so I’ll just put the link: http://www.cesm.ucar.edu/models/atm-cam/docs/description/node8.html

        I think it is clear that they were trying to “get the physics right”.

      • Hope this thread is right. Yup. Read book, get back. Otherwise too many individual rebuttals without footnotes.

      • Matthew R Marler

        Rud Istvan: All GCMs are parameterized. Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485–498 (2012), open access online. For parameterization, see the technical documentation for NCAR CAM3, available free online as NCAR/TN-464+STR (2004). Or read the essay “Models all the way Down” in the ebook Blowing Smoke.

        I have read those documents now, and I have found nothing that supports your claim that the parameter sets were selected to give the best 10, 20, and 30 year hindcasts. I have quoted extensively from the technical documentation contradicting your claim.

        You want I should buy your book as a substitute for the primary literature? Perhaps you could give it to me? That’s just as reasonable if you cannot supply us with even one quote from the technical documentation that supports the claim that you posted here.

      • Matthew R Marler

        A quick comment on my attitude, in case anyone is still reading. I am suspicious that the free parameters may have been tuned to get good performance on extant data, at least in some vague fashion other than least-squares estimation or another estimation algorithm. However, modelers deny that, and I can’t automatically judge all of them to be liars. For the specific claim made by Rud Istvan, challenged by Steven Mosher and repeated several times by me, I can not find Rud Istvan’s claim supported in the documents that he cited, from which I quoted extensively. So I am in a kind of limbo.

      • Let’s just stop the arguing and put it into technical terms: ‘Parameter settings and scope of differing runs are chosen to suit the required investigative experiments about to be done.’

      • Parameterization.

        In the real world of engineering we call those model parameters “fudge factors”. And do we change them in order for the model to perform better? It would be kind of stupid if we didn’t. Sometimes we have to invent fudge factors; even Einstein had to invent something called the cosmological constant. Empiricism rules. Toy models drool.

        http://blog.ridingtohellinahandbasket.com/content/images/2014/Oct/einstein-duh.jpg

      • Rud Istvan | February 2, 2015 at 12:56 pm |

        “anyone who says models aren’t tuned does not know what they are talking about”

        Yup.

        http://img1.wikia.nocookie.net/__cb20110412235023/uncyclopedia/images/1/15/Capt._Obvious.jpg

      • Marler, with a seemingly serious intent, writes:

        But all things considered, I do not find “tuning” in the sense most people understand it, but consideration of physics, standard physical constants, and other published literature.

        Most people understand tuning as adjusting the tension of a string on a guitar so it makes the proper musical note when plucked. Or maybe, for older folks, turning the dial on a radio to get the station you want. Stay tuned to this station!

        So Marler, even you might be able to imagine the need in a climate model for a tunable parameter called albedo. The earth’s average albedo isn’t fixed; it isn’t well understood how/why/when it changes, how much it changes, or what its precise value is at any given time.

        See here for more background on this very important fudge factor.

        http://www.atmos.umd.edu/~zli/PDF_papers/94JD00225_Albedo.pdf

        “Estimation of Surface Albedo From Space: A Parameterization for Global Application”

        Given that global albedo estimates are generally 35% ± 3%, what value does the modeler choose for it? After all, we’re talking about an uncertainty range of 6% of the average solar constant: 340 W/m2 × 0.06 = 20.4 W/m2. ALL human-attributable forcings are estimated to be a mere 3.5 W/m2, so the uncertainty in how much solar energy is rejected by reflection, without ever affecting global climate, is roughly 6 times the total alleged human influence on climate.
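
        The arithmetic itself is easy to check (a quick sketch; the albedo band and the forcing figure are the ones asserted above, not settled values):

        ```python
        # Checking the arithmetic above. The 35% +/- 3% albedo band and the
        # 3.5 W/m^2 total anthropogenic forcing are the commenter's figures.

        mean_insolation = 340.0   # W/m^2, solar constant averaged over the sphere
        albedo_band = 0.06        # full width of the +/- 3% uncertainty band
        human_forcing = 3.5       # W/m^2, total anthropogenic (figure quoted above)

        reflected_uncertainty = mean_insolation * albedo_band
        print(reflected_uncertainty)                  # 20.4 W/m^2
        print(reflected_uncertainty / human_forcing)  # ~5.8, i.e. roughly 6x
        ```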

        Are you getting the picture, now Matthew? Listen to Rud. Ignore Mosher.

      • Matthew R Marler

        David in TX: Are you getting the picture, now Matthew? Listen to Rud. Ignore Mosher.

        I am open to the possibility that the models may have been tuned. The specific claims made by Rud Istvan are not supported in the documents that he cited. On this point Steven Mosher is correct.

        I am also open to the possibility that anthropogenic CO2 may be warming the Earth surface and atmosphere. Many of the specific claims made by the proponents of the theory of AGW are not supported by the science.

        Do you not understand that specific claims require adequate support?

      • Steven Mosher

        Rud is wrong again.
        And springer gets the assist on the own goal.

        I suggest that Rud actually download a GCM.
        I suggest he join the user group.
        I suggest he plow through code. I started in 2007.
        I suggest he look at metadata requirements for
        CMIP submission to find some things he thinks there is no record of.

        I mean something very specific by tuning.
        The systematic adjusting of uncertain parameters to force a match between model outputs and observations.

        If someone has 32 knobs to turn and tweaks a couple
        to match 20% of the observations… that’s not tuning. It’s not curve fitting. Calibration might be a better term. Or BFM.

        If someone has 32 knobs and they systematically turn them to force an agreement between the model and the entire time series of observations… then they have tuned, or fit the curve.

        You think they tuned?
        The hindcast isn’t that good.

        Very simple point
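
        In code terms, the distinction reads something like the sketch below. The “model”, the knobs, and the “observations” are invented two-knob stand-ins, there purely to make the definition operational: tuning in the strict sense is the systematic scan against the full record, calibration the partial tweak against a slice of it.

        ```python
        # Toy sketch of tuning in the strict sense defined above: systematically
        # adjusting uncertain parameters to force a match between model output
        # and observations. Model, knobs, and obs are invented stand-ins.

        import itertools
        import math

        obs = [0.1 * t + 0.3 * math.sin(t) for t in range(30)]  # stand-in observations

        def toy_model(trend, amp):
            """A two-knob stand-in for a model with uncertain parameters."""
            return [trend * t + amp * math.sin(t) for t in range(30)]

        def rmse(sim, ref):
            return math.sqrt(sum((s - r) ** 2 for s, r in zip(sim, ref)) / len(ref))

        # "Tuning": a systematic scan of the whole knob space against the full record.
        grid = list(itertools.product([0.05, 0.10, 0.15], [0.0, 0.3, 0.6]))
        best = min(grid, key=lambda p: rmse(toy_model(*p), obs))
        print("tuned (trend, amp):", best)  # -> (0.1, 0.3), an exact fit here

        # "Calibration": tweak one knob against a short slice of the record.
        slice_obs = obs[:6]
        best_amp = min([0.0, 0.3, 0.6], key=lambda a: rmse(toy_model(0.05, a)[:6], slice_obs))
        print("calibrated amp with trend fixed at 0.05:", best_amp)
        ```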

      • Steven Mosher

        I like Springer’s guitar example.
        To tune a guitar you twist a knob until the note produced matches a reference note.

        Look at GCM output, a single run.
        Compare it to the reference.
        See how frickin out of tune it is.

        It would be like Springer passing you a guitar that was off by half a note and then you accused him of tuning it.

        Well, he played with the knob. Wouldn’t be the first time.

      • The blog owner, who happens to be a PhD climate scientist, former chair of the School of Earth and Atmospheric Sciences at Georgia Tech, with published papers dealing with climate modeling, disagrees with Steven Mosher, who has zero professional credentials in any field of science.

        http://judithcurry.com/2013/07/09/climate-model-tuning/

        JC comment: This paper is indeed a very welcome addition to the climate modeling literature. The existence of this paper highlights the failure of climate modeling groups to adequately document their tuning/calibration and to adequately confront the issues of introducing subjective bias into the models through the tuning process.

        Tuning/calibration is unavoidable in a complex nonlinear coupled modeling system. The key is to document the tuning, both the goals and actual calibration process, in the manner in which the German climate modeling group has done.

        Stop pretending to be an expert, Mosher. You’re not going to win this argument by redefining what is understood as model tuning by professional experts in the art. If your goal is to look like an uninformed, argumentative piker, then mission accomplished.

      • oops, wrong thread:
        Nice find David in Tx. I forgot to search here first again, dang!

      • http://www.gfdl.noaa.gov/news-app/story.78/title.cloud-tuning-in-a-coupled-climate-model-impact-on-20th-century-warming

        Climate models incorporate a number of adjustable parameters in their cloud formulations that arise from uncertainties in cloud processes. These parameters are tuned to achieve a desired radiation balance and to best reproduce the observed climate.

        Oh dear. Another august source that disagrees with the ill-informed piker.

      • Steven Mosher | February 5, 2015 at 3:32 pm |

        Look at gcm output a single run.
        Compare it to the reference.
        See how frickin out of tune it is.

        Haha. Obviously you fail to recall the old sayings “you can tune up a pig, but it’s still a pig” and “you can’t tune a sow’s ear into a silk purse”.

        You don’t think that if someone could come up with a set of tunable parameters in credible ranges that better matched the observed climate, they wouldn’t do it? You want us to believe that it’s for lack of trying by about a million climate scientists, who’d win a Nobel if they found the magic model that could faithfully reproduce the observed climate? One would have to be an incredibly dense knob to swallow that line of BS. Of course that doesn’t rule out you believing it…

      • Matthew R Marler

        David in TX, from the same blog post of Prof Curry that you cited:

        In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

        To me, that is different from the specific claim of Rud Istvan that the parameters had been tuned to get the best-fitting 10-year, etc., hindcasts. And as described on the overview page of the specific document that he cited in support, the “tunable” parameters (i.e. the parameters other than the “physical constants”) had values selected based on other published papers about the physics being modeled.

        And even the quoted text is an inference, not citing any descriptions of any actual tuning of any particular set of parameters for any particular model by any particular group.

      • Matthew R Marler

        David in TX, from the gfdl.noaa web page that you linked: We investigate the impact of uncertainties in cloud tuning in CM3 coupled climate by constructing two alternate configurations (CM3w, CM3c). They achieve the observed radiation balance using different, but plausible, combinations of parameters. The present-day climate is nearly indistinguishable among all configurations. However, the magnitude of the indirect effects differs by as much as 1.2 Wm−2 , resulting in significantly different temperature evolution over the 20th century (Figure below). CM3 predicts a warming of 0.22 °C. CM3w, with weaker indirect effects, warms by 0.57 °C, whereas CM3c, with stronger indirect effects, shows essentially no warming. CM3w is closest to the observed warming (0.53-0.59 °C).

        They provided several sets of “tuned” parameters in order to explore the quantitative uncertainties in the projections of cloud cover changes. They did not tune the parameters to achieve the best fit to particular sequences of data, as asserted by Rud Istvan.

      • ‘Tuning’, ‘fitting’ – semantics.

        At the end of the day I prefer ‘fudge factor’.

        http://www.cesm.ucar.edu/events/workshops.html

        Feel free to sift through the presentations. I think you will find half or more of the presentations are about toying with parameterizations.

        I just remember sitting through a few of these. Let me recap the average gist:

        “We played with our parameterization, fiddled with our parameters and ran the model. These values gave the best agreement (with data or other models). However, this bias reared its ugly head. So we constrained the solution to fit the other model everywhere except in this area and played with the parameter. This was what we found fit best. We got pretty good agreement.”

        A few years of these conferences and one might understand my derision for concluding anything from these models.

      • I lost track of this argument a couple of days ago and don’t feel like reading all this again, but hopefully Matt will indulge me with a reply to my question re his statement:

        “They did not tune the parameters to achieve the best fit to particular sequences of data, as asserted by Rud Istvan.”

        Why did they tune the parameters?

      • You can tune a piano, but you can’t tune a climate model?

      • Well, my takeaway from this conversation is as follows:

        It’s only tuning if you get it dead right. But tuning is a BAD THING and so of course no real scientists do it.

        If you don’t get it dead right it’s called parameterisation, which sounds so much more sciencey and of course is a GOOD THING. All real scientists do it.

      • Matthew R Marler

        Don Monfort: Why did they tune the parameters?

        They tuned the parameters whose values were not published in standard texts and tables. Parameters whose values have been well studied and measured are called “physical constants”, such as the density and specific heat of dry air at STP. Others were updated in response to publications.

      • Matthew R Marler

        nickels: Feel free to sift through the presentations. I think you will find half or more of the presentations are about toying with parameterizations.

        Something that Rud Istvan did not write is certainly supported by presentations that he did not cite.

      • Thanks, Matt. I reviewed some of the discussion and found that Rud had recharacterized the comment that started the argument:

        “The only issue on this thread was whether GCMs were ‘tuned’ via hindcast parameterization.”

        Are you saying that the reference he gave doesn’t prove that, or that GCMs are never ‘tuned’ via hindcast parameterization?

      • Matthew R Marler

        Don Monfort: Are you saying that the reference he gave doesn’t prove that,

        In the references that he gave, I did not find support for his claim that the models had been tuned to give the best 10, 20, and 30 year hindcasts. The excerpts that I copied and pasted here suggest to me a more thoughtful and physics-oriented approach to parameterisations, including the citations of other published papers.

        It will be interesting to see whether, and how, the modeling community responds to Greg Goodman’s analysis of the Mt Pinatubo eruption in the post of today.

      • Thanks, Matt. I get your point. But I am not sure that a thoughtful and more physics-oriented approach doesn’t involve trying a lot of iterations of plausible tunings of various parameters to get the best 10-, 20-, and 30-year hindcasts.

        Did you watch the video on HadCM3 that Mosher recommended?

        Yes, it will be interesting to see if there is a reaction from the goon squad to Greg’s work, and also to Nic Lewis’s deconstruction of the latest excuse for the pause over on CA. My guess is they will endeavor with all their sciency might to studiously ignore the unpleasantness.

    • Engagements become more strained when one side of a dialog opts to silence the other. Like when the duct tape is being placed over one’s mouth.

      • And they take off the duct tape only to call in the men in white coats to analyze the poor soul’s illness.

    • Joshua, I would say that both are true. The biases and shortcomings are routinely discussed in the technical literature on the models but ignored by the CAGW advocates. Note that the technical discussions are very technical, hence very difficult to convey to non-experts.

      • David –

        ==> “but ignored by the CAGW advocates. ”

        When you use such undefined terms and make such broad characterizations based on them, I don’t know how to proceed.

        I could easily say that recognition of the shortcomings in the technical literature is ignored by anti-mitigation advocates.

        That does not seem like a way forward to me, but IMO, resembles identity-aggressive/identity-defensive behaviors.

        What do you think of the author’s proposed alternative strategies?

      • Thus the need for the executive summary for mass consumption (AP wire).

      • Joshua, can you please give chapter and verse in the IPCC summaries where these “huge biases and ignorance of potentially important mechanisms” have been “routinely and dutifully reported”?

      • Steven Mosher

        In general the modeling experts do a better job than skeptics of criticizing the models.

        The difference is tone.

        Expert: This is a problem in the model.
        Skeptic: It’s a fraud, a disaster, destruction of the scientific method, a travesty; they are ignorant gold-digging socialists.

      • Eli said the same thing.

      • Steven Mosher : “In general the modeling experts do a better job than skeptics of criticizing the models.”

        I am both a sceptic and someone who has been paid to develop computer models – for financial purposes, admittedly.

        I can assure you that if I had handed those who invested in my expertise some schtick about how the models couldn’t be expected to bear any resemblance to reality, they would certainly not have paid me.

        But there again, I work in the real world, so the criteria are vastly different.

        You want to try it some time.

    • You seem to have a structural conception of how skeptics will behave. Almost like a model.

    • Rud is definitely correct, and anything in the ‘Model Physics’ chapter is probably fair game for something that is determined experimentally and is basically a ‘fudge factor’.
      But I can’t really help MM because I don’t know anything about the fitting process. I suspect if you have the time to follow the various references in chapter 4 of the CAM manual you will find the fitting procedure.

      The one thing I do know is that these ‘fudge factors’ are not grid-independent and must be recalculated when the grid size changes. Which leads me to believe they are true ‘fudge factors’ and do not have any actual units.

      I know this doesn’t really help, but having sat through enough CCSM (what it used to be called) talks, I know Rud is correct.

      • For instance, here is a discussion of tuning the physics package for a different grid resolution…

        http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.422.141&rep=rep1&type=pdf

      • Another paper discussing comparisons of CAM cloud physics parameterizations with field data and short range weather forecasts.

        “Testing cloud microphysics parameterizations in NCAR CAM5
        with ISDAC and M-PACE observations”

        Not saying this answers MM’s questions, but it will hopefully give at least a taste of the hocus-pocusery that goes on in these models with their various ‘fudge factors’.

        http://www.ecd.bnl.gov/pubs/BNL-96872-2012-JA.pdf

        “Given the uncertainty in representing various cloud processes in climate models…”

        “MG08 includes the treatment of subgrid cloud variability for cloud liquid water by assuming that the probability density function (PDF) of in-cloud liquid water follows a gamma distribution function.”

        “subgrid variability of cloud liquid”

        “The temperature of homogeneous freezing of rain was changed from −40°C in the original MG08 scheme to −5°C in the released version of CAM5 in order to improve the Arctic surface flux and sea ice in the coupled climate simulations [Gettelman et al., 2010]. We note that this change has no physical basis,…”

        “There are still large uncertainties in the mechanisms of ice nucleation,…”

        “…partially related to the subgrid-scale dynamics that are not resolved in large-scale models…”

        “The modeled cloud fraction, phase and spatial distribution of cloud condensates have a significant impact on modeled radiative fluxes…”

        “In general, CAM5 underpredicts the observed downwelling SW flux by up to 100 W m−2…”

        “Lognormal fitting parameters for the best estimate aerosol particle size distribution are given in Table 1…”

        “There is a threshold size separating cloud ice from snow (Dcs), which is largely a tuning parameter…”

        “CAM5 severely underestimates aerosol optical depth (AOD) by a factor of 5–10…”

        I mean, look. I don’t know squat about cloud microphysics, and modelling them as outlined in this paper is truly fascinating and interesting work.

        However, I know enough reading this paper to scoff at the statement:
        “The science is settled….”

        http://www.ecd.bnl.gov/pubs/BNL-96872-2012-JA.pdf
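
        A minimal sketch of the quoted MG08 assumption, a gamma PDF for subgrid in-cloud liquid water. The shape, mean, and rate-law exponent below are invented for illustration and are not the paper’s actual choices:

        ```python
        import numpy as np
        from scipy.stats import gamma
        from scipy.integrate import trapezoid

        # Hypothetical shape and in-cloud mean; MG08's actual choices are not given here.
        shape, mean_lwc = 2.0, 0.3       # mean liquid water content, g/m^3 (illustrative)
        scale = mean_lwc / shape         # gamma mean = shape * scale

        lwc = np.linspace(1e-4, 1.5, 500)
        pdf = gamma.pdf(lwc, a=shape, scale=scale)

        # A grid box carries only the mean; the assumed PDF restores subgrid variability
        # for process rates that are nonlinear in liquid water (illustrative exponent).
        rate_with_pdf = trapezoid(pdf * lwc**2.47, lwc)
        print(f"rate with subgrid PDF: {rate_with_pdf:.4f}   rate at the mean: {mean_lwc**2.47:.4f}")
        ```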

    • Nice find David in Tx. I forgot to search here first again, dang!

  6. Pingback: The Climate-Modeling Paradigm | Transterrestrial Musings

  7. I would like to know how many climate papers rely on the assumption that climate models are correct as a starting point. Very likely over half the entire science would be null if we accept the truth that these models are, in fact, not right.

    • nickels –

      ==> “Very likely over half the entire science would be null if we accept the truth that these models are, in fact, not right.”

      Is that what you get from these excerpts – that the author thinks that “the truth” is that these models are, in fact, not right – beyond in the sense that it is “true” that all models are wrong?

      • Models are cool, but wrong.
        So any paper that runs models and draws conclusions from their output will be wrong (although possibly interesting).
        So the question is simply: if we look at the literature, how many papers does this make interesting but irrelevant…

      • Joshua

        The author thinks that the models are insufficiently accurate to reach meaningful conclusions about future conditions. Therefore, scientists who have written assessments that future conditions will be significantly worse as a result of higher levels of CO2 and resulting temperature increases have reached conclusions based on poor models, and their assessments are generally invalid.

      • (Not talking, of course, about papers that use models to examine structural and sensitivity issues.
        I’m talking about papers that use model output as predictions….)

      • Rob –

        It is an interesting post based on an interesting paper. I’ve seen the issues discussed before, and based on seeing smart and knowledgeable people weigh in on different sides of these issues, I’m not inclined to use this one author’s opinion as dispositive.

        At any rate, what do you think of the author’s suggested alternative strategies to cope with climate related risks?

      • Joshua
        “what do you think of the author’s suggested alternative strategies to cope with climate related risks?”

        Some of the ideas seem like they make sense, but something making sense does not mean it will be implemented.

        The construction and maintenance of good infrastructure is the #1 thing that can be done to avoid damage from adverse weather. A very large portion of the world puts very little priority on this issue. It simply doesn’t matter much what else is done if nations don’t do that.

      • Rob –

        => “Some of the ideas seem like they make sense, but something making sense does not mean it will be implemented.”

        That’s interesting – because I think that a lot of what the author discusses closely resembles what I have discussed with you many times – where I talk about the implicitly subjective simplifications in the “mental modeling” that is unconsciously confused with reality – only for you to tell me that it makes no sense.

        At any rate, “surprise-free scenario-planning” is pretty much what I suggest in these threads on a regular basis – only to get in return all sorts of derision and pejoratives.

        It’s a good thing I consider all that dreck to be based on implicitly subjective simplifications in my interlocutors’ mental modeling, or my feelings might get hurt.

        :-)

      • @nickels says

        ‘Models are cool, but wrong’.

        My understanding is the opposite. They are hot and wrong.

        Mother Gaia is stubbornly refusing to cooperate with their high-end Thermogeddonist predictions and has sat on her hands for fifteen years or so.

      • Rob –

        From the paper:

        Scenario planning, like other frameworks that aim to guide decision makers to think about (deep) uncertainties (e.g. Cash et al. 2003; Tang and Dessai 2012), stresses the importance of user involvement during the entire development phase.

        If I had a quarter for every time I’ve said essentially that in these threads (only to be insulted in return by my much beloved “denizens”), I could…well…go out for a steak dinner tonight (if the plow ever arrives)….

      • “They are hot and wrong.”
        Ha!
        I was thinking, as an example, about one of the papers here a few months back that did a bunch of model runs to try and determine when we would see the CO2 signal pull away from the ‘background’ signal.
        What possible sense could such a paper have if it’s using models as the underlying driver?

      • Joshua

        ““surprise-free scenario-planning” is pretty much what I suggest”

        And I suggest what you have written is meaningless in regards to the implementation of policy. Just because something is possible does not mean that people take actions in response.

        Limited resources force decisions to be made regarding priorities.

      • Rob –

        ==> “And I suggest what you have written is meaningless in regards to the implementation of policy. ”

        Well, he speaks to some of the practical limitations, and offers some alternatives. I’m not in complete agreement with his analysis.

        At any rate…

        ==> “Just because something is possible does not mean that people take actions in response.”

        Right. Some people are absolutely insistent on not taking action (until unrealistic criteria – based on subjective, simplistic mental modeling that doesn’t reflect reality – are met).

        ==> “Limited resources force decisions to be made regarding priorities.”

        I love how one day, conz accuse libz of “limited pie” thinking and then the next day talk about how the pie is limited.

      • Joshua

        I have been consistent in what I have written. I evaluate your position based on what you write – not because there are other alarmists who write unsupported positions. The US needs to seek to balance its budget. That needs to be taken into consideration when evaluating realistic alternatives.

        A poor model provides very little useful information. I don’t see how the outputs of the current GCMs are any better than merely looking at historical records and taking into account expected population changes.

        Joshua – how about writing specifically what policies you think the US should enact in response to AGW in the next 5 years?

      • As someone who has been engaged with discussions of policy, I can say that Joshua has been consistent in wanting scenario planning to include broad based user involvement during the planning phase. He and I don’t always agree on policy, but both of us have agreed to broad based comprehensive planning.

      • John writes- “both of us (he and Joshua) have agreed to broad based comprehensive planning.”

        Can you think of anyone against such planning? Isn’t the issue that there are limited resources to fund things and tough choices have to be made?

      • I confess I typically jump over your comments, but I don’t recall you ever writing anything similar to that.

      • Joshua,

        You write:-
        “Scenario planning, like other frameworks that aim to guide decision makers to think about (deep) uncertainties (e.g. Cash et al. 2003; Tang and Dessai 2012), stresses the importance of user involvement during the entire development phase.
        … every time I’ve said essentially that ”

        I don’t think you ever have said that. The users are the politicians who re-write the IPCC SPMs every time. Not little us. We’re just fodder … And climate scientists are just the doers ….

    • Nickels

      About as many as assume the data they use as a starting point is correct.

      Tonyb

    • “I’m not inclined to use this one author’s opinion as dispositive.”

      I’d say that’s wise, Josh. And yet none of this is new. The scary predictions we read about on a regular basis in the NYT are generally model-based.

      Yet the models do not seem to be performing well…to be polite.

      Therefore:……..

      I’ll let you fill in the blanks

    • Heh, ‘state-of-the-art fully coupled AOGCMs do not provide
      independent evidence for human-induced climate change.’
      Artful minxes!

      What Nassim Taleb calls the nerd effect – failing to take
      into account external uncertainty, the unknown unknowns –
      ‘stems from the mental elimination of off-model risks or
      focusing on what you know. You view the world from within
      a model.’

      The Black Swan. Ch 10.

  8. Judith, your comment about such a thesis probably not being possible in the US is very worrisome. ‘Political correctness’ has been a large and growing problem in the social sciences for decades. See, for example, the NAS report to the University of California regents at http://www.nas.org/images/documents/A_Crisis_of_Confidence.pdf
    That it has intruded into earth sciences is not good at all. Eisenhower’s farewell address did not only warn about the military-industrial complex; it warned about the dangers of research becoming politicized by heavy dependence on federal funding. And now here we are, with the federal climate gravy train producing Climategate, and such severe PC that Soon and Briggs had to go to the Chinese Academy of Sciences to get their irreducibly simple sensitivity model (an equation with just 5 parameters) published.

    • Agree! But your link doesn’t seem to work for me.

    • Got it. Just read the Intro. Looks really interesting.

      I’m glad I’ve got my blood pressure under control.

    • Rud, I have read through most of the NAS report. I find it to be a compelling document. Thanks for pointing it out.

      Has there been any response from the UC Regents? I would suspect that most of them didn’t even read it and dismissed it as the ravings of a bunch of conservatives. The politicization and radicalization of our education system is a particularly insidious and difficult nut to crack.

      • Not to my knowledge. But for sure the problem is not limited to U. Cal. Look at Oreskes’ hire by Harvard out of that corrupted system.

        It slow-poisons the next generation, as Judith suggested. My daughter experienced it personally. But she learned much from the experience. My hope is that many others might, also.

    • Rud

      Mark found the correct link. I just wanted to say that I actually lived through a major tipping point in the politicization of the university in the early 1980s. By that time, faculty tenure committees routinely selected candidates of a certain ideology in the social sciences. I don’t know if that was true in the physical sciences – I think it was not. I was disappointed by the absence of true political and philosophical debate that I had naively expected at a university. The thought and speech police were everywhere; it was oppressive. I heard an interview of the author in which he stated that he thinks most students are careerists who keep their heads low and avoid controversial topics – they go along to get along.

  9. Willis Eschenbach

    I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.

    I gotta say, that’s one of the saddest and truest comments I’ve ever read in this entire climate discussion … somewhere, Einstein is weeping.

    w.

  10. The natural resource of the US was its people’s faith in the ethic of Americanism. It’s the people’s faith that created the wealth and, with it, a government-education complex that has now become an institutional curse.

      • It is the Academia/UN alliance, a wedding of ideals grounded in their mutual opposition to the despised, foundational principles of Americanism (a respect for individual liberty and the need for personal responsibility, guided by a Judeo-Christian heritage and the liberal philosophy of the founders – ideals now branded by the Left as conservative).

  11. “it might be questioned whether the state-of-the-art fully coupled AOGCMs really are the only credible tools in play”. Wrong question. As I and others have explained a number of times here and at wattsupwiththat, the GCMs are not credible as climate models. They try to replicate climate using small space-time slices, which means that they aren’t climate models at all. They are simply weather models, and not very good ones. The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model. Even the good weather models can only predict a few days ahead. A real climate model would not have small space-time slices, but it would have coded the things that drive climate: orbit, solar activity, GCRs, clouds, GHGs, ocean oscillations, atmospheric circulations, etc. Unfortunately, most of those things are not yet understood, so a reasonably functional climate model is not yet possible.
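
    The claimed exponential error build-up is easy to demonstrate on a toy chaotic system. A minimal sketch using the Lorenz-63 equations; nothing here is a GCM, it only illustrates sensitive dependence on initial conditions:

    ```python
    import numpy as np

    # Lorenz-63: the standard toy for sensitive dependence on initial conditions.
    def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # agrees to nine decimal places

    for n in range(1, 3001):
        a, b = step(a), step(b)
        if n % 500 == 0:
            print(f"t = {n * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.2e}")
    # The separation grows roughly exponentially until it saturates at the size of
    # the attractor; the usual counter-argument is that attractor STATISTICS (the
    # "climate") may remain meaningful even after the pointwise forecast is lost.
    ```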

    • Steven Mosher

      “The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model. ”

      Wrong.

      All GCMs are able to predict the climate.
      every last one of them.
      We can even test the predictions.
      Look, the predictions are high.

      One way to use this is simple.

      If I have a model that starts in 1850 and predicts 2014 a little bit high
      what can I do?
      What can I say about its prediction in 2100?
      Suppose it predicts 3C in 2100.

      Well, that’s information. I might say that +3C is an upper boundary.
      Unless of course you want to argue that I should plan for MORE than 3C of warming.

      And on the other hand I might just run a statistical model and it would say
      1C of warming.

      So I have one approach that is biased high, and another approach that will miss any acceleration.

      Personally, I’d be happy using the high-side estimate. Don’t tell me I can’t, because I use high-side estimates every day in business. Every fricking day I use a model that is wrong to the high side. If my boss suggests a decision that is above the high-side model, I remind him that my best model, which is always biased high, is below the figure he wants to use.

      GCMs can predict.
      GCMs do predict.
      Whether or not you can use the prediction is a pragmatic decision.

      Pragmatic decisions cannot be made in a vacuum.
      They are made in a space where alternatives are offered.

      Unless you have an alternative, you don’t get to play the game.
      sorry.

      • I can ask the politicians to stop funding their game. They can go back to modelling the climate in the pub after a few pints of ale.

        My supervisor used to tell the research group that if we couldn’t explain the importance of our work to a truck driver in a honky-tonk bar in Texas, then we should ask ourselves why we were doing it and why said truck driver should fund it.

      • Software has to be proved correct, not the other way around.
        Unvalidated, untested software is just that: unvalidated, untested software.
        Anything it says is no better than a wild guess.

      • “Personally, I’d be happy using the high-side estimate.”

        I used to have one customer service rep who always quoted a longer time than expected to get a shipment out, in order never to disappoint. I had another rep who would over-promise delivery estimates, trying to please the customer on the phone. I set a policy that we always give our customers our true best estimate. I had never realized before that this is not natural human behavior. There are always temptations to misuse power by misinformation (usually for the best of reasons).

      • I remember hearing that a couple of decades ago the airlines, plagued by late arrivals and missed connections, brought their on-time percentages way up – within a single year. How? By adding a buffer to every arrival time.
        Cheap trick? No. It was a good idea: it turned out that customers would much rather wait around an extra half an hour in an airport than miss a connection.

      • Mosher writes: “GCMs can predict. GCMs do predict. Whether or not you can use the prediction is a pragmatic decision.”

        Steve – sometimes you seem to be intentionally obtuse in your comments. Do GCMs make you better at determining whether any place on the planet will be better off or worse off as a result of AGW at some point in time?

        What modelled output from a GCM would lead you to conclude with reasonable confidence that any particular place on the planet will have a worse climate as a result of AGW in say 100 years?

      • “sometimes you seem to be intentionally obtuse in your comments”

        That’s because he is.

        Andrew

      • Look, I disagree with mosher on many things, but I don’t think that it could be said that he’s intentionally obtuse.

        :-)

      • Matthew R Marler

        Steven Mosher: Unless you have an alternative, you don’t get to play the game.

        Luckily, we have many alternative models. And we are blessed to live in a republic where everyone can vote, or correspond with elected officials, or otherwise “play the game.”

        Besides that, we have the cautionary tales from history of the best available models being (catastrophically) wrong, as was the case with the Tacoma Narrows Bridge, and the model-based projections of Malthus and the neo-Malthusians. The “precautionary principle” tells us that we should not trust the models before they have been shown to be accurate.

      • When each particular quill stand on end like quills upon the fretful porpentine.
        ============

      • “Well, that’s information. I might say that +3C is an upper boundary.”

        For all we know a circulation change could occur in 5 years and it could warm 10C….

        Once a simulation reaches 100% error (as a climate model will after a few months), any predictive value is nil. You’re just looking at discretization noise at that point – nonsensical fluctuations.

        Yeah, you can call it a prediction and use it, but I’m not sure I see the point, unless it’s to try and pull a fast one.

        The alternative is to simply admit you don’t know what’s going to happen, which is, in fact, the truth.

      • Steven Mosher, I do have an alternative. In my model, tomorrow is the same as today. It is more accurate than any of the GCMs. Go on, just test it. You will find that over time it shows less error than any GCM.
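
        That “tomorrow is the same as today” rule is the classic persistence baseline. A minimal sketch on synthetic autocorrelated data; the AR(1) coefficient and noise level are invented for illustration:

        ```python
        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "daily temperature anomaly" with strong day-to-day persistence.
        T = np.zeros(1000)
        for i in range(1, T.size):
            T[i] = 0.9 * T[i - 1] + rng.normal(0.0, 1.0)

        actual      = T[1:]
        persistence = T[:-1]                          # forecast: tomorrow == today
        climatology = np.full(actual.size, T.mean())  # forecast: tomorrow == long-run mean

        for name, forecast in [("persistence", persistence), ("climatology", climatology)]:
            rmse = float(np.sqrt(np.mean((forecast - actual) ** 2)))
            print(f"{name}: RMSE = {rmse:.3f}")
        # On highly autocorrelated series, persistence is a hard baseline to beat at
        # short leads -- which is the (rhetorical) point being made above.
        ```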

      • We certainly live in different worlds. You believe models are “data” and that even if they are wrong we can discuss and act on them. Try that attitude with a large-scale chemical process involving nitric acid, where being significantly off means shutting down a plant for days or weeks. I couldn’t afford an attitude quite as cavalier as yours. Why should we use models that are inherently inaccurate, predicting to the high side as you say, to rearrange the global economy? Why are these inaccurate projections treated as gold standards, with all kinds of government and “scientific” pronouncements treating them as fact? I believe we should demand more rigor.

      • Mosher says: “Unless you have an alternative, you don’t get to play the game. sorry.” In what world is that how things work? I can say for a fact that the proposed bullet train for California 1) will go way over budget, 2) will never be finished, and 3) if it were finished, would never have the ridership claimed, because there aren’t that many people traveling that route in the first place. Do I need a model? No. Do I need to build an alternate train? No. I just need to audit their plan and financials and see that they employ both unicorns and pixie dust to make it all work, and that the state does not now have, and will never have, enough $ to pull this off, etc.
        Just because you need to make a decision does NOT mean that you must use some lame-ass tool just because it is there, whether a GCM or a financial model or a stock market forecast (do you believe those?).

      • Joshua from War Games:

        A strange game. The only winning move is not to play. How about a nice game of chess?

      • Steven Mosher: “‘The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model.’ Wrong.”

        No, not wrong.

        Absolutely 100% correct.

      • Mosher, it is true they predict. The question is with what quality, and the answer is: very poor. The Pause is one diagnostic. Empirical sensitivity estimates are another. The problems are inherent. Read the several essays on this in Blowing Smoke. You can find them using the introduction’s road map. Read them, check the footnotes, then get back. With facts, not opinions.

      • Steve, the models project and never predict.
        You should perhaps read around the subject before you pontificate.

      • I thought Joshua was the scientist’s son and the computer was WOPR.

      • Craig nails it. Mosher, as often is the case, is simply full of it.

        Mosher, tell FDA the next time they come to audit your medical device development and manufacturing site that they can’t play unless they have a QMS of their own.

        Tell SEC the next time they come to audit your financial services firm they can’t play unless they have analysis models of their own.

        Your naiveté and/or arrogance knows no bounds.

        Putz.

      • Steven Mosher,

        You wrote –

        “GCMs can predict.
        GCMs do predict.
        Whether or not you can use the prediction is a pragmatic decision. . . .

        . . . Unless you have an alternative, you don’t get to play the game.

        sorry.”

        First, the GCM predictions are no more useful than no prediction at all, if results to date are taken into account.

        Second, I presume the game to which you refer is the game of transferring money by force from the taxpayers’ often meagre accounts, into one’s own. I accept that people such as yourself set the rules. The Warmists have played the game well.

        A possible problem with your admonition is that it depends on people, whom you dislike, wanting to play the game under conditions imposed by yourself. Your ersatz apology may be seen as patronising and condescending, and not sincerely meant. Of course, I may be wrong.

        It is a symptom of delusional psychosis that the sufferers believe, sincerely, that they are obliged to provide expert guidance and direction to people who don’t share their delusion.

        You might find that people don’t want to play your game, as they stand no chance of winning, or even breaking even. I wish you luck with recruiting new players. Unfortunately, the supply of gullible simpletons willing to fund the game organisers is diminishing – do you agree?

        Live well and prosper,

        Mike Flynn.

      • Proscribe or prescribe? Dolt.

      • Using a high-side estimate in private behavior is one thing. But when you are seeking public policy changes (and public funding), you run the risk of significant overspending (aka inefficient resource allocation) in your response, and of necessarily short-changing competing needs. If you base policy on high-side projections, the fear of “mushroom clouds” can always trump the opposition. Examples include the US defense budget and health care costs. Heaven help us if climate change policy follows that path.

      • “All GCMs are able to predict the climate.”

        So can astrologists, phrenologists, and tea leaf readers.

        And do we really need to deal with this hackneyed argument again?

        “Pragmatic decisions cannot be made in a vacuum.
        They are made in a space where alternatives are offered.

        Unless you have an alternative, you don’t get to play the game.
        sorry.”

        No. No one can model the Earth’s climate with the precision, accuracy and certainty claimed by those who would have the government seize control over the global energy economy.

        No one has to propose an alternative method of predicting that which cannot be predicted (accurately – obscurantists can be so boring in their consistency sometimes), in order to reject the enormous public policy proscriptions of those who have no ability to do so themselves.

      • Basil Newmerzhycky

        Mosher’s comment “All GCMs are able to predict the climate. every last one of them.” is basically correct. Some models running a little off (but still within the margins of error when you include the long-term trend, not just the short term of the past decade) is not a reason not to use the models.

        To suggest GCMs need improving is OK. To say we need to do away with them is as ludicrous as saying “Since the WRF model missed the last NYC snowstorm, we should do away with all models.” This “book-burning” is the antithesis of climate science. I would hope Dr Curry would come out and tell everyone this.

      • I am with bay-zil, on this one. All climate models can predict the climate. That’s why they call them climate models! Just like all econometric models can predict the economy. Try to follow this…that’s why they call them econometric models! Simple as that. Get it? Now ask me why we need so many freaking models to predict the same freaking thing and I will refer you back to bay-zil. He really smart.

      • We don’t need to burn the climate models. I’m not sure you can keep bits lit anyway. No matter.

        All we have to do is adjust the internal parameters and/or approximations so the mean of the models lies on top of the global temp record. It would be great if we could get the 14,000-foot temp mean to match the global sat record. But it would be an improvement if the models’ mean could simply match the land temp construction.

      • OMG! Jim2, that sounds like tuning parameters. They don’t do that, according to Mosher. Watch Mosher’s video:

        https://www.newton.ac.uk/seminar/20101207110011401

        After seeing that, I am convinced our money would be better spent hiring lingerie models to make climate predictions.

      • Basil Newmerzhycky

        “Now ask me why we need so many freaking models to predict the same freaking thing and I will refer you back to bay-zil. He really smart.”

        There is a very good reason why short term weather forecasting and longer term climate forecasting involves a number of models instead of one.

        No model is perfect… at best there are rough but good estimates of future conditions, and many models have built-in biases that occur randomly.

        But running several models in an ensemble and taking the ensemble mean has proven to be a superior way of pushing accuracy past the limits of any one model, by minimizing outlier runs. This interpretation is the same for modeling anything… weather, climate, economics, etc.

        For example, that’s why the best medium-range forecasters often use a combo of the ECMWF, GFS, and UKMET solutions for the best forecast. They know one of the models is great in the Arctic but weak in the tropics. Another is better in the tropics but weak in the mid-latitudes. One under-does air–sea interactions but really nails continental air masses. Impossible to build all of that into one model, whether it’s “weather” or “climate”.
        I’m sure there is even a model of how many Penthouse magazines Don burns through in one evening’s sitting. Hope that helps. Otherwise, if there is anything else I can help you with, just let me know. :)
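
        The ensemble-mean argument is easy to demonstrate, and to caveat. A minimal sketch with five hypothetical “models”, each equal to the truth plus an independent error; all numbers are invented for illustration:

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        truth = np.sin(np.linspace(0.0, 6.0, 100))   # the "observed" signal, illustrative

        # Five hypothetical "models": truth plus an independent random error and a
        # small random bias each.
        models = [truth + rng.normal(0.0, 0.5, truth.size) + rng.normal(0.0, 0.1)
                  for _ in range(5)]

        def rmse(forecast):
            return float(np.sqrt(np.mean((forecast - truth) ** 2)))

        print("single-model RMSEs:", [round(rmse(m), 2) for m in models])
        print("ensemble-mean RMSE:", round(rmse(np.mean(models, axis=0)), 2))
        # Averaging cancels only the INDEPENDENT part of the errors; it cannot cancel
        # errors the models share -- the standard caveat to the ensemble-mean argument.
        ```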

      • We know about the inaccuracy of medium range forecasts and the story about model ensembles, bay-zil. We can get that dogmatic drivel from jimmy dee ten times a day. You are superfluous.

      • In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

        http://www.ipcc.ch/ipccreports/tar/wg1/505.htm

        Here the IPCC in the TAR is talking perturbed physics models. I do wish people would catch up with 20th century climate science.

    • A fan of *MORE* discourse

      Mike Jonas proclaims [utterly wrongly]: “The probability of them [GCMs] being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model.”

      Please allow me to join Steven Mosher in noting that this widespread belief (among denialists) is utterly mistaken.

      A terrific example – one that fortunately is free of ideological entanglements – is the dynamics of the solar system, as simulated in general orbital models (GOMs).

      GOM dynamics is known to be chaotic, with a characteristic Lyapunov time of (roughly) 5–10 million years. Despite these chaotic features, GOM integrations spanning billions of years are commonplace and useful – provided that the integration scrupulously respects dynamical conservation of energy, angular momentum, and symplectic phase-space volume – in that the statistical properties of solar system dynamics are then modeled accurately.

      For mathematical details see Konstantin Batygin and Gregory Laughlin’s On the dynamical stability of the solar system (2008). Also good is Renu Malhotra, Matthew Holman, and Takashi Ito’s Chaos and stability of the solar system.
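
      A minimal sketch of why scrupulously respecting conserved quantities matters: on a harmonic oscillator, a non-symplectic Euler step lets the energy drift without bound, while a symplectic leapfrog step keeps the energy error bounded over arbitrarily long integrations. Step size and run length are arbitrary.

      ```python
      # Harmonic oscillator, unit mass and frequency: H = (p^2 + q^2) / 2.
      def euler(q, p, dt):       # non-symplectic: energy drifts without bound
          return q + dt * p, p - dt * q

      def leapfrog(q, p, dt):    # symplectic (velocity Verlet): bounded energy error
          p_half = p - 0.5 * dt * q
          q_new = q + dt * p_half
          return q_new, p_half - 0.5 * dt * q_new

      dt, n = 0.1, 10000
      for name, step in [("euler", euler), ("leapfrog", leapfrog)]:
          q, p = 1.0, 0.0
          for _ in range(n):
              q, p = step(q, p, dt)
          print(f"{name:8s} energy after {n * dt:.0f} time units: {(p * p + q * q) / 2:.3e} (exact: 0.5)")
      ```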

      https://www.youtube.com/watch?v=Ycs0wHku5Cw

      Best wishes for happy learning regarding general orbital models (GOMs) and general climate models (GCMs) are extended to all Climate Etc readers!


      • Orbital mechanics requires only a few degrees of freedom to solve; therefore it is more or less computable.
        Navier–Stokes is, of course, infinite-dimensional. Therefore it must be discretized, which immediately introduces error. Then the resulting millions of degrees of freedom must be integrated while trying to preserve proximity to the original equation – not computable.

        It’s like comparing a high-school 100 m with an Olympic 100 m.

      • And only a mildly chaotic system to boot (planets).
        Many chaotic systems show exactly this kind of behavior: long-term stability (and linear error growth) followed by regions of chaotic instability and exponential error growth.

        Statistics? What exactly is your claim?

        Pure drivel, sorry.

      • “Conclusion General circulation models (GCMs) are not presently, and never have been, generally regarded as providing the strongest scientific evidence for the accelerating reality and harmful consequences of anthropogenic global warming (AGW).”

        I mean, scroll down a bit, FOM. On the same thread you are both conceding that GCMs are not accurate, and then here you go claiming they are accurate.

        Logic fail.

        GCMs are accurate. Statistics of GCMs are accurate.

        Yeah, and God is the tooth fairy.

      • A fan of *MORE* discourse

        These physical principles are not complicated, “nickels”!

        Remember  local dynamics is chaotic; global thermodynamics is well-conditioned:

        https://www.youtube.com/watch?v=YgGik5q1JSA

        Again  local weather is chaotic; global climate is well-conditioned:

        https://www.youtube.com/watch?v=CCmTY0PKGDs

        Yet again  that’s why the sea-level is rising, the oceans are heating (and acidifying), and the polar ice is melting … all without pause or near-term limit … despite the chaotic dynamics of local weather.

        http://scienceblogs.com/gregladen/files/2014/09/HockeyStickOverview_html_6623cbd611-610×391.png

        Nickels, it is a pleasure to assist your mathematical and physical understanding!


      • So the claim is that even though we cannot compute local weather, if we do so everywhere and sum all the results, they will be correct?

        So, in other words, if we run a climate simulation and one month out the solution has 100% error (which it will), nevertheless by averaging we will have the correct climate.

        Well, no.

        Let me explain to you how mathematical modeling works.

        1) You have reality.
        2) you have your model.

        3) You run your model. It begins to diverge from reality. Before it completely diverges, some things (like global averages) might still have some accuracy, which can be demonstrated by a posteriori analysis.

        4) Your solution finally diverges completely.

        5) Your solution is worthless except for making pretty pictures and feeding the CAGW propaganda machine.

        There is no diverging completely from the local solution while still maintaining some sort of global accuracy.
        This is nonsense.

      • A fan of *MORE* discourse

        nickels deposes  “Let me explain to [Climate Etc] readers how mathematical modeling works.”

        Broad-range scientific researchers like Frenkel and Smit

        http://ecx.images-amazon.com/images/I/51v7oypRQuL.jpg

        and corporations like ANSYS

        https://www.youtube.com/watch?v=M-Z-wzyTs04

        alike will be very surprised to learn (from nickels) that their (prospering!) dynamical simulation enterprises cannot possibly generate predictions of value.


      • Well, if only climate models had the capabilities of a tool like ANSYS.
        ANSYS has adaptivity as well as some basic error estimators.
        And, more importantly, engineers know how to use it. They understand the limitations. It’s one thing to try to find the max stress on a part within a vat of fluid, quite another to integrate the entire world forward 100 years.

      • SkepticGoneWild

        FOMBS knows how to cut and paste. That’s it.

      • Basil Newmerzhycky

        “Fan of More Discourse” is absolutely correct in that short-term “weather models” used to forecast days 1–10 are very susceptible to “chaos”, whereas longer-term global climate operates in a closed system, with greatly minimized chaotic effects.

        Nickels, I suggest you familiarize yourself with how both numerical weather models and GCMs work. They are totally different, and a 7-day weather model prognosis is probably less challenging than a 7-year climate model run. You might want to look over some of those links that Discourse provided; they might help you understand modeling and chaos theory.

      • Models are not constrained at all and continue to diverge from close starting points.

        http://d29qn7q9z0j1p6.cloudfront.net/content/roypta/369/1956/4751/F2.medium.gif

        Climate states shift abruptly and more or less extremely every 20 or 30 years – and the physics of these internal climate shifts is utterly unpredictable.

    • Anyone who claims that an effectively infinite, open-ended, non-linear, feedback-driven chaotic system (where we don’t know all the feedbacks, and even for the ones we do know we are unsure of the signs of some critical ones) – hence subject to, inter alia, extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.

      Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

      You can add as much computing power as you like; the result is purely to produce the wrong answer faster.

      Adding “ensembles” of models together and hoping to get anything sensible is about as helpful as producing an ensemble of tea leaf readers or black cock entrail experts.

  12. Two premises are at odds. If it is assumed that model input complexity increases reliability, why then should feedbacks be adjusted to yield sensitivity ranges congruent with earlier, less sophisticated models?

    Obviously, it can be argued that increased input complexity will yield more reliable results. This raises the question of whether there is a universal downward bias in initial simulations, prior to adjustment for congruence with early model sensitivity ranges.

  13. I especially like the line: “Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’. “

  14. A thoroughly enjoyable read. I am sure many skeptics feel vindicated, since many of the author’s comments confirm their perceptions.

    I like this paragraph in particular:

    Model results that confirm earlier model results are perceived as more reliable than model results that deviate from earlier results. Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5°C and 4.5°C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.

    In other words, rock the boat at your peril.

    The doubts are going to have to come from within. No amount of exogenous criticism will do the trick. Institutional instability begins at the core and works out. Termites come to mind.

  15. Curious George

    I question the mathematical erudition of GCM modelers. They use a latitude–longitude grid, which is perfectly good at low and middle latitudes but woefully suboptimal in polar regions. There is a whole body of experience on grid optimization – the so-called “finite element method”.

    But even in an article titled “The Finite Element Sea Ice-Ocean Model (FESOM) v.1.4” we read “To avoid the singularity [on a triangle covering the North Pole] a spherical coordinate system with the north pole over Greenland (40◦ W, 75◦ N) is used.” Q.E.D.
    http://www.geosci-model-dev.net/7/663/2014/gmd-7-663-2014.pdf
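
    The polar problem is simple spherical geometry. A minimal sketch of the area of a 1°×1° latitude–longitude cell on the unit sphere, relative to an equatorial cell:

    ```python
    import numpy as np

    # Area of a 1x1-degree cell on the unit sphere whose southern edge sits at
    # latitude phi: A(phi) = d_lon * (sin(phi + d_lat) - sin(phi)), ~ cos(phi).
    d = np.deg2rad(1.0)

    def cell_area(lat_deg):
        phi = np.deg2rad(lat_deg)
        return d * (np.sin(phi + d) - np.sin(phi))

    for lat in [0, 45, 80, 89]:
        print(f"lat {lat:2d}: {cell_area(lat) / cell_area(0):.4f} of an equatorial cell")
    # Near the pole the zonal spacing collapses, forcing tiny time steps or polar
    # filtering in lat-lon dynamical cores -- hence rotated poles and unstructured
    # (finite element) grids like FESOM's.
    ```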

    • There is definitely some crazy stuff going on with a lot of these legacy climate models. Not only the pole issue, but various pressure coordinate systems, etc. Switching to spherical coordinates, dropping terms, etc., etc.
      The math is all sound, but a lot of these decisions and approaches are more historical than optimal.

      To me, the failing – and where finite elements have the (untapped) potential to help with climate modeling – is in the error estimates and convergence statements, as well as the simplicity of rolling the coordinate transforms into the method, as opposed to changing coordinates before discretizing.

      A model without some estimate of its error, and without the ability to do grid convergence studies, leaves much to be desired.

  16. Matthew R Marler

    Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.

    Held and Soden, in their paper entitled “Robust Responses of the Hydrological Cycle to Global Warming”, make that claim explicitly. With almost all of the model runs being too high for out-of-sample data since the models were run, it is just as reasonable to hypothesize that what the models have in common is a mistake. Or a common set of mistakes.

    More comprehensive models that incorporate more physics are considered more suitable for climate projections and climate change science than their simpler counterparts because they are thought to be better capable of dealing with the many feedbacks in the climate system.

    With finite data, or with a modeling system that is growing more complex at a faster rate than the data sets are growing, added model complexity can reduce model reliability significantly, unless the more complex models are significantly more accurate. That is due to instability (multicollinearity) and inaccuracy in the parameter estimates – and the inaccuracy and variability of the parameter estimates are generally greatly underestimated.
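
    A minimal sketch of that complexity-versus-data point, using polynomial fits as stand-ins for models of growing complexity; all values are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # A small, fixed data set; polynomial degree stands in for "model complexity".
    x_train = np.sort(rng.uniform(-1.0, 1.0, 15))
    y_train = np.sin(3.0 * x_train) + rng.normal(0.0, 0.2, x_train.size)
    x_test = np.linspace(-1.0, 1.0, 200)
    y_test = np.sin(3.0 * x_test)

    for degree in [2, 5, 14]:   # degree 14 interpolates all 15 points exactly
        coeffs = np.polyfit(x_train, y_train, degree)
        err = float(np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)))
        print(f"degree {degree:2d}: out-of-sample RMSE = {err:.3f}")
    # With the data held fixed, added complexity eventually HURTS out-of-sample
    # accuracy: the extra parameters fit noise and their estimates become unstable.
    ```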

    Personally, I don’t believe that a new “paradigm” is needed: there are models of many levels of complexity and “physical reality”. What I think is needed is much more work in collecting and analyzing the most appropriate data possible, and continued work within the existing “paradigm”, or “paradigms” (the word was a little vague at the start, and has morphed beyond any definition at all, imho.)

    I wish the author success in getting the main points published in a prestigious journal.

    • Matthew R Marler

      Ah. It is done in the European style, and some of the work has already been published in peer-reviewed journals.

  17. A fan of *MORE* discourse

    Climate Etc readers will be delighted to learn that Alexander Bakker’s thesis — which questions the scientific primacy of numerical general circulation models (GCMs) — has found strong support from James Hansen and a long list of collaborators, who represent strong climate-research groups from around the world!

    An Old Story, but Useful Lessons
    James Hansen, 26 September 2013

    Introduction  International discussions of human-made climate change (e.g., IPCC) rely heavily on global climate models, with less emphasis on inferences from the paleo record. A proper thing to say is that paleoclimate data and global modeling need to go hand in hand to develop best understanding — almost everyone will agree with that. […]

    There is a tendency in the literature to treat an ensemble of model runs as if its distribution function is a distribution function for the truth, i.e., for the real world.

    Wow. What a terrible misunderstanding.

    Today’s models have many assumptions and likely many flaws in common, so varying the parameters in them does not give a probability distribution for the real world, yet that is often implicitly assumed to be the case. […]

    Conclusion: It is not an exaggeration to suggest, based on best available scientific evidence, that burning all fossil fuels could result in the planet being not only ice-free but human-free.

    ——
    Q&A with James Hansen
    13 December, 2013

    Our paper [“Assessing Dangerous Climate Change: Required Reductions of Carbon Emissions to Protect Young People, Future Generations and Nature”] is based fundamentally on observations, on studies of earth’s energy imbalance and the paleoclimate rather than on climate models.

    Although I’ve spent decades working on [climate models], I think there probably will remain for a long time major uncertainties, because you just don’t know if you have all of the physics in there.

    Some of it, like about clouds and aerosols, is just so hard that you can’t have very firm confidence.

    So yes, while you could say most of these [messages] you can find one place or another, but we’ve put the whole story together. The idea was not that we were producing a really new finding but rather that we were making a persuasive case for the judge.

    Please note the long list of coauthors associated with these strong statements: James Hansen, Pushker Kharecha, Makiko Sato, Valerie Masson-Delmotte, Frank Ackerman, David J. Beerling, Paul J. Hearty, Ove Hoegh-Guldberg, Shi-Ling Hsu, Camille Parmesan, Johan Rockstrom, Eelco J. Rohling, Jeffrey Sachs, Pete Smith, Konrad Steffen, Lise Van Susteren, Karina von Schuckmann, James C. Zachos.

    And Naomi Oreskes too has long been a critic of over-reliance upon complex numerical models:

    Naomi “Merchants of Doubt” Oreskes
    Slams “Corrosive” Climate Change Skepticism

    Our 1994 paper in the journal Science [“Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences”] took a critical look at numerical simulation models.

    It’s never been my view that we should trust science uncritically. I’ve always been interested in the questions: How do we know when to trust science? How do we distinguish between healthy and corrosive skepticism? In short, how do we judge scientific claims?

    In climate science, the case for the reality of anthropogenic climate change does not rest solely (or even primarily) on climate models. If it did, I’d be a skeptic too.

    I still believe what I wrote in 1994: models are a tool for exploring and testing systems. Their primary value is heuristic. But together with other lines of evidence they can be part of a persuasive scientific case. Or not.

    Conclusion: General circulation models (GCMs) are not presently, and never have been, generally regarded as providing the strongest scientific evidence for the accelerating reality and harmful consequences of anthropogenic global warming (AGW).

    To assert otherwise is — as Bakker, Hansen, and Oreskes all emphasize — “a terrible misunderstanding.”

    Good on `yah Alexander Bakker … for joining James Hansen and Naomi Oreskes in publicly proclaiming this common-sense scientific reality!


    • AFOMD, let’s take the simple approach to becoming ardent fans of more discourse:

      Climate scientists have moved the goalposts for evaluating the predictions of the climate models by adopting a new approach to GCM verification. Rather than using the central tendency of the GMT mean, they are, for all practical purposes, now using the trend in peak hottest years which occur every four or five years as their basis for GCM verification: 1998, 2005, 2010, 2014 etc.

      Gavin Schmidt: “With the continued heating of the atmosphere and the surface of the ocean, 1998 is now being surpassed every four or five years, with 2014 being the first time that has happened in a year featuring no real El Niño pattern.” Gavin A. Schmidt, head of NASA’s Goddard Institute for Space Studies in Manhattan, said the next time a strong El Niño occurs, it is likely to blow away all temperature records.

      With that as background, please tell us in 100 words or less why — while CO2 is currently increasing at the higher RCP8.5 emission scenario — there is now a dramatic slowdown in global warming in progress. Tell us in simple, understandable words why there is not an AGW Credibility Gap developing, as is illustrated on this graph:

      http://i1301.photobucket.com/albums/ag108/Beta-Blocker/GMT/The-AGW-Credibility-Gap-01-c1020_zpsf0b0fabd.png

      • I assume you meant “now” in:

        “Tell us in simple, understandable words why there is not an AGW Credibility Gap developing, as is illustrated on this graph:”

      • rogerknights, since I am addressing my question to AFOMD, who apparently believes that the recent pause in global warming in the face of ever-rising CO2 concentrations doesn’t represent the kind of scientific issue which should cast doubt on the supposed mechanisms of global warming, or on the ability of the GCMs to project future global warming, I chose to phrase my question to him using the negative element ‘not’.

      • A fan of *MORE* discourse

        Betablocker, your concerns are addressed in the reply (above) to “nickels.”

        In a nutshell: local dynamics (namely “weather”) is chaotic; global thermodynamics (namely “climate”) is well-conditioned.

        Betablocker (and nickels too), it is a continuing pleasure to assist your understanding of the Bakker/Hansen/Oreskes thesis. Namely, that thermodynamics is primary, while numerical models are secondary, in regard to the scientific understanding of climate-change!


    • Well, good on ye Hansen and others for taking care with climate models.
      But as far as I can tell, the substitute is:

      A) Make an interesting but unsupported simplification (essentially assume a linear system – the greenhouse model):

      “We hypothesize that the global climate variations of the Cenozoic (figure 1) can be understood and analysed via slow temporal changes in Earth’s energy balance, which is a function of solar irradiance, atmospheric composition (specifically long-lived GHGs) and planetary surface albedo. ”

      B) Come up with a ‘climate sensitivity’ factor (using a simplified climate model). Again, assumptions of extreme linearity; there is no reason such a ‘factor’ need exist (a minimal numeric sketch follows at the end of this comment)….

      “Climate sensitivity (S) is the equilibrium global surface temperature change (ΔTeq) in response to a specified unit forcing after the planet has come back to energy balance,

      S = ΔTeq / F    (5.1)

      i.e. climate sensitivity is the eventual (equilibrium) global temperature change per unit forcing. Climate sensitivity depends upon climate feedbacks, the many physical processes that come into play as climate changes in response to a forcing. Positive (amplifying) feedbacks increase the climate response, whereas negative (diminishing) feedbacks reduce the response.”

      C) Forecast doom:

      “We conclude that the large climate change from burning all fossil fuels would threaten the biological health and survival of humanity, making policies that rely substantially on adaptation inadequate.”

      All they seem to have done in place of models is make an extreme simplifying assumption and then run their extremely simple model to forecast doom.

      All interesting work, but not any more conclusive.

      (from FOMD’s link)
      http://rsta.royalsocietypublishing.org/content/371/2001/20120294
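
      For what it’s worth, the ‘extremely simple model’ described above really does reduce to a few lines. A minimal sketch of the quoted definition, S = ΔTeq / F, with the feedback parameter treated as an assumed input (which is exactly the complaint: the linearity and the value of lambda are postulated, not derived):

        F_2XCO2 = 3.7  # W/m^2, canonical forcing for doubled CO2

        # Assumed feedback parameters, chosen to imply ECS of 1.5, 3.0 and 4.5 K:
        for lam in (F_2XCO2 / 1.5, F_2XCO2 / 3.0, F_2XCO2 / 4.5):
            dT_eq = F_2XCO2 / lam   # equilibrium warming: dT_eq = F / lambda
            S = dT_eq / F_2XCO2     # sensitivity per the quoted definition (5.1)
            print(f"lambda = {lam:.2f} W/m^2/K -> dT_eq = {dT_eq:.1f} K, S = {S:.2f} K per W/m^2")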

  18. Matthew R Marler

    Here is a humorous quote from the thesis: In the few decades’ history of climate change assessments, a lot of effort has been invested in the construction and in the procedural aspects. Remarkably enough, these assessments are rarely evaluated (Hulme and Dessai 2008b; Enserink et al. 2013). This seems a little strange as the sparse evaluations of climate assessments all conclude that there is a huge gap between the provided information on climate change and the user needs (e.g. Tang and Dessai 2012; Keller and Nicholas 2013; Enserink et al. 2013).

  19. Judith Curry

    “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

    My query: at what time have academic and government funding priorities NOT been driven by political correctness? From the earliest tome, Ptolemy’s Almagest, whose geocentric mathematical models survived into the Middle Ages, to current tuned GCMs that survive learned societies’ (Royal, American Academy of Science, etc.) scrutiny. I think Eisenhower’s farewell address was merely recapitulating what was obvious from the past, as most military leaders have been students of history.

    To me, Bakker’s most heretical statement is: “Then section 2.4 elaborates on the pitfalls of fully relying on physics.” Bingo!

    I know, Judith, that you have a new edition of your atmospheric physics textbook coming out. However, if we knew all the physics relevant to making climate predictions, and that physics were incorporated into the GCMs and nothing more, we would all be talking about something else, like when we are going to achieve warp speed.

    Time to trash GCMs, and start all over again; after all, we have learned a thing or two since: “The founding assessments of Charney et al. (1979) and Bolin et al. (1986) did see the great potential of future GCMs, but based their likely-range of ECS on expert judgment and simple mechanistic understanding of the climate system.”

    • From the earliest tome, Ptolemy’s Almagest, whose geocentric mathematical models survived into the Middle Ages, […]

      In fact it goes much farther back than that. Ptolemy lived in late Antiquity, at which time the original Academy founded by Plato had long since been perverted by subsidies and other favors from Hellenistic “kings” in Alexandria and Pergamon. Documents surviving from the “classical” age of Philosophy were cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor. Then they, and Hellenistic writings depending on them, were cherry-picked over again depending on how well they fit the interests of Roman conquerors, first late Republican, then associated with the Caesars/Imperium.

      Then they were all burned, the surviving documents being those popular enough to survive in country houses and private libraries of powerful Romans. Then they were cherry-picked again by early Christian “Philosophers” looking to rationalize their religious beliefs with natural science. Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

      • AK

        I see not only military leaders are students of history. Thank you.

        I enjoy learning more about the past in spite of the reconstructions performed to fit a current popular narrative. The more I read different perspectives, the more I see the similarities to the present.

      • > Documents surviving from the “classical” age of Philosophy were cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor.

        Citation needed.

      • > Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

        See for yourself:

        The Catechism of the Catholic Church states, “All forms of divination are to be rejected: recourse to Satan or demons, conjuring up the dead or other practices falsely supposed to ‘unveil’ the future. Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums all conceal a desire for power over time, history, and, in the last analysis, other human beings, as well as a wish to conciliate hidden powers. They contradict the honor, respect, and loving fear that we owe to God alone” (CCC 2116).

        http://www.catholic.com/tracts/astrology

      • Everybody knows we have already run out of space…

        http://www.independent.co.uk/news/science/eternal-life-could-be-achieved-by-procedure-to-lengthen-chromosomes-10020974.html

        yet scientists could be ready to practice on their view of eternity tomorrow.

      • Citation needed.

        The Vanished Library: A Wonder of the Ancient World, by Luciano Canfora, translated by Martin Ryle. Berkeley and Los Angeles: University of California Press, 1990. Pp. ix + 205; 6 figures. ISBN 0-520-07255-3 (pb). As with all such references I give, I’ve done considerable research beyond it, and am willing to vouch for it, somewhat. The paraphrases are mine.

        It’s a book highly worth reading, IMO.

        As for “astrology”, even Isaac Newton would have described himself as an “astrologer”. The division of the subject into “astrology”, which is fortune-telling, and “astronomy” came later:

        Since the 18th century they have come to be regarded as completely separate disciplines. Astronomy, the study of objects and phenomena originating beyond the Earth’s atmosphere, is a science[2][3][4] and is a widely-studied academic discipline. Astrology, which uses the apparent positions of celestial objects as the basis for the prediction of future events, is defined as a form of divination and is regarded by many as a pseudoscience having no scientific validity.[5][6][7]

        Just out of curiosity, do you really care about this, or are you just trying to waste my time?

        @RiHo08…

        You’re welcome. Always glad to pass along some of what I’ve learned in my misspent life, to anyone actually interested.

        AK

      • > As for “astrology”, even Isaac Newton would have described himself as an “astrologer”.

        Sure, and Galileo himself, the first Denizen, did some too:

        http://www.skyscript.co.uk/galast.html

        These factoids are far from establishing that “some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them,” whatever “some of those rationalizations” may be. Unless we have some real Doctrine on the table, all we have is handwaving. More so since this thesis might very well be undermined by yet another Wiki entry:

        Christianity has generally been in opposition to astrology, but its popularity led to it being woven into Christian beliefs at some periods and in some populations.

        http://en.wikipedia.org/wiki/Christian_views_on_astrology

        However hard William Lilly tried to sell his astrology as Christian, it never really was.

        ***

        Canfora is interesting, but it might not be the slam dunk AK’s “paraphrases” show:

        http://www.jstor.org/discover/10.2307/41592179

        The truth is out there even in ancient history, I guess.

      • These factoids are far from establishing that “some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them,” whatever “some of those rationalizations” may be.

        You’re missing my point. (Deliberately?) It’s generally accepted that the Ptolemaic model was considered “part of Church doctrine”, although I suppose some nit-picking could be done over whether anything beyond its basic geocentrism was considered “mandated by Scripture”. However, at that time the entire Ptolemaic model was considered “Astrology”, because the term “astronomy” was limited to identifying names of stars. That’s why I used the word I did. Primarily to remind readers that there have been many changes since then.
        http://upload.wikimedia.org/wikipedia/commons/9/9e/1550_SACROBOSCO_Tractatus_de_Sphaera_-_%2816%29_Ex_Libris_rare_-_Mario_Taddei.JPG
        For that matter, the dividing line between “divination” and predicting the positions of “Heavenly objects” based on mathematical models has also changed.

      • Hey, the flights of birds and the perambulations of guts are way more pertinent and perspicacious, particularly to the birds and the turds.
        ====================

      • > It’s generally accepted that the Ptolemaic model was considered “part of Church doctrine” […]

        This conflates Ptolemy’s astrology with the Ptolemaic model, AK.

        Here’s Ptolemy’s Tetrabiblos:

        http://www.astrologiamedieval.com/tabelas/Tetrabiblos.pdf

        Also note that to speak of the “Ptolemaic model” presumes it was his invention. It was not. Nor was it “his” astrology.

        You may not find any astrological elements in the Christian doctrines, AK. There’s an obvious reason for that. Think.

        ***

        Here’s how to substantiate a claim that “documents” get “cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor” in case you ever feel like it:

        The Harper government has dismantled one of the world’s top aquatic and fishery libraries as part of its agenda to reduce government as well as limit the role of environmental science in policy decision-making.

        http://thetyee.ca/News/2013/12/09/Dismantling-Fishery-Library/

        ***

        I don’t know who’s wasting whose time here.

      • This conflates Ptolemy’s astrology with the Ptolemaic model, AK.

        No, the Ptolemaic model was part of “Ptolemy’s astrology”.

        Also note that to speak of the “Ptolemaic model” presume it was his invention. It was not. Nor was it “his” astrology.

        Ptolemy was a librarian. We call it the “Ptolemaic model” because the usual source was summary documents he wrote. Including a good deal of what today would be considered plagiarism of previous authors. Just as with “Euclidean geometry”, which, AFAIK, is generally thought to be summaries of “current knowledge”.

        Here’s how to substantiate a claim that “documents” get “cherry-picked for preservation and publication depending […]

        Well, IIRC it’s not too different from some of the original material cited by Canfora. In both cases, highly biased sources, of which observers are free to make their own interpretations.

        The parallel with the burning of the Library in Alexandria (assuming it actually happened, which Canfora questions) seems valid to me. In each case, the cherry-picking took place beforehand, in terms of which documents were copied and made available elsewhere.

        Unless they were all scanned and made available online by Google? I don’t know, and don’t care enough to research it, although Κυδος to Google if they were!

        I don’t know who’s wasting whose time here.

        I do.

      • > The Ptolemaic model was part of “Ptolemy’s astrology”.

        The Ptolemaic model is not in the Tetrabiblos, but in the Almagest:

        http://en.wikipedia.org/wiki/Almagest

        Maths and astronomy in one book, astronomy in another one.

        Fancy that.

      • Arch:

        > astronomy in another one.

        astrology in another one.

        ***

        Also note that the picture above is of the Almagest.

        ***

        We have yet to see the “rationalizations” the Church adopted as Doctrine.

        More on the Galileo controversy:

        http://www.catholic.com/tracts/the-galileo-controversy

        ***

        All this from a guy who pretends to have read Kuhn.

      • Maths and astronomy in one book, astronomy [sic: astrology?] in another one.

        Nope. Maths and one kind of astrology in one book, another kind of astrology (divination) in another.

        Only later did they diverge:

        For a long time the funding from astrology supported some astronomical research, which was in turn used to make more accurate ephemerides for use in astrology. In Medieval Europe the word Astronomia was often used to encompass both disciplines as this included the study of astronomy and astrology jointly and without a real distinction; this was one of the original Seven Liberal Arts. Kings and other rulers generally employed court astrologers to aid them in the decision making in their kingdoms, thereby funding astronomical research. University medical students were taught astrology as it was generally used in medical practice. [my bold]

        Astronomy and astrology diverged over the course of the 17th through 19th centuries. Copernicus didn’t practice astrology (nor empirical astronomy; his work was theoretical[12]), but the most important astronomers before Isaac Newton were astrologers by profession – Tycho Brahe, Johannes Kepler, and Galileo Galilei. Newton most likely rejected astrology, however (as did his contemporary Christiaan Huygens),[13][14][15] and interest in astrology declined after his era, helped by the increasing popularity of a Cartesian, “mechanistic” cosmology in the Enlightenment.

      • We have yet to see the “rationalizations” the Church adopted as Doctrine.

        Well, perhaps you have trouble seeing over your own towering preconceptions. From your own link:

        Centuries earlier, Aristotle had refuted heliocentricity, and by Galileo’s time, nearly every major thinker subscribed to a geocentric view. Copernicus refrained from publishing his heliocentric theory for some time, not out of fear of censure from the Church, but out of fear of ridicule from his colleagues.

        Many people wrongly believe Galileo proved heliocentricity. He could not answer the strongest argument against it, which had been made nearly two thousand years earlier by Aristotle: If heliocentrism were true, then there would be observable parallax shifts in the stars’ positions as the earth moved in its orbit around the sun. However, given the technology of Galileo’s time, no such shifts in their positions could be observed.

        […]

        In 1614, Galileo felt compelled to answer the charge that this “new science” was contrary to certain Scripture passages. His opponents pointed to Bible passages with statements like, “And the sun stood still, and the moon stayed . . .” (Josh. 10:13). This is not an isolated occurrence. Psalms 93 and 104 and Ecclesiastes 1:5 also speak of celestial motion and terrestrial stability. A literalistic reading of these passages would have to be abandoned if the heliocentric theory were adopted. Yet this should not have posed a problem. As Augustine put it, “One does not read in the Gospel that the Lord said: ‘I will send you the Paraclete who will teach you about the course of the sun and moon.’ For he willed to make them Christians, not mathematicians.” Following Augustine’s example, Galileo urged caution in not interpreting these biblical statements too literally.

        Unfortunately, throughout Church history there have been those who insist on reading the Bible in a more literal sense than it was intended.

        Etc. Given the source, and its jesuitical efforts to pretend to a church open to scientific challenges to its “immutable truths”, I would say this constitutes good evidence that the geocentric astrology of Aristotle and Ptolemy was regarded as doctrine.

        Note that the linked sophistry from the Roman Church completely ignores the issues of the moons of Jupiter and the phases of Venus:

        From September 1610, Galileo observed that Venus exhibited a full set of phases similar to that of the Moon. The heliocentric model of the solar system developed by Nicolaus Copernicus predicted that all phases would be visible since the orbit of Venus around the Sun would cause its illuminated hemisphere to face the Earth when it was on the opposite side of the Sun and to face away from the Earth when it was on the Earth-side of the Sun. On the other hand, in Ptolemy’s geocentric model it was impossible for any of the planets’ orbits to intersect the spherical shell carrying the Sun. Traditionally the orbit of Venus was placed entirely on the near side of the Sun, where it could exhibit only crescent and new phases. It was, however, also possible to place it entirely on the far side of the Sun, where it could exhibit only gibbous and full phases. After Galileo’s telescopic observations of the crescent, gibbous and full phases of Venus, therefore, this Ptolemaic model became untenable.

      • > Maths and one kind of astrology in one book, another kind of astrology (divination) in another.

        Fascinating. Let’s return to the main claim:

        Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

        Was Ptolemy’s model the only “rationalization” adopted by the Roman church as “immutable” doctrine, or is there “some” more?

        Still no Doctrine.

        You know what a “Doctrine” means for the Church?

        Here’s a list:

        https://carm.org/basic-christian-doctrine

        Not much about the Ptolemaic model there.

        Galileo’s trial was not over the Ptolemaic model, by the way.

      • dont forget there were two suns

      • Was Ptolemy’s model the only “rationalization” adopted by the Roman church as “immutable” doctrine, or is there “some” more?

        Still no Doctrine.

        You know what a “Doctrine” means for the Church?

        doc·trine
        ˈdäktrən/
        noun
        noun: doctrine; plural noun: doctrines

        a belief or set of beliefs held and taught by a church, political party, or other group.
        “the doctrine of predestination”
        synonyms: creed, credo, dogma, belief, teaching, ideology; More
        tenet, maxim, canon, principle, precept
        “the doctrine of the Trinity”
        US
        a stated principle of government policy, mainly in foreign or military affairs.
        “the Monroe Doctrine”

        Creed?:

        Whosoever will be saved, before all things it is necessary that he hold the catholic faith. Which faith except every one do keep whole and undefiled; without doubt he shall perish everlastingly. And the catholic faith is this: That we worship one God in Trinity, and Trinity in Unity; Neither confounding the Persons; nor dividing the Essence. For there is one Person of the Father; another of the Son; and another of the Holy Ghost. But the Godhead of the Father, of the Son, and of the Holy Ghost, is all one; the Glory equal, the Majesty coeternal. Such as the Father is; such is the Son; and such is the Holy Ghost. The Father uncreated; the Son uncreated; and the Holy Ghost uncreated. The Father unlimited; the Son unlimited; and the Holy Ghost unlimited. The Father eternal; the Son eternal; and the Holy Ghost eternal. And yet they are not three eternals; but one eternal. As also there are not three uncreated; nor three infinites, but one uncreated; and one infinite. So likewise the Father is Almighty; the Son Almighty; and the Holy Ghost Almighty. And yet they are not three Almighties; but one Almighty. So the Father is God; the Son is God; and the Holy Ghost is God. And yet they are not three Gods; but one God. So likewise the Father is Lord; the Son Lord; and the Holy Ghost Lord. And yet not three Lords; but one Lord. For like as we are compelled by the Christian verity; to acknowledge every Person by himself to be God and Lord; So are we forbidden by the catholic religion; to say, There are three Gods, or three Lords. The Father is made of none; neither created, nor begotten. The Son is of the Father alone; not made, nor created; but begotten. The Holy Ghost is of the Father and of the Son; neither made, nor created, nor begotten; but proceeding. So there is one Father, not three Fathers; one Son, not three Sons; one Holy Ghost, not three Holy Ghosts. And in this Trinity none is before, or after another; none is greater, or less than another. But the whole three Persons are coeternal, and coequal. So that in all things, as aforesaid; the Unity in Trinity, and the Trinity in Unity, is to be worshipped. He therefore that will be saved, let him thus think of the Trinity.

        Furthermore it is necessary to everlasting salvation; that he also believe faithfully the Incarnation of our Lord Jesus Christ. For the right Faith is, that we believe and confess; that our Lord Jesus Christ, the Son of God, is God and Man; God, of the Essence of the Father; begotten before the worlds; and Man, of the Essence of his Mother, born in the world. Perfect God; and perfect Man, of a reasonable soul and human flesh subsisting. Equal to the Father, as touching his Godhead; and inferior to the Father as touching his Manhood. Who although he is God and Man; yet he is not two, but one Christ. One; not by conversion of the Godhead into flesh; but by assumption of the Manhood by God. One altogether; not by confusion of Essence; but by unity of Person. For as the reasonable soul and flesh is one man; so God and Man is one Christ; Who suffered for our salvation; descended into hell; rose again the third day from the dead. He ascended into heaven, he sitteth on the right hand of the God the Father Almighty, from whence he will come to judge the living and the dead. At whose coming all men will rise again with their bodies; And shall give account for their own works. And they that have done good shall go into life everlasting; and they that have done evil, into everlasting fire. This is the catholic faith; which except a man believe truly and firmly, he cannot be saved.

        How much of the above actually depends on the principles of Paul and the Synoptic Evangelists, and how much on Neoplatonist rationalizations?

        Certain central tenets of Neoplatonism served as a philosophical interim for the Christian theologian Augustine of Hippo on his journey from dualistic Manichaeism to Christianity. As a Manichee, Augustine had held that evil has substantial being and that God is made of matter; when he became a Neoplatonist, he changed his views on these things. As a Neoplatonist, and later a Christian, Augustine believed that evil is a privation of good and that God is not material. Perhaps more importantly, the emphasis on mystical contemplation as a means to directly encounter God or the One, found in the writings of Plotinus and Porphyry, deeply affected Augustine. He reports at least two mystical experiences in his Confessions which clearly follow the Neoplatonic model. According to his own account of his important discovery of ‘the books of the Platonists’ in Confessions Book 7, Augustine owes his conception of both God and the human soul as incorporeal substance to Neoplatonism.

        Many other Christians were influenced by Neoplatonism, especially in their identifying the Neoplatonic One, or God, with Yahweh. The most influential of these would be Origen, who potentially took classes from Ammonius Saccas (but this is not certain because there may have been a different philosopher, now called Origen the pagan, at the same time), and the late 5th century author known as Pseudo-Dionysius the Areopagite.

        Granted, Augustine may have later formally abandoned Neoplatonism, but his methods of thought remain grounded in it (IMO).

  20. This is interesting. He was using GCMs to do real calculations for the real world, but he didn’t have the option of adjusting the real world to fit the GCMs.

    I expect that in the near future GCMs will become more widely used for just such practical applications. This thesis was therefore only a matter of time – but better sooner than later.

    I applaud Judith for giving it the attention it deserves. There aren’t many venues where this can happen.

  21. From the post:

    “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results. Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5°C and 4.5°C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.”

    Looks like an example of the anchor bias.

    http://en.m.wikipedia.org/wiki/Anchoring

  22. Pielke Jr. makes a pretty good case that the world has already reached consensus on the topic of how “robust” GCMs are in the only way that really counts.
    “While people will no doubt continue to enjoy debating about and witnessing to climate policies, the fact is, at the meta-level, that debate is pretty much over. Climate policy has entered its middle aged years.”

    https://theclimatefix.wordpress.com/2015/02/02/future-trends-in-carbon-free-energy-consumption-in-the-us-europe-and-china/

  23. ‘Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5°C and 4.5°C seems to increase the perceived credibility of a model result.’ Huh – I think Nic Lewis would be very interested in seeing some of the “failed” models that produced ECS<2, that apparently never saw the light of day.

  24. The Netherlands, not to be confused with Holland…
    ‘Holland vs the Netherlands’ on YouTube – http://youtu.be/eE_IUPInEuc

    • nottawa rafter

      So confusing I am surprised they got this far to found their nice little town in West Michigan.

      Ahhh, the Dutch jokes I hear. About as good as the Yooper jokes. (U. P) . Thanks for the link.
      :)

      • My personal favorite (40 years in West MI) – How was electrical wire invented? Two Hollanders fighting over a penny.

  25. Svend Ferdinandsen

    I speculate on why models are tuned to balance TOA radiation. With all the changes in temperature it can never be in balance, not even over hundreds of years.

    • Curious George

      Unsure what balance you mean. Where most of us live, mornings are cold and then the day warms us. And winters are cold and summers are warm. An instantaneous balance? – clearly a fiction. A daily balance? Approximate at best. A yearly balance? Maybe. And remember that blackbody radiation depends on T**4 – not that I propose that treating the Earth as a black body is an acceptable model.
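
      The T**4 point is worth making concrete: the flux computed from an average temperature is always less than the average of the fluxes, so a planet with temperature contrasts cannot be treated as one blackbody at its mean temperature. A two-patch toy example (temperatures invented):

        SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W/m^2/K^4
        t_cold, t_hot = 250.0, 310.0     # hypothetical cold/warm patches, kelvin

        flux_of_mean = SIGMA * ((t_cold + t_hot) / 2.0) ** 4             # one-blackbody shortcut
        mean_of_flux = 0.5 * (SIGMA * t_cold ** 4 + SIGMA * t_hot ** 4)  # patch by patch
        print(f"flux at the mean temperature: {flux_of_mean:.1f} W/m^2")
        print(f"mean of the two fluxes:       {mean_of_flux:.1f} W/m^2")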

  26. Another natural bias is lack of comprehension of paleo time-spans, leading to a false assumption in the early 1970s, looking back forty years, that we might be slipping back into an ice age; then, after the warm 1980s, to dusting off Arrhenius’ greenhouse effect with a question mark; then, in the warmer 1990s, with an exclamation point. By 2001 the science was settled.

    But meanwhile looking at more and better paleo-reconstructions we get a better perspective of what Earth’s baseline is on her timescale.

    I think many more of the climate modeling community need to take note and solve paleo climate first. Knowing the mega-influences (causing +/- 3 C) may just go a long way toward getting a start on solving the minor ones (+/- 0.3 C).

  27. The key: “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results.”

    News flash — Newton’s law of gravity rejected by climate modellers for not confirming earlier models based on epicycles.

    The key above is an admission of scientific corruption of the first order.

  28. The latest study from the United Nations Intergovernmental Panel on Climate Change found that in the previous 15 years temperatures had risen 0.09 degrees Fahrenheit. The average of all models expected 0.8 degrees. So we’re seeing about 90% less temperature rise than expected… In other words, for at least the next two decades, solar and wind energy are simply expensive, feel-good measures that will have an imperceptible climate impact. ~Bjorn Lomborg

    • “..15 years temperatures had risen 0.09 degrees …”

      Data uncertainty? Year-year fluctuation; decade-decade; century-century ?

      • …it’s always good to question these things – e.g., it’s really only been a rise of 2/100ths, but close enough for government work.

  29. Perhaps it should be noted that The Netherlands is already built, to a considerable extent, on ground that is below present sea level. Relying on the world to mitigate is therefore not an option; they just have to adapt. I imagine that statistical and probabilistic models are also used and give results that make it less controversial to question the GCMs.

  30. A model is a great Gothic cathedral of complexity which has been built on foundations of assumption, extrapolation and simplification…then plugged with bias and guess.

    In short, not something for adults, which is why we’re in the present mess.

    Need adults, so badly.

  31. Hi Judy – I agree; KNMI has shown some leadership in the climate issue.

    Several years ago I participated in a review of KNMI [https://pielkeclimatesci.wordpress.com/2012/03/19/climate-research-assessment-and-recommendations-in-the-report-of-the-2004-2009-research-review-of-the-koninklijk-nederlands-meteorologisch-instituut/]

    Our findings included

    “The generation of climate scenarios for plausible future risk, should be significantly broadened in approach as the current approach assesses only a limited subset of possible future climate conditions.”

    Roger Sr.

  32. Group of physicists

    THESE ARE THE FAULTS IN THE CLIMATE MODELLING PARADIGM …

    Hansen, Trenberth et al made the huge mistake of thinking they could explain Earth’s surface temperature by treating the surface as a black body (which it is not because there are sensible heat transfers also involved) and then adding the flux from the colder atmosphere to that from the Sun and then deducting the non-radiative outward flux and finally using the net total of about 390W/m^2 in Stefan-Boltzmann calculations to get a temperature of 288K.

    Of course to get the right result they had to fiddle the back radiation figure up to 100% of the incident Solar radiation before it enters the atmosphere. Thus they devised an energy-creating atmosphere which delivered more thermal energy out of its base than entered at its top.

    To obtain their “33 degrees of warming” they effectively assumed that the main “greenhouse” gas, water vapor, warms the surface by 10 to 15 degrees for each 1% concentration in the atmosphere. Then they had to promulgate the myth (proven contrary to evidence) that deserts are colder than rain forests, though they did not enlarge on that and admit their conjecture meant at least 30 degrees colder where there is a 3% difference in water vapor.

    Then they worked out their 255K figure (ignoring the T^4 relationship) and said it was the temperature about 5Km above the surface. Perhaps it is, but they then used schoolboy “fissics” and assumed the surface temperature would be the same in the absence of their GH gases. In fact the surface would receive less solar radiation than the region 5Km further up.

    So they had to reinvent the Second Law of Thermodynamics incorporating two major errors into their version of that law. The first error was to disregard the effect of gravitational potential energy on entropy, and the second error was to disregard the fact that the law applies to each independent process. Their version of the Second Law could be used to “prove” that water could flow up a mountainside provided that it flowed further down on the other side.

    They need to think, like Newton, and realize that when an apple falls off a tree then entropy increases, just as the Second Law says it will. So too does entropy increase when a molecule “falls” between collisions unless, that is, the sum of molecular gravitational potential energy and kinetic energy remains constant and there is thus a gravitationally induced temperature gradient.

    To prove their paradigm they would need to construct a Ranque Hilsch vortex tube and somehow ensure that the huge centrifugal force did not cause a huge temperature gradient in the cross-section of the tube. Until those promulgating the hoax can do that they have contrary evidence staring them in the face.
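
    For reference, the two textbook figures being disputed in this sub-thread are reproducible in a few lines; a minimal sketch, assuming a solar constant of 1361 W/m^2, a Bond albedo of 0.3, and unit emissivity:

      SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/m^2/K^4
      S0, ALBEDO = 1361.0, 0.30  # assumed solar constant and Bond albedo

      # Effective emission temperature: absorbed sunlight spread over the sphere.
      t_eff = (S0 * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25
      # Temperature of a blackbody emitting the oft-quoted ~390 W/m^2 surface flux.
      t_390 = (390.0 / SIGMA) ** 0.25
      print(f"effective emission temperature    ~ {t_eff:.0f} K")  # ~255 K
      print(f"blackbody equivalent of 390 W/m^2 ~ {t_390:.0f} K")  # ~288 K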

    • You make a lot of good points, especially gravity’s role. The GHE assumes that GHGs are transparent to incoming radiation, but of course there is a lot of IR in sunlight, and that is heating the CO2 in the upper atmosphere on the way in. This warmer CO2 that did not used to be there is emitting IR at the same rate it is absorbing, in both directions, day and night. This warmer outer layer is somewhat immune to convection, stratifying instead because of its lower density. It sits like an electric blanket on top of a bedspread, losing a lot of energy to the cold night (even cloudy ones), especially if CO2 density is at saturation of opacity above cloud height. A lot of the incoming heat that used to penetrate much further is now intercepted in day and emitted at night. Is this part of the models?

    • Does a warmer upper atmosphere make clouds condense at a higher average height? If so, again their reflective influence in day is increased, as less atmosphere is being heated above and more shaded below. At night, heat would be less trapped near the surface, and again the clouds would have their warmth from condensation less insulated from emission outward.

    • Matthew R Marler

      Group of Physicists: Their version of the Second Law could be used to “prove” that water could flow up a mountainside provided that it flowed further down on the other side.

      It would have to be in enclosed watertight pipes, as with siphoning. Or is there a maximum height (say 14 feet of elevation) over which that would work?

      The Earth receives a steady stream of energy from the sun, some of which gets stored in tight chemical bonds in cellulose, sugar and bones. What are the implications of this steady stream of incoming energy for second law arguments in the atmosphere?

      Until those promulgating the hoax can do that they have contrary evidence staring them in the face.

      Could you avoid the word “hoax”, which, like “lie”, implies that they know that what they are promulgating is wrong?

    Group of physicists

      So is it appropriate to call the claim that water vapor and carbon dioxide warm the surface to be hoax, a lie and a fraud? I say it is because climatologists must have questioned the role of water vapor and realised that the sensitivity cannot possibly be 10 to 15 degrees of warming for each 1% in the atmosphere. Climatologists have tried to kid the world that they know more about thermodynamics than do physicists. They have ignored how physicists define black bodies and then treated the Earth’s surface, the thin transparent surface layer of the oceans and even layers in the atmosphere as if they are black bodies. They have ignored what Loschmidt said about the autonomous formation of a temperature gradient resulting from the force of gravity acting on molecules in flight between collisions. In fact they have published papers (like Verkley et al) which use equations pertaining to thermodynamic potentials which are specifically derived by ignoring gravitational potential energy. So they “prove” using wrong assumptions that the assumptions are true, namely that there would be isothermal conditions in the absence of greenhouse gases. That state is not what the Second Law of Thermodynamics indicates will evolve. Instead gravity does set up a temperature gradient and the resulting sensible heat transfers into the surface explain the observed surface temperatures on planets like Earth and Venus.

  33. Alexander Bakker astutely asked:

    “Are there credible methods for the quantitative estimation of climate response at all?”

    Mosher seems to have all the answers, maybe he can take this one on.

  34. irritated engineer

    This excerpt came from a project I am working on. The computer model, eQuest/DOE2, is used for analyzing existing building energy usage:

    “Some calibration of data was used to establish the baseline for energy modeling purposes. You can see that after calibration, it is extremely close to the most recent consumption data. (Annual is close but monthly is off by as much as 100%.)

    Internal loads (lighting, equipment, and infiltration) were adjusted along with (occupancy) schedules as tuning variables to achieve a match with gas and electric use history.”

    Now the kicker:
    “The baseline model matches actual use quite well on an annual basis, but varies significantly on a monthly basis. This is primarily judged to be driven by the unique event-driven nature of the building. Activity and event schedules as input into the model were simplified, and did not align with the specific event schedules that occurred during the calibration period.

    Additional information was requested about operating hours of the heating and cooling plant and the ice making chillers, to allow greater resolution and analysis, but that information is either not available or was not made available.

    Nevertheless, significant effort was invested in matching baseline model input to the best definition we could derive as to how the facility is constructed and operated, and the model is deemed to be a reliable tool for predicting average energy savings associated with the measures analyzed.”

    A potential project worth between $5 million and $30 million is being decided partly on this model output. The building leaks air like a sieve due to age, failed window gasketing, wall caulking, and failed HVAC dampers being open, plus general neglect. This could account for as much as 30% of recent energy usage.

    The modeler has a paragraph in the report that states they have no liability associated with work results.
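
    The “annual close, monthly off by 100%” pattern is exactly what compensating-error tuning produces; a toy sketch with invented monthly consumption figures (kWh):

      actual = [120, 110, 100, 80, 60, 50, 50, 55, 70, 90, 105, 115]
      # A "calibrated" model that alternates +50% / -50% month by month:
      modeled = [use * (1.5 if month % 2 == 0 else 0.5) for month, use in enumerate(actual)]

      annual_error = abs(sum(modeled) - sum(actual)) / sum(actual)
      worst_month = max(abs(m - a) / a for m, a in zip(modeled, actual))
      print(f"annual error: {100 * annual_error:.1f}%")  # looks well calibrated
      print(f"worst month : {100 * worst_month:.0f}%")   # yet every month is badly wrong

    An annual match alone says almost nothing about whether the model has the building’s physics right.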

    • Who is the idiot who agreed to pay for that model?

    • I should have asked (to stay out of moderation):

      Who agreed to pay for such a model? Does it perform per what was agreed upon?

      • irritated engineer

        The owner is paying for it; they believe it is the right method, since everyone in the industry says the same thing. I can’t fault them or much of the industry, since no one has ever challenged the issue before. They (owners) always wonder afterwards why the promised realization rate rarely gets met.
        The model is “calibrated, or tuned” to a false energy-use baseline, so how can the results be usable?

        Why not fix the problems and let the building run for 18 months to get a new realistic baseline before looking at modeling?

      • As a model it is poorly representing what is happening in the building. The specification defining how well the model is to perform seems to have been poorly written.

      • irritated engineer

        My initial recommendation was to not even spend the money to run the model since its output would be invalid. Unfortunately the building industry has become addicted to this approach to looking at existing buildings. Many energy codes now require it.
        No one talks about the problems associated with it.
        I had one case where the modeler increased the chiller (heat recovery) efficiency from 0.9KW/ton to 1.6KW/ton. They also increased operating hours from 3,120/yr to 4,300/yr. Took 3 weeks to dig this info out of the modeler. The property management kicked them out once all this was presented.

      • Cap’n’s got a ride for you. Let me bait the hook.
        ============

      • Have you ever talked to the people who prepare environmental impact reports? I’ve met some very disillusioned folks over the years.

      • irritated engineer, “Why not fix the problems and let the building run for 18 months to get a new realistic baseline before looking at modeling?”

        That is just way too simple; it will never fly. That’s why I prefer fishing :)

        btw, just about every building complex has/had pretty comprehensive TAB reports that detailed “as installed” versus “as designed” performance. The TAB company can survey the system in a fraction of the time and cost while “fixing”, as in re-balancing, the systems by finding out which dampers are failed, belts are slipping, RPMs are off, fans are rotating backwards, pump flows that are off, etc. Then with that information engineers might be less irritated.

    • Dear IE, why don’t the computer controlled elevators lurk at the ground floor entrance of our brand new building in the morning?

      • irritated engineer

        They are optimized not to lurk. You input the floor you want on a local screen, correct? The system determines which cab is closest to you, as well as others who may be on adjacent floors. It then scopes out a path of least distance to get everyone to their floor (a toy sketch of that dispatch rule follows below).
        Having idle cabs rest at their last stop is most energy efficient.

        Big question – Where are you located regionally, and do you have a lot of infiltration into the lobby in the morning through the doors? Airflow into the elevator shafts? This is a preliminary symptom that you have envelope leaks. The problem is not the incoming cold air but the outgoing (somewhere higher in the bldg) warm air you already paid for. The other prime symptom for this problem is sign on the exit man-doors telling you to use the vestibules or revolving doors to exit the bldg.
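
        For what it’s worth, the nearest-cab rule described above fits in a few lines; a toy sketch only (real destination-dispatch systems also batch riders and minimize total travel):

          def assign_cab(cab_floors, requested_floor):
              """Pick the cab currently closest to the request (ties -> lowest id)."""
              return min(cab_floors,
                         key=lambda cab: (abs(cab_floors[cab] - requested_floor), cab))

          cabs = {"A": 12, "B": 1, "C": 7}   # idle cabs rest at their last stop
          print(assign_cab(cabs, 2))         # -> B, nearest to a low-floor request
          print(assign_cab(cabs, 9))         # -> C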

      • blueice2hotsea

        Having idle cabs rest at their last stop is most energy efficient.

        DocMartyn’s point is more about improving transportation efficiency (and perhaps annoyance at waiting for an elevator at ground floor when late for work :). Improved transportation efficiency could potentially allow fewer elevator shafts and thus improved energy efficiency. No?

      • blueice2hotsea

        Duh. # of shafts would be sized for peak traffic. So ‘no’, accommodating morning stragglers does not improve energy efficiency.

    • Somebody forgot to set the clock?

    • irritated engineer

      OT but a response anyway:
      CaptDallas says:
      “btw, just about every building complex has/had pretty comprehensive TAB reports that detailed “as installed” versus “as designed” performance. The TAB company can survey the system in a fraction of the time and cost while “fixing”, as in re-balancing, the systems by finding out which dampers are failed, belts are slipping, RPMs are off, fans are rotating backwards, pump flows that are off, etc. Then with that information engineers might be less irritated.”

      This view is typical of the industry problem: BIAS, BLINDERS, and BLUNDERS.

      Companies only look for issues they can sell their specific services for, they are not capable of seeing the building as a whole – WHOLISM.

      What would happen to TAB/CX/engineers/contractors/modelers if every building held off any work for 18 months after fixing envelope and associated issues? Most would fold.

      “Fraction of the time and cost” – as compared to what?

      TAB, and CX, is sometimes complicit in the original problem. I’ve seen reports that intentionally hide engineering/construction problems to prevent liability issues from surfacing. Companies who expose such issues tend to have short lifespans in the community they serve.
      Who says the original TAB/CX report, or latest version, is valid for the current tenants? Who says fan speed or pump flow is wrong?
      Who usually makes changes to the systems? The operators, and why? Typically because of tenant complaints. Making changes back without acknowledging this will just result in future tenant complaints, which will force the operator to make the system change again back to where it was before the TAB/CX contractor came in and spent the owner’s money.
      TAB/CX companies are not design engineers and typically do not have liability insurance to make system design changes.

  35. Reblogged this on JunkScience.com.

  36. The Sun is the 13th Floor of Western Climate Science

  37. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. http://www.nature.com/ngeo/journal/v5/n4/full/ngeo1430.html

    https://scienceofdoom.files.wordpress.com/2014/12/rowlands-2012-fig1.png

    These 1000’s of ‘solutions’ of the same model are constrained to those that reproduce recent temperature change. But two questions remain.

    1. Which solution do you choose to represent the model in the grand opportunistic IPCC ensemble? 1.4 or 3 K by 2050 – or something arbitrarily mid-range?

    2. Given that “incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and the physical processes underlying them lead to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century”, and these uncertainties must include large changes in energy dynamics from changes in albedo and water vapour, how do we know that any of the solutions is a realistic projection?

    There are 2 critical issues with models. Divergence of solutions from arbitrarily close initial starting points – sensitive dependence – and the implausibility of realistic representation of physical processes and couplings – structural instability. Even minor changes in processes and couplings lead to unpredictable divergence of solutions. The first results in irreducible imprecision – the second undermines the credibility of the entire exercise.

    Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.
    http://www.pnas.org/content/104/21/8709.full
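
    Sensitive dependence is trivial to exhibit; a minimal sketch integrating the Lorenz (1963) system with forward Euler (standard parameters, two starts 1e-9 apart):

      def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = state
          return (x + dt * sigma * (y - x),
                  y + dt * (x * (rho - z) - y),
                  z + dt * (x * y - beta * z))

      a = (1.0, 1.0, 1.0)
      b = (1.0 + 1e-9, 1.0, 1.0)        # perturbed by one part in a billion
      for step in range(40001):
          if step % 10000 == 0:
              sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
              print(f"t = {step * 0.001:5.1f}   separation = {sep:.3e}")
          a, b = lorenz_step(a), lorenz_step(b)

    The separation grows roughly exponentially until it saturates at the size of the attractor; beyond that horizon the two ‘forecasts’ are unrelated.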

    • # Divergence problem.# Attribution problem
      # Uncertainty problem. # Waning public
      confidence problem.
      Must be perturbing for them. Lewandowsky might
      need ter do a new, er, study on cli-sci hang-ups
      soon. Someone give him some more funding.

    • I would like to see this model’s results graphed out 10,000 more years. Has CO2 saved us from the inevitable next ice age (due any century now) or not?

      • “I would like to see this model’s results graphed out 10,000 more years.”

        just color it all in.;-))

  38. Ah, here we go again.

    Anyone who claims that an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman – possibly both.

    Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

    You can add as much computing power as you like; the result is purely to produce the wrong answer faster.

    As to “equilibrium climate sensitivity”, that is meaningless: a system such as the Earth’s climate, which is in a continuous state of perturbation from an unknown number of influences and incorporates numerous feedbacks involving those perturbing influences, can never reach equilibrium.

    • “As to “equilibrium climate sensitivity”, that is meaningless: a system such as the Earth’s climate, which is in a continuous state of perturbation from an unknown number of influences and incorporates numerous feedbacks involving those perturbing influences, can never reach equilibrium.”

      Agreed. There is the concept of a linearized model and a climate sensitivity of that linearization. And, of course, every different state of the climate has its own linearization. So there is in fact a whole continuum of climate sensitivities….

      But I guess stating the sensitivity in that way would bring things too close to math, which has the tendency to constrain one’s conclusions, so, jettison that!
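
      A minimal sketch of that point, assuming nothing more than a toy zero-dimensional energy balance (my own illustration, not anyone’s published model): the linearized sensitivity dT/dF is just the local derivative at a base state, and every base state has its own.

        import numpy as np

        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def equilibrium_T(F, eps=0.61):
            # toy balance: eps * SIGMA * T**4 = F at equilibrium
            return (F / (eps * SIGMA)) ** 0.25

        def local_sensitivity(F, dF=0.01, eps=0.61):
            # linearize about the base state F by central difference
            return (equilibrium_T(F + dF, eps) -
                    equilibrium_T(F - dF, eps)) / (2.0 * dF)

        for F in (230.0, 240.0, 250.0):  # different base states, W m^-2
            print(F, equilibrium_T(F), local_sensitivity(F))
        # each base state yields a different dT/dF – a continuum of
        # 'climate sensitivities', one per linearization point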

  39. High on the priority list would be educating people about what can ever be expected from GCMs. They are excellent at uncovering dynamics and displaying long-term trends, and actually have far fewer systematic errors than many “skeptics” would posit:

    http://phys.org/news/2015-02-global-slowdown-systematic-errors-climate.html

    • Gates
      Can you tell me where, based on the output of a GCM, the climate will most likely be less favorable in 20 years as a result of AGW? What climatic conditions will be worse for humans, and by how much, in the location you select? Where will the climate improve???

      How in your opinion are these models useful for making government policy in any particular country???

      • Rob,

        Sure thing, as there are a few real obvious ones:

        1) Investing in beach front property in many areas of the world is increasingly a bad proposition, and it will get harder and harder to insure.
        2) I would not invest in any activity which requires a regular sustained water supply in many areas. (CA and SW USA, water is going to be a big issue– even bigger than it has been up to now)
        3) Don’t build in areas requiring solid permafrost (i.e. the sub-Arctic and Arctic) for foundational support. Very bad choice.
        4) If you live in New Orleans, you might want to plan to migrate elsewhere. It is generally going to be a lost cause – many coastal areas will be – but that city is both sinking and facing a rising sea.
        5) Alaska & Siberia – ouch. Forest fires will be generally on the increase.
        6) Australia – Hot & Hotter. Sorry Aussies, the decades and century ahead look most unpleasant.

        The GCMs are exceptionally useful for dictating policy. We have a general warming trend, general intensification of the hydrological cycle as accompanies all rising GH gas climates, so harden your systems to expect these things.

      • Group of physicists

        What R. Gates ought to invest in is a course in physics. There is no valid physics which can explain any warming of Earth’s surface by water vapor or carbon dioxide. The surface temperature is what it is because the force of gravity induces a temperature gradient that is the state of thermodynamic equilibrium which the Second Law of Thermodynamics states will evolve autonomously as maximum entropy is approached.

      • nottawa rafter

        Gates
        Given Rob’s 20 year test and using the CU sea level rise of 3.2mm/yr, I would be very comfortable investing in beach front property. The IPCC projections made in 1990 overshot the 2015 level by 100%. There is still no acceleration in the rate of rise in the CU data.

        In lieu of beach front property as an investment, you may want to consider Danish bonds with a negative yield.

        Sounds peachy.

      • “There is no valid physics which can explain any warming of Earth’s surface by water vapor or carbon dioxide.”
        ______
        I realize now why I’ve been staying away from CE. Thank you for the reminder!

      • Gates
        What you have written is not based on the output of GCMs. You are merely writing your concerns about AGW.
        Are you claiming that the output of a GCM has led you to believe that the rate of sea level rise will be increasing beyond the current observed rate? Which one are you referencing?
        “ I would not invest in any activity which requires a regular sustained water supply in many areas. (CA and SW USA, water is going to be a big issue– even bigger than it has been up to now)”
        You have written more general bologna. You seem to be taking any generally unfavorable weather condition and claiming it will get worse faster due to AGW.
        Gates writes- “The GCMs are exceptionally useful for dictating policy”
        WRONG – government policy involves knowing reasonably accurately what is likely to occur over the next few decades. For a GCM to be useful it would need to be able to reasonably accurately forecast which regions within nations will get substantially more vs. less rainfall. They are not reliable for this purpose, are they?

        Gates writes- “general intensification of the hydrological cycle as accompanies all rising GH gas climates”
        You make many unsupportable claims.

      • “Gates writes- “general intensification of the hydrological cycle as accompanies all rising GH gas climates”
        You make many unsupportable claims.”
        _____
        Suggest you do a bit more research on the subject of natural feedbacks to rising GH gases before making this ignorant claim. The rock-carbon cycle and intensification of the hydrological cycle are natural ways for the sequestration of carbon to occur. Problem is, the Human Carbon Volcano has vastly overwhelmed this natural negative feedback, as each is working on completely different time scales. The net result is that, without significant downscaling of the rate at which humans are transferring carbon to the atmosphere, we’ll have to commit to serious sequestration of carbon ourselves.

      • Gates

        I have read the theory. I have also not read anything reliable that shows that extreme weather events are actually increasing. Some (alarmists) try to use the value of property damages over time, but that doesn’t take inflation into account. What observations are you referencing to confirm the theory???

      • Group of physicists

        R Gates: If you wish to debate us regarding the physics here feel free to do so, but note the ‘Evidence’ page which supports what we say.

    • Matthew R Marler

      R. Gates: The rock-carbon cycle and intensification of the hydrological cycle are natural ways for the sequestration of carbon to occur.

      How much additional energy (or power, if you prefer) is consumed by the intensification of the hydrological cycle of which you just wrote?

    • Matthew R Marler

      R. Gates: They are excellent at uncovering dynamics and displaying long-term trends, and actually have far less systematic errors than many “skeptics” would posit:

      Where is the evidence that the display of long-term trends by GCMs has been “excellent”?

    • @gates
      But you do agree that climate models diverge from reality pretty fast, like in a month or so, yes?
      I mean, otherwise we could predict the weather long term.

      So, how is it that models which diverge completely from the solution still provide useful information?

  40. It is now time for skeptics to relax and watch the TRAGIC-COMEDY of would-be world tyrants and puppet scientists unfold at the limit of human comprehension:

    World leaders tried to save the world and themselves from nuclear annihilation in 1945 by:

    1. Forming the UN to take totalitarian control of society, and

    2. Changing solar and nuclear physics to hide Neutron Repulsion in cores of atoms, planets, stars and galaxies heavier than 150 atomic mass units (where nuclear structure changes [1] to neutrons in the core and neutron-proton pairs at the nuclear surface).

    At the limits of comprehension, at the intersection of spiritual and scientific knowledge, an “intelligent and creative Mind” (Max Planck) guides force fields from the Sun’s pulsar core to create and sustain every atom, life and world in the Solar System . . .

    a volume of space greater than that of ten billion billion Earths!

    I.e., world leaders & puppet scientists tried to hide a force of creation that is incomprehensibly more powerful than anything they could have imagined.

    Whether or not we succeed, world leaders will certainly fail to control God’s force of creation.

    1. See page 3, “Solar energy,” Adv. Astronomy (submitted for on-line review, 6 JAN 2015): https://dl.dropboxusercontent.com/u/10640850/Solar_Energy_For_Review.pdf

  41. I read Bakker’s dissertation with particular attention to the justifications for the conclusion that the ‘climate modelling paradigm’ is in ‘crisis’.

    Question for Dr. Curry and denizens:

    What are the original aspects of this work that qualify it as a dissertation?

    With due respect for his opinions based on his own experiences, his assertions of model biases, implicit tuning, and other possible shortcomings of GCMs rely entirely on the work (and often opinions and even speculations) of others. I see no originally produced evidence to support the critique that constitutes most of Part I. Thorough discussions of most of the points he raises already exist in numerous papers and workshop discussions, with none concluding (AFAIK) that climate modeling is in ‘crisis’.

    Aside from the issue of original contribution, some of his remarks call into question his understanding of how models are constructed and their various uses. His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence it what he has to say about their shortcomings. His comment on circular reasoning (pp 21-22) makes no sense to me. He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system. (He regards the point as “trivial”; I find it absurd.) Perhaps I misunderstand.

    Bakker has some very useful things to say about the scientist/user interface, and some sound advice (if only superficially presented) on alternative tools for input to policy. These, and the papers presented in Part II would constitute by themselves a good dissertation, IMO. Part I is a provocative essay of dubious value and inferior scholarship.

    • ” his remarks call into question his understanding of how models are constructed and their various uses.”

      What uses? Spit it out.

      • R Graf – What uses? Spit it out.

        Visit Isaac Held’s blog for many examples of how GCMs and other climate models are used for diagnosis.

        If you’re interested in understanding the uses and limitations of regional climate models, here’s a good place to start; follow the links.

        See Dr. Curry’s previous post.

        See R. Gates comment at February 2, 2015 at 5:55 pm

        There’s much more; you can find it.

    • I await with interest, some amusement, and not much positive expectation, the sight of some “denizens” giving substantive responses to Pat’s questions.

    • He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system.

      The problem is: is the system enumerable?

      If it is, then it is insolvable, hence trivial.

    • Matthew R Marler

      Pat Cassen: What are the original aspects of this work that qualify it as a dissertation?

      It is in the European style of summarizing work that the author has already published in the peer-reviewed literature.

      These, and the papers presented in Part II would constitute by themselves a good dissertation, IMO.

      That answers your first question, at least from your point of view. Occasionally, what some readers might regard as dross has been included at the insistence of one of the committee members.

      • > It is in the European style of summarizing work that the author has already published in the peer-reviewed literature.

        So the original content of the thesis is not in the thesis, Matt?

        Even po-mos don’t dare do that!

      • Matthew R Marler

        Willard: So the original content of the thesis is not in the thesis, Matt?

        That is true. In the US the standard for a PhD thesis is that it be original work that is “publishable”, though many, if not most, of them are not in fact published. In Europe, the criterion “publishable” is established by having a series of works actually published, and then the series is written up in a document that cites the publications. In the US, it frequently happens as well that the final thesis may be a culmination of previously published papers, but usually the thesis is approved (or not) before the submitted works have in fact appeared in print.

      • > In Europe, the criterion “publishable” is established by having a series of works actually published, and then the series is written up in a document that cites the publications.

        Thanks, MattStat. I’ll check when I’ll get the Round Tuit.

    • John Vonderlin

      Hi Pat,
      “His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence it what he has to say about their shortcomings.” I’d suggested some proofreading for anybody asserting somebody else’s thinking is muddled. Typos happen, but this is incomprehensible.

      • How about:

        > His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence regarding what he has to say about their shortcomings.

        JohnV?

        Note that Pat doesn’t claim there are typos in the thesis.

      • Pople poored bog comets?

      • UAH out for January: +0.35C. Oh burr, kaicetrophe is right around the korner.

      • Steven Mosher

        Nit pickers lose points by being less than perfect.
        glass houses and all.

        Of course willard offers charity to Pat.
        No charity for the author.

        he’s a stingy prick that willard.
        zero honor.

      • > No charity for the author.

        Which author?

        A quote might be nice.

    • Matthew – That answers your first question…

      Yes.

      what some readers might regard as dross has been included at the insistence of one of the committee members.

      That does not seem to be the case here.

      John Vonderlin – I’d suggested some proofreading for anybody asserting somebody else’s thinking is muddled. Typos happen, but this is incomprehensible.

      Point taken. “…does not inspire confidence in what he has to say…”

    • I believe he means by circular reasoning that the model embodies the hypothesized effect of CO2 in such a way that more CO2, input into the model, will result in more warming. It’s not that CO2 won’t tend to increase back IR radiation – it will – but perhaps the effect of the back radiation on the oceans isn’t well modeled, or perhaps the knock-on responses aren’t properly modeled.

      As you point out, he could have been more explicit here. But that’s my take on it.

      • There are quite a few assumptions that are likely common. Looking at CMIP5, pre-industrial tropical temperatures are estimated to be about 24.75 C for 1000 AD to 1200 AD. Actual SST for the tropics during that period looks closer to 27 C +/- 1 C or so. If all the models start with a “normal” 1 to 2 degrees below actual, they would all tend to run hot.

        Since Bakker has used the models quite a bit, I imagine he has a pretty good list of common assumptions.

    • Steven Mosher

      “His comment on circular reasoning (pp 21-22) makes no sense to me.”

      your comment makes no sense to me.

      killer argument, calling willard, willlard?

      • Makes no sense.
        Makes no sense to me.
        Spot the difference.

        As bender would say,
        Next.

        ***

        Since I’ve been hailed, Pat’s claim might be more interesting if he’d quote the relevant argument. I suspect the usual one against parametrization.

      • ==> “Pat’s claim might be more interesting if he’d quote the relevant argument. I suspect the usual one against parametrization.”

        Indeed. And counter-critiques that point that out are valid.

        And then there’s the other laughable responses in the sub-thread.

        sameolsameol.

      • interestingly enough, jim2’s wasn’t that bad!

      • Willard – Pat’s claim might be more interesting if he’d quote the relevant argument.

        Here it is:

        In the lack of other Earths, ’scientific simulation’ could be proposed. The above experiment [examining other Earth’s under different forcings] could be performed by climate model simulations rather than with other Earths. Nevertheless, conclusions on the hypothesis that “increased atmospheric GHG affects the global climate” on the basis of this approach are not valid. The climate model is a mathematical formulation of the hypothesis (together with some auxiliary hypotheses and physical laws) we want to test. The hypothesis is explicitly added to the climate model. So, the hypothesis is tested by a formalisation of the hypothesis itself.

      • Steven Mosher

        a difference that makes no difference makes no difference.
        or
        a charitable interpretation of
        ‘it makes no sense’
        is
        ‘it makes no sense to me’

        But you never were big on charity

      • Steven Mosher

        Now, imagine that I read Mann’s dissertation and made those bald assertions?

        My experience with Regional climate models is pretty much the same as detailed in the dissertation.

        What is your experience like?
        What about Willard’s?
        What about Pat’s

        Judith works with ECMWF. Day to day she works with it to give advice to folks who want direction. You think her experience using models to give advice might be more relevant than Pat what’s-his-face’s, or yours, or Willard’s?

        Those of us who have actually worked with this data have reservations.
        That’s not nothing.

      • > a difference that makes no difference makes no difference.

        A difference that makes a difference does.

        Next.

      • > Now, imagine that I read Mann’s dissertation and made those bald assertions?

        Now, imagine ze Moshpit read Wegman’s report.

      • > So, the hypothesis is tested by a formalisation of the hypothesis itself.

        So now GCMs are used to test AGW.

        Fascinating.

    • Apparently his committee believed that a synthesis of these critiques, coupled to a bottom-line conclusion about the use of GCMs, constituted an independent contribution. I’m reminded of a history of science professor pointing out the key argument that Vesalius made about why the works of Galen were not adequate guides to human anatomy: “He studied gibbons.” Bringing that point to bear, even though everyone “knew” it, was the conceptual breakthrough needed to move on. I suspect that Vesalius’s actual beautiful anatomical drawings would have been done pretty quickly by someone else once that gibbons-aren’t-adequate-models-of-humans point was taken on board generally.

    • “He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system.”

      I haven’t read it, but if he is saying that testing a hypothesis against a mathematical model built from that same hypothesis isn’t going to be useful, I’d have to say I agree, and it seems trivial to me.

      I don’t see regional models doing well until they can figure out whether the unforced oscillations created by the GCMs are real or not, and what in the models makes them:

      http://www.ldeo.columbia.edu/~jsmerdon/papers/2012_jclim_karnauskasetal.pdf

    • Pat,

      We know you’re sincere and working hard in this area, as many others undoubtedly are too. The fact that you are on this blog is, I hope, an indication that you are open to fresh ideas and challenges to old ones. It is hard for lay people such as myself to challenge the usefulness of climate models without knowing what successful predictions can be made now (beyond general warming when we see El Nino). My reading is that the earlier a model for CO2 influence was created, the further off its prediction is from the present. When a forecast is blown, one expects the forecaster to provide the explanation for the unforeseeable events that sank their prediction. I don’t see that on CO2 much. I see a changing of the subject to how many warmest years we are having. Or, “How about this weird storm,” or, a new spooky “vortex” is appearing. BTW, what happened to all the hurricanes Katrina was supposed to have been the parade leader of?

      Is there any thought to breaking this amazingly complex puzzle down in other ways? Non-water planets’ weather should be easier to model. Is there a lot of effort currently being put into modeling Mars’ GMT? We have probes there. The atmosphere is thin, dry and mostly CO2. If models can work there, that is the place to start, I would think. Also, look at paleo climate to study the glaciation cycle. If paleo climate followed just CO2 (or CO2 in any respect), that would be evidence even the public could understand. And don’t say M-cycles. A very weak, predictable and gradual force does not explain an extreme, rapid and chaotic signal. (You actually have to do a transform to be able to coax out the M-cycle signal.) Explain paleo and you are on a solid first step.

      Is it just possible the dissertation is correct that jumping into comprehensive multi-decade climate models was a bridge too far at this point in the science?

    • “Part I is a provocative essay of dubious value and inferior scholarship.”

      1. it is provocative, perhaps explaining Judith’s point
      2. it has value.
      3. The scholarship is just fine.

      See how easy argument by assertion is.

      You get an A in being fatuous.

  42. Everyone has heard by now that BAS (Bulletin of the Atomic Scientists) recently advanced the minute hand on the Doomsday Clock by 2 minutes. Now, we’re 3 minutes to midnight. This is a group of experts!

    They’re concerned about AGW leading to the demise of humanity. Basic math says, before these experts codified their current collective concern about our future, it was 5 minutes to Doomsday.

    Question: Shouldn’t Obama’s stopping of the seas from rising have bought us at least a few seconds instead of costing us 2 minutes? Is it possible the Earth’s most vulnerable have more to fear from these ‘experts’ than from America’s SUV-driving soccer moms?

    • Did they ever move the hands back from the 1973 scare about a new ice age?

      Ironically, looking at the 800,000-yr reconstructions we are darn lucky to still be riding this relatively long inter-glacial high. The party should be over by now. If CO2 is responsible for our reprieve we should be working to conserve it so it can continue to save future generations from glaciation’s increasing threat.

      • … or, we can return to an economy built on trapping and start hunting polar bears for their fur when the productive finally draw their last breath.

    • John Smith (it's my real name)

      Doomsday Clock
      hysterical
      the new secular monks admonishing ‘repent lest the end is near’
      missing heat?
      I suggest we call it the ‘messiah heat’
      better fits the religion

  43. Quinn the Eskimo

    The three lines of evidence on which attribution rests are temperature records, both instrumental and proxy, physical understanding of climate, and models.

    These are claimed by IPCC and EPA to combine into >95% certainty of attribution.

    Trillions of dollars are being spent and huge changes in public policy are being made based on this conclusion.

    Is it sound?

    Current temperatures are not anomalous in either instrumental or geologic records. The warming that has occurred is regional, not global.

    The claimed physical understanding is a joke. There are a number of excellent quotations in this thread that show this in compelling fashion. Theory and models predict and require the hot spot, but it simply doesn’t exist in nature, as demonstrated by >50 years of balloon data and 35 years of satellite data. The consensus cannot reconcile their theory with these data, but claim they’ve got the physical understanding part nailed down pat.

    The third leg of this no-legged stool is modeling. The models that are wrong about temps, wrong about the hot spot, wrong about humidity and so much else, that are incomputable representations of the inadequate physical understanding, these are the models which are indispensable to the circular reasoning of AGW attribution.

    So, out of that stack of high-density crap we are told with >95% certainty that AGW is an urgent reality and that the world must be remade in its name. The attribution analysis is a bonfire of logical fallacies, one piled on top of another in an enormous positive feedback loop of crap. The edifice is shielded by vast clouds of bafflegab and gobbledygook spewed out by millenarian zealots funded with billions in research grants.

    The AGW geniuses wailing that AGW threatens human health and welfare 100 years from now do so even though cheap energy from fossil fuels has done more to improve human health and welfare than anything in human history. Life expectancy has doubled, and population has gone up 4x or more as direct and indirect results of cheap energy from fossil fuels and industrial civilization. In the big cold waves, for example, millions of people would freeze to death were it not for fossil fuels. But we are told that fossil fuels threaten health and welfare. 100 years from now.

    I guess compared to WWI this is not the craziest thing in history, but it’s getting up there.

  44. “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

    I seriously doubt that this statement could be produced by anyone other than a seriously committed ClimateBaller (TM).

    • In all fairness, Judith didn’t exactly say why she thought it might not get past a defense.

      Perhaps it was for the reasons Pat outlined above?:

      http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-671003

      BTW – did I mention that I have a bridge right near Manhattan that I could let go for real cheap?

    • ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

      I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite. Personal disparagement is mostly all that the AGW groupthink cult of space cadets – such as Joshua and Michael – seems capable of.

      People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.

      http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670985

      • Curious George

        Rob – you are right, under the assumption that the models are not wrong. That’s a very dangerous and totally unproven assumption.

        We may learn about the level of irreducible imprecision of a model – which probably has nothing to do with a real climate. That is a very poor justification for burning tons of coal to power the Yellowstone supercomputer running the model. Especially a model that overestimates the heat transfer by evaporation from tropical seas by 3%.

      • As usual you have misunderstood completely – and substituted your own reality.

      • “People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.” – Indi

        Yeah, what a clueless fool he is.

        If only he was a rolled-gold genius like yourself….

      • Well clueless fools abound – aye Michael?

        But I actually do reference actual science. Unlike yourself.

      • Curious George

        Rob – please prove that models are right. Or that irreducible imprecision of wrong models matters.

      • Thank God for you Rob.

      • As I said George – you have misunderstood entirely – and my experience with you is that it is just not worth it. Ditto and more with Michael.

      • Curious George

        My reality is that modelers are unwilling to correct glaring errors in their models.
        http://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257

      • Rob,

        It will be great when you finally publish and show all those ‘climate scientist’ ninnies how it’s really done.

      • What would be great, Michael, is if you actually tried to understand any of the published science I routinely quote and link to – and not simply pursue your disparagement of things you don’t understand but don’t like, on what is obviously the flimsiest basis. That also describes Joshua to a t and a w and an i and a t. Yea team.

      • Rob,

        I’m too much in awe of you to attempt anything that you can do.

        Besides, 1 thing I’m absolutely certain of, is that I’m no genius. Though I realise that you are unencumbered by such limitations.

      • I can’t think of anything sillier than the antics you and Joshua indulge in. Pat Cassen – in this case making trivial, irrelevant tribal comments – followed up inevitably by the tribal jesters.

        Don’t you get bored with yourself?

      • Rob,

        Just look at your first response to Pat.

        It’s cute that you think comments here are anything but a farce.

        It’s either nuttier than a fruit cake, or over-blown egos strutting and preening as they show off how clever they think they are.

      • Better yet – let’s repeat it so no one – but Michael – is under any illusion.

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite. Personal disparagement is mostly all that the AGW groupthink cult of space cadets – such as Joshua and Michael – seems capable of.

        People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.

        http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670985

      • “Irreducible imprecision”

        A bright shiny bauble has caught Rob’s eye.

      • Irreducible imprecision in atmospheric and oceanic simulations – http://www.pnas.org/content/104/21/8709.long

        One of the many sources I cite.

      • Curious George

        “we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures.”

        Please outline how to distinguish a plausibly formulated model from one that is not plausibly formulated.

    • Rob Ellison – I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite.

      Hmm. Quibbles. Purely motivated. Unfounded spite. Of course.

      • Heh, envy.
        ========

      • I am not prepared to even countenance unoriginality. It is a trivial quibble that quite frankly is not your call. Its sole purpose is to discredit the entire work on the flimsiest pretext. It is moreover utterly irrelevant to a blog discussion.

        Poor scholarship based on a quibble about a potted greenhouse gas theory in a footnote is totally pathetic. This, presumably, passed the review committee. But you take it upon yourself to querulously quibble in a blog comment. Very unimpressive indeed.

        Try addressing the substance of my comment instead.

      • Oh, there’s novelty; a warrior wondering.
        ===============

  45. Judith wrote: “I am further impressed by his thesis advisors and committee members for allowing/supporting this. Bakker notes many critical comments from his committee members. I suspect that the criticisms were more focused on strengthening the arguments, rather than ‘alarm’ over an essay that criticizes climate models. Kudos to the KNMI.”

    Controversial work can be strengthened by critical peer review. Unfortunately, most critical peer reviewers in climate are not motivated by a desire to make skeptical papers stronger.

  46. Dr. Curry – will Mr. Bakker be invited to respond to some of the questions?

  47. Throughout this paper, and throughout many other papers concerning various topics in climate science, the term climate change signal — or often just ‘climate signal’ — appears in a variety of places in a variety of technical contexts.

    This paper is no different from those others: nowhere does it include a precise definition of what a ‘climate signal’ represents – what is its provenance, what are its descriptive characteristics, what are its dimensions, and so on.

    At any rate, I have been asking climate scientists for more than a decade to offer a precise definition for the term ‘climate signal’. So far, not a one of them has responded to this seemingly reasonable request.

    But maybe this week will be different.

    • Consider a parallel, like the definition of God. Were there no climate it would be necessary for us to invent one.
      =================

    • Beta, “At any rate, I have been asking climate scientists for more than a decade to offer a precise definition for the term ‘climate signal’.”

      It appears to be anything that can be declared “unprecedented”. Or perhaps “robustly” unprecedented :)

    • “Climate signal” is usually accompanied by a phalanx composed of mights, coulds, mays, and an occasional should.

    • An example of a signal: take the summer mean temperature in your region for 30 years, e.g. 1951-1980. If you plot this as a probability of a given temperature, it will look like a Gaussian curve with some mean and a standard deviation of nearly 1 C. Then you look at recent summers and find that the new mean is a standard deviation higher. That would be a climate change signal. A picture helps.
      http://www.giss.nasa.gov/research/briefs/hansen_17/shifting.gif
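
      As a rough sketch of that recipe in Python – synthetic numbers standing in for real station data, so only the logic is meaningful:

        import numpy as np

        rng = np.random.default_rng(0)
        baseline = rng.normal(22.0, 1.0, 30)  # stand-in for 1951-1980 summer means
        recent = rng.normal(23.0, 1.0, 30)    # stand-in for recent summers

        mu, sd = baseline.mean(), baseline.std(ddof=1)
        shift = (recent.mean() - mu) / sd
        print(f"shift = {shift:.2f} standard deviations")
        # a shift of order one standard deviation relative to the
        # baseline distribution is the 'climate change signal' here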

      • what about 1941 to 1960?

      • Exactly. What about it?

      • 1912 to1944 and 1945 to 1976.

      • Curious George

        Jim D – a good point. It would be even stronger if you could show such a graph for a global mean temperature. Anyway, what is the region you are showing? Is it measured data or adjusted data or homogenized data? BTW, your standard deviation looks rather narrow .. are you sure it is a correct picture?

      • Here you go, yimmy:

        http://wattsupwiththat.com/2012/08/12/a-quick-look-at-temperature-anomaly-distributions/

        See what you can do with that.

      • Why start at 1951?
        I took HADCRU4 global, then worked out adjacent 10-year averages and subtracted the previous decade from the recent one (i.e. the last point is average(2004 to 2014) - average(1994 to 2004)).
        http://i179.photobucket.com/albums/w318/DocMartyn/decadalCRU4globaldifferences_zpsfe6508ac.png

        So now you know why they started at 1951.
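
        For anyone who wants to reproduce the recipe, here is a minimal Python sketch – synthetic anomalies standing in for the downloadable HADCRUT4 series, so only the method is meaningful:

          import numpy as np

          # synthetic annual anomalies standing in for HADCRUT4 global
          years = np.arange(1880, 2015)
          rng = np.random.default_rng(1)
          anom = 0.005 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

          def decadal_difference(end):
              # mean of the decade ending at 'end' minus the decade before it
              recent = anom[(years > end - 10) & (years <= end)].mean()
              prior = anom[(years > end - 20) & (years <= end - 10)].mean()
              return recent - prior

          for end in range(1900, 2015, 10):
              print(end, round(decadal_difference(end), 3))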

      • Climate isn’t only global. In fact the more important climate signals to people are the regional ones. The question was about climate signals. I think the graph was for North America, but that is just an example. Europe and China would have similar shifts over the last half century or more.

      • We endure the daily weather, we suffer the invocation of climate; neither much predictable.
        ===============

      • Enduring Koldie’s easier.
        More predictable.

      • I gotta rock,
        Push to the tock
        Tic of the clock
        Watch mocks all the talk.
        ================

      • John Smith (it's my real name)

        Jim D… appreciate the explanation
        so a ‘climate signal’ is observed data
        that differs from some previous set of observations
        is there a non ‘signal’ norm?
        I’m suspicious that if your graph shifted to the left it would not be considered a ‘signal’

      • With their use of the term “climate signal” a.k.a “climate change signal”, climate scientists are attempting to draw a parallel with traditional signal processing practice & theory, which has a long and valuable history in the hard sciences. But the question must be asked, does the parallel go only so deep? And if so, how deep?

        Jim D: An example of a signal is if you take the summer mean temperature in your region for 30 years, e.g. 1951-1980. If you plot this as a probability of a given temperature it will look like a Gaussian curve with a mean and standard deviation of nearly 1 C. Then you look at recent summers and the new mean is a standard deviation higher. That would be a climate change signal. A picture helps.

        It would appear from Jim D’s example that what does, or does not, constitute a ‘climate change signal’ is context sensitive.

        The implication here is that it is not possible to apply the term ‘climate signal’ universally inside a specific research paper without first defining and enumerating the specifically applicable characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ in supporting the conclusions of the research paper.

        On the surface of it, the graph Jim D references would be one example of a sub-class of climate signal.

        If there are multiple subclasses of climate signals, then what are the rules for establishing the characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ within the analytical context in which it will be used?

        Without establishing a context-specific definition for the term ‘climate signal’ as it is being used in a specific research paper, then it might be easy for a climate skeptic to assert that the term is being used merely to establish a veneer of scientific credibility which isn’t necessarily present in the research itself.

        Let’s use my graph of Central England temperature (CET), 1772-2013, as an example of how one might go about defining what a climate signal represents within the specific context in which it is being used.

        http://i1301.photobucket.com/albums/ag108/Beta-Blocker/CET/AR5-Figure-1-4–and–CET-1772-2013_zps02652542.png

        Between 1840 and 1870, CET was rising at approximately +0.3 C per decade. GMT was rising at the same time, although not quite at the same rate. Here is an example where a local temperature variation appears to be happening in rough correlation with global temperature variation. Looking at the right side of the graph, the same can be said for more recent temperature variations in CET and in GMT post 1945.

        All right, a question …. can we say with justification that any local rise in Hadley CET is a mere temperature signal, while the rise in Hadley GMT for a similar period is a certified climate change signal? Why or why not?

        Similarly, are there local climate change signals in addition to global climate change signals? If Central England is known to be warming twice as fast as the rest of the planet, does the local temperature change signal also simultaneously represent a global climate change signal?

        If we were to state that we believe the local increase in CET between 1950 and 2000 is most likely a reflection of a persistent change in global temperature which is occurring on a worldwide basis — and if we were also to believe that the localized change in CET is an example subclass of a climate signal we might call a ‘local climate change signal’ — i.e., it is something more than a mere temperature signal occurring locally — then how would we view the rise in CET of +0.4 C per decade in the period of 1810-1835 where there is no corresponding Hadley GMT data to compare with?

        Can we discount the possibility that there was a rough corresponding increase in Global Mean Temperature between 1810 and 1835, even if it wasn’t of the same magnitude as the change in CET? Why or why not? (The same question applies to all pre-1850 temperature trends in the CET record.) Moreover, if we choose to discount that possibility, what lines of evidence would we marshal to support our opinions?

        The larger point here is that within the context of a specific research paper, climate scientists must define precisely what it is they mean by a ‘temperature signal’, a ‘climate signal’, and/or a ‘climate change signal’.

        If there are multiple subclasses of climate signals which apply to different areas of climate science, then climate scientists must document the rules they use for establishing those characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ within the analytical context of the research paper in which the term is being used.

    • Ed Hawkins did a few posts on this. They may or may not answer your questions but here is one of them

      http://www.climate-lab-book.ac.uk/2014/signal-noise-emergence/#more-2509

      I think they have spotted something other than what they think they have spotted but we all have opinions.

      Jim may want to take a look and note how well the ocean and land time of emergence match up. You wouldn’t think CO2 forcing would need beachfront property like that.

      • Hansen showed that the summer signal was clearer than the winter one because there is much more variance in winter average temperatures in the northern hemisphere. The winter curves would be broader, and the shift would be less than a standard deviation.

    • You “think” the graph was about North America and you wave your arms to include Europe and China. You are sinking, yimmy.

  48. Reblogged this on Centinel2012 and commented:
    There is one key assumption that drives the system and the physics, and that is the expected CO2 sensitivity value as established by the 1979 NAS Charney report: 1.5 C to 4.5 C, with the expected value being 3.0 C. If that 3.0 C number is wrong, then the GCMs don’t work; and it seems that more recent papers fall in the lower part of the range or even below it. If the CO2 sensitivity is really 0.5 C to 1.5 C, with an expected value of 1.0 C, then there must be other factors besides CO2 alone, and that makes room for other factors.
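
    The arithmetic behind that point is easy to sketch, assuming only the standard logarithmic CO2-forcing relation (a back-of-envelope Python illustration, not anything from the reblogged post):

      import math

      def equilibrium_warming(C, C0=280.0, sensitivity=3.0):
          # warming for a CO2 rise from C0 to C ppm, given a
          # sensitivity in degrees C per doubling of CO2
          return sensitivity * math.log2(C / C0)

      for S in (1.0, 1.5, 3.0, 4.5):
          print(S, round(equilibrium_warming(400.0, sensitivity=S), 2))
      # at 400 ppm the implied equilibrium warming runs from about
      # 0.5 C to 2.3 C across this range – the assumed sensitivity
      # dominates everything the projection says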

  49. It’s easier to ”predict” the winning numbers in the lottery than to ”predict” the ”localized” temp 6 months in advance! The ”overall” global temp doesn’t need any ”predicting”, because the ”global” temp is always the same!

    Therefore: those ”predictors” should stop fleecing the Urban Sheep – and make themselves rich by predicting the winning lottery numbers.

    B] When they are wrong in ”predicting”, the ”predictors” shouldn’t be blamed – BUT the Dung Beetles that expected correct predictions in the first place. Nobody is forcing the con ”predictors” to predict and make up lies => they are guilty as hell! As long as there is demand for bullshine – there will always be producers… {{DEMAND FOR IT CONTROLS SUPPLY}}

  50. “Global Warming is about politics and power rather than science. In science there is an attempt to clarify; in global warming language is misused in order to confuse and mislead the public.
    The misuse of language extends to the use of models. For the advocates of policies allegedly addressing global warming, the role of models is not to predict but rather to justify the claim that catastrophe is possible. As they understand, proving something to be impossible is itself almost impossible”
    Richard Lindzen

  51. Impact assessments are usually trying to look at the climate change in a small region, like a country. This is known to be difficult, and GCMs often don’t agree with each other 100% on how much the regions change, especially in factors like precipitation that might affect crops or fresh water supplies. This level of unknown must be quite frustrating for planners. Another factor the Dutch would consider important is sea level, and climate models give little to no clue on how quickly the glaciers will melt. In fact, the IPCC punted and gave an estimate assuming that the glacier melt rate doesn’t accelerate. Others, like the Army Corps of Engineers and NOAA, have extended the IPCC projection to range between 0.2 and 2 meters by 2100 for planning purposes. The Dutch would probably also be extending the error bars for caution. Uncertainty in models works both ways. They all underestimated the Arctic sea-ice loss rate, as another example. So, yes, if I were trying to do regional assessments of climate change, I might double the range of the model projections as a precaution, and I would not take their upper limits of dangers as a given. Regional projection with just GCMs is a risky business, even if you think you know the emission rates in the future.

    • There is no model uncertainty – they are absolutely pointless for regional ‘projections’ and for the global summations built on these grid-scale and smaller processes.

      It is utter nonsense and that is the point. 8 years and the penny finally drops? What a dumbass. I suppose it is better late than never.

      In reality you extrapolate from known conditions using a statistical model to devise levels at return periods – and have different immunities for different infrastructure. High for hospitals and emergency services, for instance.

      Jimmy Dee’s typical methodology – on the other hand – is to pull it out of his arse and hope it sounds impressive.

      • Thanks for your input.

      • I’d suggest some actual science but actual science seems a little above your pay grade.

        http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670985

      • Well, they just support the point I made. Want to try again?

      • Which bit supports your unfounded and profoundly lacking in any reference to actual science nonsense?

        Sorry – they say that the range is even broader than the IPCC opportunistic ensembles.

        The questions remain.

        ‘Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        Which solution of many thousands do you pick, and why? Why would you expect a correspondence with physical processes in the real world when it is hugely unknown and vastly uncertain?

        Not that you inhabit the real world Jimbo.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ op. cit.

      • You pick a broader range than the models, Rob. That’s what you have to do when planning with such uncertainty. The last thing to do is to pretend the range represented by the models is the full one.

      • No, as I said – you apply historical data and extrapolate to extremes. You don’t appeal to inherent nonsense. An engineer might even use safety margins, but base them on something real.

    • Guest blog Roger Pielke Sr.
      Are climate models ready to make regional projections?
      The question that is addressed in my post is, with respect to multi-decadal model simulations, are global and/or regional climate models ready to be used for skillful regional projections by the impacts and policymaker communities?
      This could also be asked as

      Are skillful (value-added) regional and local multi-decadal predictions of changes in climate statistics for use by the water resource, food, energy, human health and ecosystem impact communities available at present?

      As summarized in this post, the answer is NO.
      http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/

      Kevin Trenberth Was Correct – “We Do Not Have Reliable Or Regional Predictions Of Climate”
      https://pielkeclimatesci.wordpress.com/2012/05/08/kevin-trenberth-is-correct-we-do-not-have-reliable-or-regional-predictions-of-climate/

      • How water resources and ecosystems behave under climate change is a very difficult thing to get from GCMs that don’t represent the factors affecting those components well. Does that mean we don’t have a problem? No, it means we have an even bigger problem, with an unpredictability that only gets worse with more climate change. The way the uncertainty scales with the amount of climate change means we should be doing everything we can to minimize both.

    • Regional Climate Downscaling: What’s the Point?
      https://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf

      • > The dude is impervious

        And impermeable …

        Now he would have us double – nay, triple – the assumed boundaries of “scarey bear” just-in-case. No practical suggestions on how to prevent society from violently imploding while we shut everything down, of course

      • The only way to reduce uncertainty is to reduce the factors leading to climate change in the first place.

        And the ‘factors’ that led to the ‘Dust Bowl’ were???
        the Sahel Drought?
        the frequency of North Sea Storms?
        the Little Ice Age?
        the Anasazi Droughts?

        Religions have an all powerful deity, to the exclusion of others.

        It appears CO2 is filling that role for many.

    • Jim – Dr. Pielke (Senior) makes the point repeatedly that Regional Downscaled GCMs exhibit NO SKILL. “No Skill” is a term of art that you should understand if you intend to continue this line of argument.

      • brent – Well done, I was typing my reply as you were posting your (far more complete) responses.

      • See my comment above. Saying that regional climate change is less predictable doesn’t make it better in any way. It actually makes it worse, because you lose a comfort factor you might have by believing a model as a constraint. The only way to reduce uncertainty is to reduce the factors leading to climate change in the first place. Models won’t help by themselves.

      • Jim – now you are embarrassing yourself. Your statement “Models won’t help by themselves” implies that GCMs add “some” value when combined with other unspecified sources. “No Skill” means that you are better off without the models, because you don’t mislead yourself into thinking that they are adding value to the analysis.

      • David, yimmy dee is always embarrassing himself. We have tried to help him. The dude is impervious.

      • David, if you are not even going to trust the temperature change from GCMs, what would be your starting point? Paleoclimate? Extrapolation? Note that regional downscaling is a particular method that doesn’t apply to the GCMs themselves, so you have to make that distinction. They didn’t say that GCMs have no skill, nor could they know in advance whether a 50- or 100-year projection has no pattern correlation with what actually will happen to global temperatures and precipitation. Given that they can represent seasonal changes, which are an order of magnitude larger than climate changes, I would not rule them out in how they handle a 2-4 C global warming. To the extent they handle seasonal change, they can handle climate change. It’s a subset of what they do already.

      • There are thousands of feasible solutions to any model. Plus or minus 10 degrees C?

  52. It’s cute that you think comments here are anything but a farce.

    It’s either nuttier than a fruit cake, or over-blown egos strutting and preening as they show off how clever they think they are. Michael

    It’s like a movie called Bikini Girls on the Dinosaur Planet. Wow – bikinis and bad animatronics. A moment of levity – but a diet of Joshua’s and Michael’s pathological pursuit of the lowest common farce grows very tedious very quickly. I miss most of the antics – as I find I do more often and more generally. Although Joshua can’t quite believe that I can bear to pass up most of his smearings of unsavory – if not downright unseemly – commentary.

    It all makes for a quagmire very deliberately engineered to devalue, debase and trivialise. Something about trolls, why they do it and why they attribute their own base motives to others. It is not really something I am much interested in ‘deconstructing’.

    • Where did my italics go?

      It’s cute that you think comments here are anything but a farce.

      It’s either nuttier than a fruit cake, or over-blown egos strutting and preening as they show off how clever they think they are. Michael

      • Here’s a nice fruitcake trick. Use a banana bread recipe but substitute jelly for the aged bananas. If you let it sit around awhile it even smells like it has fruits and nuts in it.
        =================

  53. Pingback: Nueva tesis doctoral: el paradigma de los modelos climáticos está en crisis | PlazaMoyua.com

  54. Pingback: Nueva tesis doctoral: el paradigma de los modelos climáticos está en crisisDesde el exilio | Desde el exilio

  55. Here are some guidelines for developing Computer Simulations.

    Golden Rules of Verification, Validation, Testing, and Certification
    of Modeling and Simulation Applications

    Can anybody show me evidence that any GCM has been validated along these lines? (especially rules 4, 16 and 17)

    Steven Mosher?

    I’ve been looking but all I find is mindless newspeak like this, which only confirms my worst suspicions.

    • ‘Suspicions’? Is that what that funhouse mirror cackling I hear is called?
      =====================

    • KenW, I could find no such evidence. If you go far upthread, you will find a tuning discussion with references and explanations and examples on what I did find. The models are parameterized for processes they don’t simulate. For NCAR CAM3, these expressly include (technical manual references) 4.5 Prognostic Condensate and Precipitation Parameterization and 4.7 Parameterization of Cloud Fraction. Both are important feedbacks.
      The CMIP5 experimental design called for submitting 10-, 20-, and 30-year hindcasts. Think of this as in-sample verification of the model math and parameterization. Then it called for using the RCPs to submit (IIRC) 30-year projections. All in the ‘near term’ experimental design half; long term is more than a century. Think of these ‘near term projections’ as enabling out-of-sample validation. And the 18-year pause means CMIP5 has now failed validation by the >17 year temperature divergence criterion Santer established in his 2011 paper.
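
      A minimal Python sketch of that out-of-sample check, with synthetic numbers standing in for both the observations and the ensemble (so only the test logic is meaningful):

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1998, 2016)  # an 18-year window, past the 17-year mark

        # synthetic stand-ins: observed anomalies and a 20-member ensemble
        obs = 0.005 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)
        ens = 0.02 * (years - years[0]) + rng.normal(0.0, 0.08, (20, years.size))

        def trend(series):
            return np.polyfit(years, series, 1)[0]  # degrees C per year

        ens_trends = np.array([trend(member) for member in ens])
        lo, hi = np.percentile(ens_trends, [2.5, 97.5])
        print(trend(obs), (lo, hi))
        # if the observed trend falls outside the ensemble's range over
        # a 17+ year window, the divergence criterion is failed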

      • I like this one from page 490 of the BAMS pdf:

        “Users of CMIP5 model output should take note that decadal predictions with climate models are in an exploratory stage.”

      • I’ll give Naomi a little lesson in verifying your model.

        Take your PDE. Take an analytic function (perhaps one that mimics a solution) and plug it into your equation. Move everything that remains to the right-hand side. This becomes your forcing function. Note your boundary conditions (inherited from your choice of function).
        Now if you put these boundary conditions and this forcing into your model and solve, you should get approximately your original function back. Now do a grid-refinement study. Compute the L2 error between your solution and your known solution. You should start to see a convergence rate. Verify that it matches the known rate for your solution method. (A minimal sketch of this procedure follows this comment.)

        Oh, but that’s insane, you say. How can we do such a thing for a massive code like a climate simulation? This is your out, yes?

        No. This is exactly how we verified our massively parallel solvers at Sandia. Fluid solvers, fire solvers, coupled elastic material/fluid solvers, etc etc…. Solvers much more complex than a climate model.

        However, a caveat. This requires your code to be well designed and flexible. It requires using a solution framework that can be separated from your code. It requires adaptivity and that your code be grid independent.

        Long live fortran.
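
        A minimal sketch of that manufactured-solution procedure, in Python rather than Fortran, assuming a 1-D Poisson problem -u'' = f as a stand-in for a full solver (the manufactured solution u = sin(pi*x) and the grid sizes are illustrative choices, not any particular production code):

            import numpy as np

            def solve_poisson(n):
                # Second-order central-difference solve of -u'' = f on n
                # interior points of [0,1], with u(0) = u(1) = 0.
                h = 1.0 / (n + 1)
                x = np.linspace(h, 1.0 - h, n)
                f = np.pi**2 * np.sin(np.pi * x)   # forcing manufactured from u = sin(pi*x)
                A = (np.diag(np.full(n, 2.0))
                     - np.diag(np.ones(n - 1), 1)
                     - np.diag(np.ones(n - 1), -1)) / h**2
                return x, np.linalg.solve(A, f)

            errors = []
            for n in (15, 31, 63, 127):            # h halves at each refinement
                x, u = solve_poisson(n)
                errors.append(np.sqrt(np.mean((u - np.sin(np.pi * x))**2)))  # discrete L2 error

            # Observed order of convergence between successive grids; a healthy
            # second-order scheme should print values approaching 2.
            for e_coarse, e_fine in zip(errors, errors[1:]):
                print(np.log2(e_coarse / e_fine))

        If the printed orders do not approach the scheme’s theoretical rate, the discretization or the boundary handling has a bug; that is the whole point of the exercise.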

      • Rud, that was quite a brawl upthread, but I’m thinking on a much lower plane here. It’s not easy making software do what it’s supposed to do, even when you know EXACTLY WHAT it IS supposed to do.

        Never mind the 32 free parameters. I haven’t even got to that yet.
        (With 32 parameters you can torture any piece of software into telling you any answer you want to hear – but you could never trust anything it says.)

      • Curious George

        “Fluid solvers, fire solvers, coupled elastic material/fluid solvers, etc etc…. Solvers much more complex than a climate model.” Nick, do you really believe that a climate model should not include fluids, fires, biological influences? No, what climate modelers want to achieve is unprecedented. They just are not up to the task.

    • KenW
      I suggest rules #3, #7, #12, and #13 are critical.

      Imo, the issue is whether AGW is going to make the climate significantly worse for humans – and where, and when. The changes will not occur everywhere at the same time. Imo, we have no reliable data to answer those questions today.

      • Actually, they are all important. I’m looking for 4, 16 and 17 because that should be the easiest evidence to produce. If a major software consulting firm would put its stamp of approval (and reputation) on the fidelity of a GCM, that would show they are at least trying to be diligent. The Oreskes paper tells me they’re trying not to.

        Too bad nobody’s got what I’m looking for. I’ll just have to come back to this later. I’m sure the subject of simulations will come up again ;-))

    • “Verification and validation of numerical models of natural systems is impossible.”

      Always good when your paper on verifying your model begins with this.

    • Hmmm…Oreskes and model validation. Maybe she changed her mind, or her mind is changing.

      http://www.nssl.noaa.gov/users/brooks/public_html/feda/papers/Oreskes1.pdf

      I can help her…

  56. Of course they know the truth about the models – or, even more importantly, the biases of the inputs – but they don’t like anyone pointing it out, and they don’t like to admit it, because turkeys don’t vote for Christmas.

  57. All these issues arise because people seem to believe that it is possible to model climate deterministically. This is rather like trying to predict the behaviour of a gas by mapping the trajectory of every molecule. We have 135 years of measured temperature data. If we look at them stochastically instead of deterministically, the variance spectrum of this short time series shows the characteristics of “pink noise” (= “flicker noise” = “1/f noise”), like many other naturally generated time series. There is nothing unusual about the climate of the twentieth century, so why all the fuss? (A short sketch of such a spectral check follows this comment.) See http://blackjay.net/?p=161#comments .
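
    For the curious, a hedged sketch of that spectral check in Python (synthetic data stands in for the 135-year record; the series length and random seed are arbitrary choices of mine). The log-log slope of the periodogram is near 0 for white noise and near -1 for pink, i.e. 1/f, noise:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1620                              # e.g. 135 years of monthly values
        white = rng.standard_normal(n)

        # Shape white noise into pink noise by scaling Fourier amplitudes
        # as 1/sqrt(f), so that spectral power falls off as 1/f.
        freqs = np.fft.rfftfreq(n)
        spec = np.fft.rfft(white)
        spec[1:] /= np.sqrt(freqs[1:])
        series = np.fft.irfft(spec, n)

        # Periodogram and its log-log slope (zero frequency excluded)
        power = np.abs(np.fft.rfft(series))**2
        slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
        print(slope)                          # roughly -1 for 1/f noise

    Run on an observed temperature series instead of the synthetic one, the same fit indicates where the record sits between white and red noise.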

    • Interesting…pink noise. But we humans like to find phantom patterns – in star positions (constellations), clouds, and maybe climate. Is that Jay Zuz I see in the tree bark?

  58. Bakker’s choice to look critically at climate modelling is a good one. Although he has not solved the problem, he has made a useful contribution. In particular, he exposed the folly of trying to make sense of ensembles of climate models in the hope of arriving at a more accurate result. This would only work if the models were all correct except for an added random series. At any place on the earth’s surface the seasons would then repeat themselves exactly, but they don’t, because of random effects which need to be understood and applied at the correct place in the model.

    It is only by understanding and validating every physical process in the model that success will be achieved.

  59. Group of physicists

    All readers should go back to this comment (now in over 200 social media climate threads) and study carefully the errors in the assumptions for the models.

    • @Group of physicists

      Mr physicist, ”natural climate cycles” have nothing to do with the phony ”global warmings”. Climate changes for different reasons, not because of an increase / decrease in global temp = they can confuse even physicists…

      2] ”clouds” don’t increase OR decrease the global temp!!! Clouds make the upper atmosphere warmer – and the ground cooler = overall it is the same (when for the physicist the upper atmosphere is not part of this globe… tragic..)

      b] WV and clouds make day temp cooler on the ground, BUT nights warmer – which is why monitoring only the hottest minute in 24h and ignoring all the other 1439 minutes -> is concocted for brainwashing the ignorant. Before they do a complete job on you, go there and arm yourself with real proofs and facts; please read every sentence; if you are a physicist, you’ll understand:
      https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

    • Group  of  physicists

      Stefan

      “WV, and clouds make day temp cooler on the ground, BUT nights warmer”

      Nights warmer? Not according to valid physics as here, nor according to my study of actual data that gave mean daily temperatures …

      Wettest third: Max: 30.8°C Min: 20.1°C
      Middle third: Max: 33.0°C Min: 21.2°C
      Driest third: Max: 35.7°C Min: 21.9°C

      Where is your study and explanation based on the laws of physics?

      • Doesn’t the fact that WV or CO2 act to diffuse heat more quickly, even if they also retain it, work separately to impede emission, since black-body emission energy is not linear in black-body temperature? The Second Law takes a loss every time heat is transferred in a more diffuse fashion. To make up for this loss of emission efficiency, the whole has to rise in mean temperature to remain in budget.

        I also hypothesize this is why ocean currents transferring heat to the poles increase the global mean temp.

    • Group of physicists said: ”Where is your study and explanation based on the laws of physics?”

      G’day physicist! I have studied and compared Sahara and Amazon-basin places that are on the same latitude. b] I have researched coastal areas of Australia and the inland deserts. c] Now I live in a place surrounded by tropical rainforest – only 13 degrees south of the equator (for the last 5 weeks it was difficult to sleep at night, like in a sauna; rain started and cooled last night).

      2] In nature, a few factors affect temp in humid places – density, from which direction the winds blow, the altitude of the clouds and so on. Putting numbers or formulas on such things – only shows that you don’t know what you are talking about. Experiment in the kitchen or in a laboratory means nothing! b] ”learning” ANYTHING connected to the phony ”global” warming is – created for confusion and to get people away from the reality!!!

      3] I have read a couple of your comments – you are trying to prove the Warmists wrong by proving that Santa comes through the back door, not via the chimney… The reason I asked you to read my post is to realize first that Santa is not real – if you know physics, you will understand the post better than most – it’s a challenge: read every sentence of that post first, so we can have an interesting and constructive debate. Because: now I know what you know, but you don’t know what I know = I have an unfair advantage over you. Read the post; I’m a soft target. So far you are using physics to prove that the Warmist dragon has six heads instead of seven… Please read the post first: https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

      • Group  of  physicists

        Yes, well, my survey was far more comprehensive, and you haven’t even quoted your data and results, whereas I have published both. I was also careful to select locations that were inland by at least 100 km, in the tropics, and in the hottest month, when the Sun passed nearly directly overhead each day, so that angles of incidence don’t play a significant role. I also selected 15 locations from three continents within a range of altitudes from 0 to 1200 m, and then adjusted temperatures (using realistic lapse rates) to what they would have been at a common altitude of 600 m.

        So, as I said, where is your survey showing raw data, any adjustments and the means derived? Where is your evidence of 15 °C of warming for each 1% of water vapor, as implied by IPCC documentation?

        What I have written, first and foremost, is derived from valid physics.

        It is now summarized on a website here that is endorsed by our group that now numbers five persons qualified in physics or suitably knowledgeable in thermodynamics. All endorse the content of the website.

        What I also pointed out is that the radiative-forcing GH paradigm implies a warming of about 15 °C for each 1% of water vapor in the atmosphere. You have not produced evidence of such.

      • Group  of  physicists

        OK Stefan – Reading your linked site and commenting as I go …

        “Almost ALL the warmth is produced on the ground and in the water”

        No it’s not. On Venus next to nothing is “produced” (whatever you mean by that) on the ground, yet it’s hotter than 460°C. On Earth the solar radiation is also nowhere near sufficient to explain the surface temperatures.

        Your language is not the language of physics, so I have no idea what processes you are thinking of when you write “oxygen & nitrogen expand instantly and waste the extra heat in few minutes. Those two gases … disperse that extra heat, to be collected by the unlimited coldness in the upper atmosphere.”

        Well, seeing that oxygen and nitrogen do not radiate “that extra heat” to space, and nor does convection continue into space, you have proved to me that you don’t have a clue as to what is happening.

        And, as I’ve said many times, there is nothing that holds together any parcel of “warmed air” as it supposedly traverses the height of the troposphere. That is because molecules move randomly in all directions between collisions, and the chance of any particular molecule making its way through 10 km of air without the help of wind is infinitesimal. If some upward wind blows some air in a generally upward direction, that air does not cool in accord with the adiabatic environmental lapse rate, because the wind is adding energy, and so the whole process is not adiabatic.

  60. “Decision makers do need concrete answers”

    All weep.

    • And
      “Most climate change scientists are well aware of this and a feeling of discomfort is taking hold of them”

      Can this discomfort of the 97% not be measured somehow?

  61. > The ’climate modelling paradigm’ is characterized by two major axioms:

    Are there any citations?

  62. Note to TonyB
    Re CET changes
    I just checked the Met Office’s CET data: year 1659 is adjusted, as is the rest of the whole 350-year-long CET record; the average adjustment is 0.03 C, and for some years it goes to just above 0.07 C.
    I am afraid I appear to be to blame for your displeasure, but the Met Office is far too embarrassed to admit that they have been wrong ever since their annual CET numbers were first published, and even more so to be associated with someone called ‘vukcevic’. For more details see my email of 29 July 2014 or my comment on WUWT
    http://wattsupwiththat.com/2015/01/08/anthropogenic-warming-in-the-cet-record/

    • Vuk

      Thanks for that. I remember our conversation at the time. Whilst small, these changes run throughout the record and should have been documented as such. I have today asked the Met Office if that will be done by way of some formal note.

      Congratulations on your diligence, and if the changes were due to you, well done also to the Met Office for responding.
      tonyb

      • Indeed, as I said at the time, they are minor changes, as shown here:
        http://www.vukcevic.talktalk.net/CET-upgrade.gif
        (apologies to HR, I posted in the wrong place)

      • Hi Tony
        Thanks for the email. I now have one up on Dr. Svalgaard: he has been trying for about 2–3 years to change the 310-year-long SSN (sunspot) record and has not got the official change accepted, while here is ‘pseudo-science’ Vuk – just one email and the 350-year CET got changed. It would have been nice of the Met Office to tell me too.
        I will email the appropriate people and thank them for accepting the recommendation.

      • Vuk

        Yes, there is no doubt the CET record was changed thanks to your diligence. Whilst small, the changes are still larger than the tiny differences that determine whether the hottest temperature in the record is THE hottest or merely in the top ten.

        The Met Office were circumspect in their handling of the 2014 data, and they have been receptive to your work on CET, so credit to them as well as you.

        Tonyb

      • Indeed, ranking of the warmest and the coldest years is another important point. Since not all years are adjusted by an equal amount, even the tiniest change may make a difference.

      • Steven Mosher

        WHAT… WHAT WHAT
        how DARE they change history

      • Hi Steven
        I’m starting to appreciate your sense of humour.

        Linear-regression correlation of the two statements (quoted below): R^2 = 0.65 within 95% certainty; had the first been correctly phrased, R^2 would increase to 0.75, again within 95% certainty. (A toy sketch of the month-weighting calculation itself follows this comment.)

        Vukcevic, July 2014: Since the monthly data are made of the daily numbers and months are of different lengths, I recalculated the annual numbers, weighting each month’s data within the annual composite according to the number of days in the month concerned. This method gives annual data which are fractionally higher, mainly due to the short February (28 or 29 days). The differences are minor, but still important: the maximum difference is ~0.07 and the minimum 0.01 degrees C. The month-weighted data calculation is the correct method.

        MetOffice, 2015: Because February is a shorter month than all the others, and is usually colder than most other months, our previous method, which gave February equal weight alongside all other months, caused the annual temperature values to be pulled down, i.e. giving estimated annual values which were very slightly too cold (the difference varying between 0.01 and 0.07 degC).

        On the matter of the models you defend so valiantly: using the (sadly, totally ignored) planetary magnetic model, I get results which don’t entirely conform to your view
        http://www.vukcevic.talktalk.net/CET-WS-Fcst.gif
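
        For readers following along, a toy illustration of the month-weighting point quoted above; the monthly temperatures are invented, and the month lengths are for a non-leap year:

            # Day-weighted vs. unweighted annual mean from monthly means
            days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
            monthly_t = [3.2, 3.9, 5.7, 8.0, 11.2, 14.3,
                         16.2, 16.0, 13.5, 10.1, 6.3, 4.2]  # hypothetical CET-like values

            unweighted = sum(monthly_t) / 12.0
            weighted = sum(d * t for d, t in zip(days, monthly_t)) / sum(days)
            print(round(unweighted, 3), round(weighted, 3))

        Because February is both short and cold, giving it equal weight pulls the unweighted mean slightly too low; with these invented numbers the difference is about 0.03 degC, of the same order as the 0.01–0.07 degC effect described in the two statements.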

      • Steven Mosher

        well vuk

        I am having fun with the people who questioned whether a historical record could be improved.

        They will never admit their conspiratorial ideation.

        but they are kooks

      • Mosh

        Vuk’s method of calculating is more accurate. Kudos to him for suggesting it, and kudos to the Met Office for accepting it. Not the same as using unreliable or unlikely data in the first place, which then gets manipulated to turn it into gold.

        I am currently watching one of the links to a lecture you posted from Cambridge that I thought included Tamsin. Two-thirds of the way through and as yet no Tamsin.

        (sorry did not watch the other 22 videos you linked to)

        tonyb

      • Hey Mosh

        False pretences! The video has finished. I had thought it was to be Tamsin presenting, but it was someone else presenting her work. I’m not sure he was a bona fide scientist, as he wasn’t wearing a white lab coat with an array of pens sticking out of his pocket, nor thick black-framed glasses….

        BTW, I do not believe in conspiracies, nor that all climate scientists are idiots. Some are activists, some are overly dominant and some overplay their hand, but I wish sceptics would stop believing that they automatically know more than someone who has studied the subject for decades, and that everyone is in on some giant hoax and a conspiracy to change the world.
        tonyb

      • nottawa rafter

        Tony, I agree. Skeptics should stick to the science. They lose the high road when they don’t.

      • The point Tony is simple.
        Skeptics jumped to a conclusion:
        the data changed, therefore it must be bogus.
        Once they see that one of their own drove the change, they shut up.
        Nothing here, move along.

        Problem. Where is the apology?
        Where is the accountability?
        Where is Willis demanding that other skeptics
        denounce those who jumped to conclusions?

      • Oh, my Gaia, moshe; make me dig up the etymology of ‘apologia’. I’m looking for new species of dung beetle. Oh, that’s entomology.
        ==============

      • mosh

        You are clearly lumping all sceptics together as if we are one monolithic block who all believe the same thing. Clearly the sceptical world incorporates all sorts of viewpoints.

        As an example of wrongly seeing us all as the same, you say:

        ‘Where is Willis demanding that other sceptics denounce those who jumped to conclusions?’

        Willis certainly does not speak for me. Neither does Anthony Watts, nor Heartland, nor Christopher Monckton.

        You SOMETIMES do. Judith SOMETIMES does. The Met Office SOMETIMES does. I often do…

        But ‘I am not a number I am a free man.’

        http://www.imdb.com/title/tt0061287/quotes

        tonyb

    • @ vukcevic, tonyb, mosher, nottawa rafter, et al

      “…….but I wish sceptics would stop believing that they automatically know more than someone who has studied the subject for decades and that everyone is in on some giant hoax and a conspiracy to change the world.
      tonyb”

      “but they are kooks”

      “Indeed, ranking of the warmest and the coldest years is another important point. ”

      “Tony, I agree. Skeptics should stick to the science. They lose the high road when they don’t.”

      So, all of you agree that the ‘Annual Temperature of the Earth (TOE)’ is precisely defined and that we have a historical database with sufficient coverage and accuracy to rank the years since 1880 with hundredths-of-a-degree precision, so that the science alone justifies worldwide headlines announcing that 2014 was the warmest year since 1880, by 0.02 degrees? And that the inevitable scientific conclusion from this blistering record, as widely reported, is that ACO2 is causing the TOE to rise rapidly and will result in planetary catastrophe unless immediate, worldwide, coordinated action is taken to curb it?

      Or is it possible that other, non-scientific factors may influence the headlines, the attribution conclusions, and the recommendations for action, but skeptics should just not mention them? In an environment where every attempt by skeptics to introduce conflicting, SCIENTIFIC data is instantly and thoroughly refuted by ‘KOCH BROTHERS! KOCH BROTHERS!’ from those who have studied the subject for decades and are NOT engaged in a giant hoax or a conspiracy?

      I am not the only ‘unscientific kook’ who has noticed that the Patron Saint of the ‘Climate Science Consensus’ is Saul Alinsky, NOT Karl Popper.

      “……that everyone is in on some giant hoax and a conspiracy to change the world.” The key word Tony is ‘everyone’. It doesn’t have to be ‘everyone’ in the climate science field, and clearly isn’t. In fact, the very definition of ‘conspiracy’ implies that ‘everyone’, or even MOST are NOT ‘in on it’. Just the ones driving the boat.

      I give you this from a formerly respected climate scientist who has first hand experience with the non-conspiracy:

      “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

      Nope; no conspiracy here. Only unscientific kooks could possibly suspect one.

      • Bob

        I have written a dozen articles on historic temperatures and the over-confidence we ascribe to them, and said numerous times that the global average (of almost anything) means that the important regional nuances are ignored.

        I think scientists may use potentially bad data, such as global SSTs back to 1850. I think they are too ready to believe the data they habitually use for their models are scientifically derived, accurate, and fit for purpose. I think many have no historical perspective and aren’t aware of previous periods of cold or warmth or changing sea levels etc., and therefore come to confident conclusions based on less than a full picture of the climate.

        It is the bad science arising from bad data – as we see it – that needs to be challenged, together with asking that greater historical context be used.

        EVERYONE is not in on a giant hoax or conspiracy. SOME do have much greater influence than they should. SOME are activists and over promote their findings, which are not always as robust as they claim.

        But that merely reinforces my point that we are not dealing with idiots who are ALL intent on changing the world order by making up things according to some grand plan.

        What it means is that the uncertainty monster runs amok amongst much of climate science, but that many do not recognise the creature and SOME would prefer to pretend it does not even exist.

        There are too many of those people involved in the climate science industry who won’t admit to uncertainties in the business or even that there are large and fundamental gaps in our knowledge.

        The only time I have ever heard a senior scientist say ‘we don’t know’ is when Thomas Stocker admitted that we didn’t know the temperatures of the deep oceans, as we didn’t have the technology to measure them.

        The ‘don’t know’ monster needs to be added to Judith’s climate-science menagerie, as it’s every bit as active as its stable-mates.

        tonyb

      • It is an ‘Extraordinary Popular Delusion and Madness of the Herd’. Sure, there were those bellowing early, rightly in view of the ominous portents. But the madness that seized the herd needed little breathing together, and now the madness is worse than the portents, which have fizzled out like summer lightning.
        ===================

      • “You are a slow learner, Kim.”
        “How can I help it? How can I help but see what is in front of my eyes? Two and two are four.”
        “Sometimes, Kim. Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once. You must try harder. It is not easy to become sane.”

      • @ tonyb

        Hi Tony.

        “I have written a dozen articles on historic temperatures and the over-confidence we ascribe to them, and said numerous times that the global average (of almost anything) means that the important regional nuances are ignored.”

        I know you have, and for a long time in fact you have been my ‘go to’ guy on the site when I need a dose of sanity.

        But do you really believe this?

        “I think many have no historical perspective and aren’t aware of previous periods of cold or warmth or changing sea levels etc and therefore come to confident conclusions based on less than a full picture of the climate.”

        That people with PhDs in climate-related fields, who have worked for years, successfully, to advance to the pointy end of the climate-science pyramid, aren’t aware of the things you list above?

        In your drive to present a ‘reasoned’ perspective are you willing to overlook the blatantly obvious fact that Climate Science writ large DOES coordinate efforts to suppress ANY dissension from the ‘party line’ and DOES do everything in its power to destroy apostates, personally and professionally, and DOES work closely with (exclusively) leftist politicians to provide ‘studies’ and ‘conclusions’ that will provide ‘scientific’ justification for policies that the leftists/progressives have been working for decades to advance?

      • Bob

        You said;

        “In your drive to present a ‘reasoned’ perspective are you willing to overlook the blatantly obvious fact that Climate Science writ large DOES coordinate efforts to suppress ANY dissension from the ‘party line’ and DOES do everything in its power to destroy apostates, personally and professionally, and DOES work closely with (exclusively) leftist politicians to provide ‘studies’ and ‘conclusions’ that will provide ‘scientific’ justification for policies that the leftists/progressives have been working for decades to advance?”

        This will make a fascinating article. You write it and I will keep an open mind and check your evidence.

        As for my saying that many do not have an understanding of the historical context, I would point out that, as you see many times here, it is often dismissed as anecdotal, while novel proxies, such as tree rings, are given unreasonable weight in the historic narrative.

        The lack of historic context was unfortunately spelt out to me in a recent personal communication from the Met Office, whose belief in the historic record stops at 1772 and who do not believe there is any great merit in using research funds to carry on the work started by the likes of Lamb.
        tonyb

      • @ tonyb

        “This will make a fascinating article. You write it and I will keep an open mind and check your evidence.”

        Well, most of my evidence actually comes from Dr. Curry’s posts on this site, the commentary provided by the denizens, and 50 years of observing ‘progressives’ developing the self-licking ice-cream cone comprised of progressive politicians and progressive scientists providing them with data justifying political action. ACO2 as an existential threat to the biosphere is merely the latest. And the furthest reaching, as EVERY action has a ‘Carbon Signature’ and is thus subject to regulation. And taxation.

        I will cite one recent example, documented on this site, that involved ‘Climate Scientists’ producing a paper with pre-agreed conclusions, coordinating who were to be the authors, in which professional journals it was to be published, what it was to say, how it was to be reviewed, and which MSM outlets would publish stories about the paper and what actions they would demand, citing the paper as evidence. All before any actual ‘data’ was ever ‘collected’. I don’t remember the exact subject, but the leaked emails documenting the conspiracy(?) appeared here fairly recently.

        And I am not writing an article. You have a lifetime of observing the process just as I have. And are a far better writer. YOU write it.

        “As for my saying that many do not have an understanding of the historical context, I would point out that, as you see many times here, it is often dismissed as anecdotal, while novel proxies, such as tree rings, are given unreasonable weight in the historic narrative.”

        You make my point rather than yours. These people are no more stupid than the average bear, yet they claim that a few trees scattered around the world, carefully selected (‘calibrated’) from thousands of their ‘uncalibrated’ brother and sister trees living in the same area, provide a better record of past planetary climate than the handwritten accounts of people observing the ‘climate’ first hand. What they actually realize, and will go to the mattresses to justify, is that a ‘calibrated tree’ can be made to say anything they desire it to say, while eyewitness accounts are immutable – and therefore to be discounted, at all costs, as anecdotal and scientifically useless.

      • Bob

        I don’t think we are disagreeing that much. I think the number of climate scientists perpetrating a ‘hoax’ or involved in a ‘conspiracy’ is small, but there are those few who are also activists, have an influence far beyond their numbers, and are pushing the envelope. Politicians may also take up the cause, and political/scientific/green activism is a very powerful beast.

        The latter are represented at the highest levels at the UK Environment agency and the UK Met Office. Many of those slightly lower down the chain are merely trying to develop the science, albeit they will be nudged in certain directions by dint of grants being available for certain types of work, and peer pressure. I know of a number of scientists in both organisations who are scientifically and personally sceptical but, as you observe, being a sceptic is unlikely to further your career.

        So it is not the great mass of scientists who particularly concern me, but those relatively few with great influence who seem to have discarded the scientific method.

        I have many articles in the pipeline at present, so will not have time for a properly researched article that develops the theme. However, I did write a piece on the politics of the subject from a UK viewpoint, which I am horrified to note was written well over 5 years ago! I repeat it below in case it is of interest, although many of the internal links will have disappeared.

        —– —— —–
        Article: Politics of climate change. Author: Tony Brown

        Climate change has become highly politicised, and the British Govt – long-time leaders in funding research into the subject – were very heavily implicated in making it a political issue in order to promote their own agenda. An unusual subject for me, but very well referenced, with numerous links and quotes from such bodies as the Environmental Audit Committee of the House of Commons.

        http://noconsensus.wordpress.com/2009/10/19/crossing-the-rubicon-an-advert-to-change-hearts-and-minds/#comments

        ——– ——-
        tonyb

      • @ tonyb

        “Article: Politics of climate change. Author: Tony Brown”

        Thanks Tony!

        Forwarded the link to my daughter, who teaches 8th grade science at a local high school.

        Hope she reads it.

        Bob

  63. Kenneth Fritsch

    A timely thread here for me. I became interested in how, and how well, climate models handle the estimation of equilibrium climate sensitivity (ECS), and embarked on this investigation because of recent publications making estimates of ECS from observable data. It appeared that at some level these comparisons of modeled versus observed ECS, while readily admitting the uncertainties in the underlying data used for the empirical calculations, were not detailing the possible shortcomings of the modeled side of the comparison and how those values were being calculated/estimated.

    While I am admittedly no more than a layperson in this matter, and only recently familiarized with this area of climate science, I was surprised to find that the ECS estimation from the CMIP5 models for the AR5 review – using the abrupt 4xCO2 experiment and regression of the resulting global-mean net TOA radiation against surface temperature change – required significant temperature and net TOA radiation adjustments using the pre-industrial control spin-up (piControl). Even then, the regression results for the various CMIP5 models depended on the regression method, as a choice between ordinary least squares and total least squares, as was suggested to me by the poster Carrick. I was also surprised to learn that most models are tuned to give a reasonable TOA radiation balance. (A sketch of this regression appears at the end of this comment.)

    My current study in this area has involved determining how well the various CMIP5 models under RCP4.5 conditions arrive at a reasonable TOA energy budget, by using the potential global sea-water temperature changes (ocean heat content) from each model and model run. Interestingly, that study is on temporary hold until I can resolve with KNMI (who have always been very cooperative with me in these matters) the issue of obtaining global-mean TOA radiation data from gridded nc-file data. Downloading the gridded data from the original sources, while an option, can be very time-consuming, and is better avoided by using the KNMI data that have been conveniently converted to global means.

    I have previously looked at the CMIP5 climate models from the perspective of comparing the noise levels in these models, and it is there that one sees large differences between models, and also noise levels that make shorter-term trend comparisons with the observed temperatures a most uncertain proposition. Even the time dependence, in terms of fitting a particular ARMA model to these models’ outputs, varies considerably from model to model. It was these frustrations that led me to attempt to extract something more meaningful and practical from these models – which in turn led me to look at ECS and the transient climate response (TCR). Even the authors of the method used in calculating the currently published ECS values of the CMIP5 models have recommended that more work be done using TCR. Perhaps TCR is a more practical emergent climate parameter to use – since it deals with changes over decades, not millennia, as for ECS – but in my view it lets the TOA budget off the hook.
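
    To make the regression step concrete, here is a hedged sketch (synthetic numbers, not CMIP5 output) of the calculation described above, often called a Gregory regression: regress global-mean net TOA imbalance N against surface temperature change dT from an abrupt 4xCO2 run, read the forcing off the intercept and the feedback off the slope, and note how ordinary and total least squares can disagree. The x-intercept F/lambda is halved for a single doubling, assuming forcing scales with the logarithm of CO2:

        import numpy as np

        rng = np.random.default_rng(1)
        F, lam = 7.4, 1.2              # hypothetical 4xCO2 forcing (W/m2) and feedback
        dT = np.linspace(0.5, 5.5, 150) + 0.2 * rng.standard_normal(150)
        N = F - lam * dT + 0.4 * rng.standard_normal(150)

        # Ordinary least squares of N on dT
        slope, icept = np.polyfit(dT, N, 1)
        ecs_ols = -icept / slope / 2.0          # x-intercept, halved for 2xCO2

        # Total least squares via the leading principal direction of the
        # centred (dT, N) cloud, allowing for noise in both variables
        X = np.column_stack([dT - dT.mean(), N - N.mean()])
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        slope_tls = vt[0, 1] / vt[0, 0]
        icept_tls = N.mean() - slope_tls * dT.mean()
        ecs_tls = -icept_tls / slope_tls / 2.0

        print(ecs_ols, ecs_tls)        # the two regression choices differ

    With noise present in both N and dT, OLS attenuates the slope and shifts the implied ECS, which is one reason the OLS-versus-TLS choice mentioned above matters.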

    • Willis Eschenbach

      Kenneth, first, a most excellent comment. I do enjoy a man who runs his own numbers.

      Regarding the models and the equilibrium or transient climate sensitivities (ECS & TCS): over a series of posts, and with invaluable assistance from commenters, I established that the global temperature output of the models is equivalent to a lagged linear function of the forcings. See my posts:

      Testing … testing … is this model powered up?
      Model Charged with Excessive Use of Forcing
      Zero Point Three times the Forcing

      I went on to use that to look at climate sensitivity:

      Model Climate Sensitivity Calculated Directly From Model Results

      Regarding your comment on “obtaining global mean TOA radiation data from gridded nc file data”, what you need may be available in the CERES data. It’s available here. My first of many uses of the CERES data was Observations on TOA Forcing vs Temperature .

      If all you need is some form of the gridded monthly variation in TOA solar, let me know and I’ll extract it from the CERES nc data as a 1° x 1° grid.

      Finally, if you are using R, the package “ncdf” contains the functions to read/write the .nc files.

      Best wishes in your explorations,

      w.

      • Kenneth Fritsch

        Thanks for the comments and links, Willis. I use ncdf4 in R and that is not my holdup. From KNMI I can obtain the nc gridded data already converted to global means, and the downloads are in kilobytes, while the gridded nc files from DKRZ can be several tens or hundreds of megabytes. I am currently downloading from DKRZ, since Geert Jan van Oldenborgh of KNMI has a busy schedule right now and we cannot resolve some differences I have seen between KNMI’s conversion methods and mine. Small differences matter when making the net TOA radiation calculation, because the rsdt, rsut and rlut numbers are large and the required subtraction yields a relatively small net number. (See the sketch after this comment.)

        http://cera-www.dkrz.de/WDCC/ui/EntryList.jsp?acronym=ETHr4

        I have found my initial study in this area following the path of my studies of temperature proxies used in published temperature reconstructions: even after reading both the published papers and the critical analyses in blog posts and papers, I do not fully grasp the state of the science until I do my own analysis of the details going into those papers.
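
        For anyone wanting to reproduce that reduction, a hedged sketch of the global-mean net TOA calculation (the filename is hypothetical; rsdt, rsut and rlut are the standard CMIP5 variable names mentioned above; requires the netCDF4 package):

            import numpy as np
            from netCDF4 import Dataset

            ds = Dataset("model_toa_monthly.nc")    # hypothetical file with (time, lat, lon) fields
            lat = ds.variables["lat"][:]
            rsdt = ds.variables["rsdt"][:]          # incoming shortwave at TOA
            rsut = ds.variables["rsut"][:]          # reflected shortwave
            rlut = ds.variables["rlut"][:]          # outgoing longwave

            net = rsdt - rsut - rlut                # small residual of three large terms

            # Area weighting: grid cells shrink with cos(latitude)
            w = np.cos(np.deg2rad(lat))[None, :, None]
            weights = np.broadcast_to(w, net.shape)
            global_mean = (net * weights).sum(axis=(1, 2)) / weights.sum(axis=(1, 2))
            print(global_mean[:12])                 # monthly global-mean net TOA, W/m2

        Because net is the small difference of terms of order 100–340 W/m2, even sub-percent inconsistencies in the averaging (area weights, masking, file conventions) show up directly in the result, which is presumably why two independent conversions can disagree.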

    • KF, an excellent post. You might want to look at essay Sensitive Uncertainty. It delves into this topic a bit, with emphasis on the empirical rather than the CMIP5 derivations, the differences between TCR and ECS, and even some ECS/TCR implications. Plenty of footnotes to source materials.

      • Kenneth Fritsch

        Thanks for the comment and the essay lead – I’ll have a look.

      • Kenneth Fritsch

        If that is the paper by Nic Lewis and Judith Curry, I have read it a couple of times and recommended it to others. It has been primarily Nic Lewis’ work and writings on empirical derivations of ECS and TCR that got me interested in learning more. I also enjoyed reading his Bayesian analysis on the matter and his analyses of what others are doing. Like you say, the essays do not go into great detail on the CMIP5 or other model derivations, and that is why I wanted a deeper look at their competition, to determine whether it was even worthy of the (intellectual) battle.

    • Kenneth, first, a belated thank you for your kind comments about me. I’m sorry that I have only just spotted what you wrote.

      Secondly, I echo your comments regarding the difficulties in processing the raw CMIP5 data files. I don’t find size in itself a problem – I have downloaded well over a terabyte of monthly data. But the failure of the modelling centres to agree on, and stick to, a standard file format is very unhelpful, and makes automating the processing of the data difficult and time-consuming. A single large netCDF file for each run, with all models having the same month end, would have been much easier to process.

      I did eventually manage to prepare global and zonally-averaged annual TOA imbalance summary files by model for the abrupt 4x CO2 experiment, but I concluded that I really needed to deduct the corresponding values from the preindustrial control runs, and I haven’t had a chance to resolve the problems involved in doing so, or to move on to other CMIP5 ‘experiments’. If you can get KNMI to provide properly processed annual CMIP5 data – ideally zonal-averages as well as global means – that would be great.

  64. the previous post, Week in Review, contained a link to an Ed Hawkins post.
    http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

    The post includes a graph; I’ll try to insert the image.

    The red hatched box is described as “indicative IPCC AR4 assessed likely range for annual means”. This box is clearly different from (although overlapping with) the grey-shaded model ensemble. It seems to me that if the red box represents the IPCC future prediction out to 2035, then in a way they’ve already moved away from the model ensemble being the best estimate, and something else has replaced it.

  65. Antonio (AKA "Un físico")

    Alexander, I have no time to read your full thesis: only your abstract. I hope you find time to read at least my abstract at: https://docs.google.com/file/d/0B4r_7eooq1u2TWRnRVhwSnNLc0k/edit . Please focus on the idea (A.c). Very few scientists are aware of that idea, but time-scales are key to getting statistically valid climate-change predictions.
    I agree with your thesis: CMIP5 models have no predictive capacity. But I bet you could not defend that thesis at ETHZ (in Switzerland), as it is the domain of Reto Knutti.

  66. @ Dr. Curry

    “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

    Using ‘seriously doubt’ in the same context, I ‘seriously doubt’ that if I climb up on the roof of my house and jump off, my demise will be precipitated by asphyxiation as I pass through the stratosphere en route to the moon.

    • I have no personal knowledge of this guy, but reading his blog makes me think he’s open-minded. I’ll suggest Isaac Held would support a rigorous review of climate models.

  67. Thank you Dr. Curry for another important post. As I’ve interviewed for atmospheric-science jobs in insurance/reinsurance/catastrophe modeling, government labs and academia, one thing has struck me: it is nearly impossible to get a job in this field if you work from data and are not a climate modeler. Reinsurance companies (by virtue of catastrophe models) set their rates based largely on unvalidated output from climate models. I’ve been told I’m not qualified to work as a scientist in their industry (catastrophe modeling) because I’m a synoptician and stochastic modeler and not a physical modeler. I eventually left the field in frustration.

    Climate models are not only being used to make policy. They are being used in insurance. They are being used to evaluate risk associated with extreme events by engineers and actuaries who understand absolutely nothing about how they work. They are also being used in water resources by engineers who have no understanding of how they work. As a statistical modeler, I find it troubling that models so complex that they can’t be fully understood or validated are used to make decisions about risk. No rational person in their right mind would make a decision based on these models. Unfortunately, insurance companies are being forced to by regulators. We need to have a conversation about the appropriate uses of climate models before they lead us into making very bad decisions.

  68. I think Nickels has made a number of interesting comments on this thread. GCMs do not use the best modern numerical methods. I don’t quite understand why.

    I’m not sure what Steve Mosher is getting at with regard to tuning. It would be a sad commentary on modelers’ intelligence to suppose they don’t know what effect parameters have on model outputs. They no doubt tune the parameters to give more reasonable results, or ones that agree better with the data. Turbulence models are tuned this way all the time, and specialists are smart enough to know what they are doing.

    • According to Mosher the models are not tuned. They are tooned. When they fail, as they all do, they are sent to the Climategate Metro Universal Studios over in Toontown, for reanimation.

    • Thx. If I had to guess why the solvers are old, it’s $. Sandia modernized its solvers, I believe mostly on ASCI money. I worked with guys at NCAR with an interest in writing a modern solver, but no one is still there, due to $. Plus, these solvers are very hard (all the different wave speeds and physics), and some of the experts in atmospheric science would have to buy in and help, and there didn’t seem to be much motivation for that.

  69. “I was very impressed by Bakker’s intellectual integrity and courage in tackling this topic in the 11th hour of completing his Ph.D. thesis. I am further impressed by his thesis advisors and committee members for allowing/supporting this. ”

    I am impressed too. Perhaps he would like a job on one of our production lines, since he cannot be hired by a university, as the state funding won’t cover someone not toeing the line.

  70. Just a kid trying to get a job by exploiting the pause.

  71. Thank you Judith!
    I thought the paper – “The Robustness of the Climate Modelling Paradigm”, the Ph.D. thesis by Alexander Bakker from The Netherlands – was well ordered, clearly phrased, intelligent and quite coherent.

  72. It has been perfectly obvious ab initio that GCMs in general, and the IPCC-referenced GCMs in particular, have no utility for climate forecasting. Their outputs fall into the “not even wrong” category of results. There is also no reason for scientists to appear bewildered and to seem unable to conceive of other forecasting methods.
    Simple, transparent and reasonable forecasts can be made using the natural periodicities in the temperature data, with the neutron count and 10Be data as the best proxy for the solar “activity” driver.
    For a discussion of the uselessness of the climate models see Section 1 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    For a discussion of the important natural periodicities see Section 2, and for forecasts of the timing and amplitude of the coming cooling, which began at the peak of the natural cycle in 2003, see Section 3.
    Also see
    http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
    The key periodicity on time scales of human interest is the quasi-millennial periodicity most clearly seen in Fig 5.
    I have drawn attention to these model deficiencies and the need for new methods and forecasts on various threads on this blog for several years now, but Judith has never seen fit to comment.
    Perhaps the thesis discussed here will finally cause her to recognize the need for a different approach.

    • Dr. Page, “It has been perfectly obvious ab initio that GCMs in general and the IPCC referenced GCM’s in particular have no utility for climate forecasting.”

      The sad part is that the climate models could provide reasonable information. Judith had a post before where guys at GFDL “initialized” their model by resetting the ENSO-region SSTs to actual observations. It did a pretty good job then, because models include a “convective triggering” parameterization for surface temperatures over about 28 C. For some reason the geniuses still believe that climate is a “boundary value” problem which the models will “discover”, but with the models missing actual surface temperatures by several degrees on a water world, reality hasn’t quite kicked in for the “experimenters”.

      Since the Clausius–Clapeyron relation also depends on surface temperature, not surface anomaly, the models don’t get diddly right as far as water vapor, precipitation and clouds. I find it hard to believe they are that stupid, but perhaps they are? (A small illustration of the temperature sensitivity follows this comment.)
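
      A small illustration of that point, using a Magnus-type empirical approximation for saturation vapour pressure (the two temperatures are just a hypothetical modelled-versus-observed tropical SST pair):

          import numpy as np

          def e_sat(T_kelvin):
              # Magnus-form saturation vapour pressure in hPa (empirical fit)
              t = T_kelvin - 273.15
              return 6.112 * np.exp(17.62 * t / (243.12 + t))

          for T in (299.0, 301.0):       # a ~2 K bias on a tropical ocean
              print(T, round(float(e_sat(T)), 1))

      Because the relation is exponential in absolute temperature, the ~2 K bias shifts saturation vapour pressure by roughly 13%, so a model that matches anomalies but misses the absolute SST can still get the moisture, precipitation and cloud fields wrong.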

  73. captdallas, your reference to convective triggering is most interesting; see
    http://3.bp.blogspot.com/-ZBGetxdt0Xw/U8QyoqRJsWI/AAAAAAAAASM/ewt1U0mXdfA/s1600/TrenPPT.png
    This is Fig 2 (from Trenberth) in Section 1.3.2 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    You can see why including a convective-triggering parameter would improve the connection to reality. However, models of this type remain inherently incomputable; see Section 1.2.

  74. Captdallas, with such a large number of variables you can get compensating errors producing the same outcome, so there is no means of knowing what a reasonable outcome is from the model outcomes themselves. It is well worth the time to watch the Essex presentation linked in Section 2, which says:
    “The modelling approach is also inherently of no value for predicting future temperature with any calculable certainty, because of the difficulty of specifying the initial conditions of a sufficiently fine-grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4

    Models are often tuned by running them backwards against several decades of observation; this is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial, and especially the key millennial, range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.”

    • Generally, the past “tuning” is done to low absolute values. Instrumental SST and what the sky “sees” are not always the same. So if you base what the model limits are on where the initial values or “tuning” criteria sit, you can come up with all sorts of reasons why “models” won’t work.

      If the models are properly initialized, that would provide the information needed to evaluate how well they may be able to perform. Once that GFDL example gets to 300 K, it starts leveling off, as it should.

      Personally, I wouldn’t write them off completely, because I am a cheap bastard and they can produce something useful, but I think cleaning house in a few “institutions” might stimulate a little more attention to detail.

  75. If you are a cheap bastard you should use my forecasts based on the natural-cycle approach – I don’t get paid by anybody, so the forecasts are free to any interested parties, and clear and simple enough that even politicians might understand them if they read them. You can see why academia in general would not see any value or profit in terms of publications, positions, grants, or Government jobs and Honours if they used simple common sense and basic observations in climate forecasting.
    How can you possibly estimate the effect of anthropogenic CO2 without knowing where we are relative to the natural cycles, especially the millennial cycle? All the efforts of the modelers have been a wasted journey down a blind alley. Climate science has gone backwards since Lamb was replaced by Wigley at the Met. See
    http://3.bp.blogspot.com/-aYs23POSPlY/U82ElnxCsMI/AAAAAAAAATs/Rgk4mQB8PCc/s1600/temps-ar1lamb.jpg
    Forecasts based on Lamb’s figure from the FAR would do better than all the modeling since.
    The link is also Fig 8 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

  76. The models are just supposed to be impressive, like the white lab coats doctors wear in TV commercials.

    That they’re nonsense doesn’t matter. That they’re known to be nonsense, and provably nonsense, doesn’t matter. It’s all peer reviewed.

  77. Steven Mosher

    start here

    https://www.newton.ac.uk/event/clpw04

    Watch everything

    https://www.newton.ac.uk/event/clpw04/seminars
    This is one example … Tamsin’s work. Excellent.

    https://www.newton.ac.uk/seminar/20101207110011401

    It’s 40 minutes.

    I predict that David Young is the only person who will watch it.

    • Steven Mosher, “This talk will focus on the practical aspects of building an emulator, and show how our emulator can be used to learn about HadCM3, and to learn about the value of palaeoclimate measurements for parameter calibration.”

      https://lh5.googleusercontent.com/-fvZRSyR4ngk/VNJpsjGE2ZI/AAAAAAAAMhY/Nn7aVyiUK5k/s766/2%2520gisspi%2520v%2520ipwp.png

      https://lh4.googleusercontent.com/-OvQHzJy8gFU/VLF3XHImAqI/AAAAAAAAMGM/X1vv5Tx0kiI/s689/oppo%2520over%2520mann.png

      Decisions decisions

      Oh wait!

      https://wattsupwiththat.files.wordpress.com/2014/09/comic_rollercoaster_610.jpg

      Thank goodness for NOAA wtf NOAA, the models are saved!!

      • Steven Mosher

        scribbling on a blog is a bad hobby.
        knit more. comment less

      • StevenMosher, “scribbling on a blog is a bad hobby.
        knit more. comment less”

        That’s good advice. On a water world, missing the tropical SST by a few degrees would pretty much invalidate any climate model. This particular thread is about “questioning the robustness of the climate modeling paradigm”.

        http://climexp.knmi.nl/data/icmip5_tas_Amon_modmean_rcp60_0-360E_-15-15N_n_+++.png

        Note the peak temp of about 301.5 K (28.4 C).

        http://climexp.knmi.nl/data/iersstv4_0-360E_-15-15N_n.png

        Since the models have parameterized tropical convection using a “convection triggering” temperature of around 28 C, the models aren’t getting it. That is pretty plain and simple: no getty SST, no getty climate.

        Now, if you toon the models to paleo compiled by some of the true giants in the field, there are no wiggles. If you use Uk’37 “thermo-bugs” you are going to have a low-temperature bias of about 1 C in ideal areas and more than 2 C in less-than-ideal areas. So how the f are you going to toon models to paleo in this current state of the paleo art?

        Wave your arms all you like, but models gots issues.

      • http://climate.calcommons.org/article/why-so-many-climate-models

        “The above figure, which comes from the report High Resolution Climate-Hydrology Scenarios for San Francisco’s Bay Area, amply illustrates the conundrum facing all those who would make use of climate projection data. In this figure, 18 different climate models are represented, with temperature change values ranging from less than 1 degree to 6 degrees increase, and precipitation change values ranging from a 20% decrease to a 40% increase. With such a wide range, what value does one use for planning? Why is there such a plethora of models?”

        “Despite the existence of a wide variety of climate models, there are conceptual problems in treating these as independent entities, amenable to statistical treatments such as averaging or taking standard deviations. To begin with, climate models have a shared history, or in other words a genealogy (Masson and Knutti 2011, Winsberg 2012). There is common code between many of these models. Technical knowledge moves from modeling group to modeling group as scientists relocate. At present we lack a detailed characterization of the shared history of climate models, and it is not at all clear what we do with such a treatment in a statistical analysis sense. It is certainly inappropriate to treat different climate models as randomly sampled independent draws from a hypothetical model space, which is what would be required by rigorous statistical analysis. (Winsberg 2012).”

      • Steven Mosher

        captain.

        you don’t validate a model for a use by comparing it to reality.

        you compare to the specification for that use case.

        PERIOD.

      • Steven Mosher, “you don’t validate a model for a use by comparing it to reality.”

        Now that is right: “I” compare models with reality as a form of verification. “You” are more tolerant of crap than “I” am :)

        Your little subtlety is lost on most of the taxpayers, as is calling a “forecast” a “projection” when it misses.

      • Steven, aren’t the climate models we are discussing trying to model the actual climate? Aren’t they represented as models of the real climate to policy makers?

      • ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full

        So pick the solutions – thousands of them – that resemble the recent past.

        https://scienceofdoom.files.wordpress.com/2014/12/rowlands-2012-fig1.png

        But if they don’t get the ‘component discrete algorithms’ right – and they don’t – what is the point?

      • Matthew R Marler

        Steven Mosher: you don’t validate a model for a use by comparing it to reality.

        Do you trust the results computed by a model that has never been shown to match what it is claimed to model, within specified tolerances? Why?

      • Where did you get the chart, Rob? It looks like the judgement criterion there is goodness-of-fit between observations and models. It doesn’t say anything about plausibility. I wonder if they say on their grant applications that they want to spend large sums of other people’s money developing plausible climate models. It’s like Arkansas or W. Virginia requesting a few hundred million dollars of federal money to build some plausible bridges.

      • “Steven, aren’t the climate models we are discussing trying to model the actual climate? Aren’t they represented as models of the real climate to policy makers?”

        Yes.
        Yes.

        Neither of which matters.

        Let me explain it for you AGAIN.

        Here is the DoD definition:

        “validation. The process of determining the degree to which a model or simulation and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model.”

        Validation is the process of determining the DEGREE to which a model is accurate. Note the precision of this definition. It does not say determining whether or not a model MATCHES reality. NONE DO. It says measuring the DEGREE to which it does.

        Next, note that the degree of accuracy is CONDITIONED by the USE.

        Let’s do a simple example. I am a user. I want a model that predicts the range of my aircraft to within 50 nm, say, under standard atmospheric conditions. As a modeller, your job is to build a model that is that accurate.

        Suppose you build a model that gets the range right to within 25 nm under standard day conditions, and suppose it gets the tropical day range 100 nm off. Is the model valid?

        Yes, the model is valid, even though it’s 25 nm off for a standard day and 100 nm off from reality for a tropical day. WTF? Why?

        It’s valid because it meets the accuracy requirement set down by the user: within 50 nm for a standard day. The spec says nothing about tropical days. (A code sketch of this check appears after this comment.)

        Why do we define validation this way? Simple: because models are always wrong, and because, if you make “matches reality” a spec, people will always find a way to complain after the fact.

        So are climate models valid?

        1. Ask who the user is.
        2. Ask how much accuracy they demand for their particular use.

        Here is what we know.

        You are not the user, so your conception of what is valid doesn’t count.
        PERIOD.

        Another way to look at this: you cannot ask the question of validity, because there is no spec.

        Asking the question shows you don’t understand what validation means.

        It means measuring the degree of agreement with reality from the perspective of a given use.

        You guys think validity means measuring agreement from God’s perspective.

        It doesn’t.
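
        Here is a minimal sketch of the spec-based check in the aircraft example above (all numbers hypothetical). The point the code makes explicit is that the spec names only the standard day, so the 100 nm tropical-day error never enters the verdict:

```python
# Toy illustration of validation against a use-case spec rather than against
# "reality" in general. All numbers are hypothetical.

# The user's spec: maximum allowed range error (nm) per named condition.
spec = {"standard_day": 50.0}

# Measured model errors (nm) under two conditions:
model_errors = {"standard_day": 25.0, "tropical_day": 100.0}

def is_valid(errors: dict, spec: dict) -> bool:
    """Valid iff every condition named in the spec meets its tolerance.
    Conditions outside the spec (here, tropical_day) are never tested."""
    return all(abs(errors[cond]) <= tol for cond, tol in spec.items())

print(is_valid(model_errors, spec))  # True: 25 nm <= 50 nm is all the spec asks
```

        Change the spec to include a tropical-day tolerance and the same model fails: on this view, validity is a property of the model-spec pair, not of the model alone, which is why “there is no spec” blocks the question entirely.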

      • I understand all that Steven. But is that how it is sold to the policy makers and the general public? Or are climate scientists happy to have the public and policy makers believe that the computer climate models are like the computer models that got people to the moon and back?

      • Matthew R Marler

        Steven Mosher: you are not the user, so your conception of what is valid doesn’t count.

        As voters, letter-writers, investors, planners and such, we are users.

      • Oh impenetrable walls of Ivory Tower!

        Steven Mosher, I’m just a boneheaded engineer, but your definition of validation amounts to “We make stuff up”.

        What you describe here is nothing but a fri**ing computer game.

        Even so, whatever your notion of valid:

        3. I ask: show me evidence that any GCM undergoes INDEPENDENT evaluation and accreditation. Name the agency or company. Please.

      • Steven Mosher

        Matthew,

        “As voters, letter-writers, investors, planners and such, we are users.”

        NO, you are not. You are not a user.
        You’ve never looked at the data.
        You’ve never talked to decision-support people to ask for their guidance.

        In some cases you voted for the users; in other cases the users were appointed; in some cases the users are businesses.

        In no case are you a user.

      • Steven Mosher

        KenW.

        They can’t undergo accreditation until users define a spec.

        You are asking the wrong question.

        Think harder.

      • Steven Mosher

        Don

        “I understand all that Steven. But is that how it is sold to the policy makers and the general public? Or are climate scientists happy to have the public and policy makers believe that the computer climate models are like the computer models that got people to the moon and back?”

        Go watch the video I posted. It will start to give you a tiny taste of how it’s sold, in the UK at least.

        As far as emission reductions go, you don’t need a GCM to decide. A simple energy balance calculation will give you all the basis you need to tax carbon, or build nukes: your choice.
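
        For what it’s worth, that “simple energy balance calculation” can be sketched in a few lines. The forcing expression is the standard Myhre et al. (1998) approximation; the sensitivity parameter `lam` is an assumed value and is precisely the contested quantity:

```python
import math

# Zero-dimensional energy-balance estimate of equilibrium warming from a
# doubling of CO2. lam is an ASSUMED sensitivity parameter (K per W/m^2).
C0 = 280.0   # reference CO2 concentration (ppm)
C = 560.0    # doubled CO2 (ppm)
lam = 0.8    # assumed climate sensitivity parameter

forcing = 5.35 * math.log(C / C0)  # ~3.7 W/m^2 for a doubling
delta_T = lam * forcing            # ~3.0 K equilibrium response

print(f"forcing: {forcing:.2f} W/m^2, warming: {delta_T:.1f} K")
```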

      • Okay Steven, I’m reading and thinking very hard…

        “No spec.”

        I’m baffled.

        Do people get paid for something here?

      • Software with no specs, no evaluations, no picky customers, and 32 free variables.

        The Ivory Tower must be Paradise.

      • nottawa rafter

        Mosher

        After reading all your comments, even the attempts to clarify, I understand how the climate establishment can look at themselves in the mirror despite the growing divergence between the models and reality. I am not convinced that even 50 years of near-flat temperatures would change the consensus’s mind, given the elegance of the AGW theory.

        “But, but, it just has to be right; the physics says so.” … “Well, the model didn’t get validated by reality, but what the hey, it met our requirements.”

        Both statements are born of the same mindset: rationalization.

      • Steven Mosher: “Captain,

        You don’t validate a model for a use by comparing it to reality.

        You compare it to the specification for that use case.

        PERIOD.”