Questioning the robustness of the climate modeling paradigm

by Judith Curry

Are climate models the best tools? A recent Ph.D. thesis from The Netherlands provides strong arguments for ‘no’.

The usefulness of GCM climate models, particularly for projections, attribution studies, and impact assessments has been the topic of numerous Climate Etc. posts:

A new thesis from The Netherlands (pointed out to me by Jeroen van der Sluijs) provides a very interesting and unique perspective on this topic.

The Robustness of the Climate Modelling Paradigm, Ph.D. thesis by Alexander Bakker from The Netherlands [link]

The context for the thesis is related in the Preface:

In 2006, I joined KNMI to work on a project “Tailoring climate information for impact assessments”. I was involved in many projects often in close cooperation with professional users.

In most of my projects, I explicitly or implicitly relied on General Circulation Models (GCMs) as the most credible tool to assess climate change for impact assessments. Yet, in the course of time, I became concerned about the dominant role of GCMs. During my almost eight-year employment, I was regularly confronted with large model biases. In virtually all cases, the model bias appeared larger than the projected climate change, even for mean daily temperature. It was my job to make something 'useful' and 'usable' from those biased data. More and more, I started to doubt that the 'climate modelling paradigm' can provide 'useful' and 'usable' quantitative estimates of climate change.

After finishing four peer-reviewed articles, I concluded that I could no longer defend one of the major principles underlying the work. Therefore, my supervisors, Bart van den Hurk and Janette Bessembinder, and I agreed to start again on a thesis that intends to explain the caveats of the 'climate modelling paradigm' that I had been working in for the last eight years and to give direction to alternative strategies to cope with climate-related risks. This was quite a challenge. After one year of hard work, a manuscript had formed that I was proud of, that I could defend, and that had my supervisors' approval. Yet, the reading committee thought differently.

According to Bart, he has never supervised a thesis that received so many critical comments. Many of my propositions appeared too bold and needed some nuance and better embedding within the existing literature. On the other hand, working exactly on the data-related intersection between the climate and impact communities may have provided me a unique position where contradictions and non-trivialities of working in the 'climate modelling paradigm' typically come to light. Also, not being familiar with the complete relevant literature may have been an advantage. In this way, I could authentically focus on the 'scientific adequacy' of climate assessments and on the 'non-trivialities' of translating the scientific information to user applications, biased solely by my daily practice.

The thesis is in two parts: Part I addresses the philosophy of climate modeling, while Part II describes specific impact assessment studies. I excerpt here some text from Part I of the thesis:

In the light of this controversy about the dominant role of GCMs, it might be questioned whether the state-of-the-art fully coupled AOGCMs really are the only credible tools in play. Are there credible methods for the quantitative estimation of climate response at all? And more important, does the current IPCC approach with large multi-model ensembles of AOGCM simulations guarantee a range of plausible climate change that is relevant for (robust) decisions?

Another important consideration concerns the expense. Apart from the very large computation time (and costs), the post-processing and storage of the huge amounts of data absorb much of the intellectual capacity of the researchers involved. That capacity is no longer available for interpretation and creativity. This may come at the expense of the framing and communication of uncertainties, and of the quality of some doctoral dissertations.

The ’climate modelling paradigm’ is characterized by two major axioms:

  1. More comprehensive models that incorporate more physics are considered more suitable for climate projections and climate change science than their simpler counterparts because they are thought to be better capable of dealing with the many feedbacks in the climate system. With respect to climate change projections they are also thought to optimally project consistent climate change signals.
  2. Model results that confirm earlier model results are perceived as more reliable than model results that deviate from earlier results. Especially the confirmation of the earlier projected Equilibrium Climate Sensitivity range of 1.5°C to 4.5°C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as 'scientific robustness'.

This chapter explores the legitimacy and tenability of the 'climate modelling paradigm'. It is not intended to advocate other methods as better, nor to completely disqualify the use of GCMs. Rather, it aims to explore what determines this perception of GCMs as the superior tools and to assess the scientific foundation for this perception. First, section 2.2 explains the origin of the paradigm and illustrates that the paradigm is mainly based on the great prospects of early climate change scientists. Then section 2.4 elaborates on the pitfalls of fully relying on physics. Subsequently, section 2.3 argues that empirical evidence for the perceived GCM superiority is weak. Thereafter, section 2.5 argues that biased models cannot provide an internally consistent (and plausible) climate response, which is especially problematic for local and regional climate projections. Next, the independence of the multiple 'lines of evidence' is treated in section 2.6. Finally, in section 2.7 it is concluded that the climate modelling paradigm is in crisis.

The state-of-the-art fully coupled AOGCMs do not provide independent evidence for human-induced climate change. GCM-based multi-model ensembles are likely to be (implicitly) tuned to earlier results. The confirmation of earlier results by GCMs is therefore no reason for higher confidence. The confidence in the GCMs originates primarily from the fact that, after extensive tuning of the feedbacks and other processes, a radiative balance is found at the Top-Of-Atmosphere. This is indeed quite an achievement, but the tuning usually provides only one of countless solutions. Multi-model ensembles tuned to a particular response give only limited insight into the possible range of outcomes. Besides, the GCMs include only a limited selection of potentially important feedbacks, and sometimes artefacts have to be incorporated to close the radiative balance.

The founding assessments of Charney et al. (1979) and Bolin et al. (1986) did see the great potential of future GCMs, but based their likely-range of ECS on expert judgment and simple mechanistic understanding of the climate system. And even today, the IPCC acknowledges that the model spread (notably of multi-model ensembles) is only a crude measure for uncertainty because it does not take model quality and model interdependence into account. Nevertheless, in practice, GCMs are often applied as a ’pseudo-truth’.

The paradigm that GCMs are the superior tools for climate change assessments and that multi-model ensembles are the best way to explore epistemic uncertainty has lasted for many decades and still dominates global, regional and national climate assessments. Studies based on simpler models than the state-of-the-art GCMs or studies projecting climate response outside the widely accepted range have always received less credence. In later assessments, the confirmation of old results has been perceived as an additional line of evidence, but likely the new studies have been (implicitly) tuned to match earlier results.

Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported, but a rosy presentation has generally prevailed. Large biases seriously challenge the internal consistency of the projected change, and consequently they challenge the plausibility of the projected climate change.

Most climate change scientists are well aware of this, and a feeling of discomfort is taking hold of them. Pointing out the contradictions is often met not with arguments but with annoyance, and is experienced as non-constructive. "What else?" or "Decision makers do need concrete answers" are often-heard phrases. The 'climate modelling paradigm' is in 'crisis'. It is just a new paradigm we are waiting for.

I was gratified to see three of my recent papers in his reference list:

JC comments

There isn’t too much in Part I of the thesis that is new or hasn’t been discussed elsewhere.  His discussion on model ‘tuning’ – particularly implicit tuning – is very good.  Also I particularly like his section ‘Lines of evidence or circular reasoning’.

However,  the remarkable aspect of this to me is that the ‘philosophy of climate modeling’ essay was written not by a philosopher of science or a climate modeler, but by a scientist working in the area of applied climatology.  His experiences in climate change  impact assessments provide a unique perspective for this topic.  The thesis provides a very strong argument that GCM climate models are not fit for the purpose of regional impact assessments.

I was very impressed by Bakker's intellectual integrity and courage in tackling this topic in the 11th hour of completing his Ph.D. thesis.  I am further impressed by his thesis advisors and committee members for allowing and supporting this.  Bakker notes many critical comments from his committee members.  I checked the list of committee members, and one name jumped out at me: Arthur Petersen, a philosopher of science who has written about climate models.  I suspect that the criticisms were more focused on strengthening the arguments than on 'alarm' over an essay that criticizes climate models. Kudos to the KNMI.

I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.

Bakker’s closing statement:

The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.

I have made several suggestions re ways forward, contained in these posts:

652 responses to "Questioning the robustness of the climate modeling paradigm"

  1. Pingback: We must rely on forecasts by computer models. Are they reliable? | The Fabius Maximus website

  2. As you said Judith, there is nothing new or revolutionary about what Bakker says. The interesting part is that the climate community KNOWS that the models don’t do what they are supposed to do, but it is always possible to get funding to start a new model or continue work on an existing one. It’s as if producing models is an end in itself that will somehow please the God of Climate and persuade him/her/it not to demand human sacrifices.

  3. “The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.”

    I agree. GCMs are based on a conventional earth system theory of a non-living earth. There is another earth system theory, Gaia theory. Some thoughts on a living earth model as a way forward . . .

    http://www.theecologist.org/blogs_and_comments/commentators/2511300/climate_change_and_the_living_gaia.html

    • Thanks for your link. It provides an interesting perspective on an alternative framework/paradigm.

      The notion of millions of biology/ecology driven PDEs vs the limited number of physics and chemistry driven ones solved via the GCMs makes sense. To take the GCMs as gospel one would have to conclude that the “living” aspects of the system can be ignored except for man’s impact via GHG emissions.

      • I appreciate your comment. For various reasons Gaia theory has been all but ignored by climate scientists on both sides of the debate. Personally, I find the theory plausible and intriguing. Whether or not Gaia theory is valid, it certainly deserves fair consideration and testing.

    • Hi Lee,

      I went to the British Museum over Xmas where they have James Goldsmith’s archive and a small exhibition. It was really interesting – particularly the scientists arguing against it.

      It reinforced some earlier ideas I'd had about climate – that Life adapts to the environment – sometimes very quickly & there are microbes in the atmosphere. Recently I've looked at the CERN CLOUD experiments and noticed that molecules associated with trees improve cloud seeding. Definitely room for thought there.

    • Lee, Gaia theory is not the only other paradigm that is possible. In the philosophy of Henri Bergson and Alfred North Whitehead it is argued that mathematics and physics are inherently not able to predict the behaviour of natural systems because they have a wrong perception of time for these kinds of systems. In essence these systems are always evolving, or in a state of 'becoming', while physics and mathematics work with states of 'being' in which processes can be described by cutting them up in time. There is also no need to make a distinction between living and non-living systems in these process philosophies. It means that we will never be able to predict the future in an evolving system, except the very near future, which will more or less be the same as the present. After that it is not chaos, because that too is a mathematical concept. Predicting the future climate is thus not very different from predicting the weather or the stock market on a longer time-scale. You can write all the software you want; it just will not work out consistently.

      • Predicting nonreproducible results has fed the classic stockbroking industry since its inception and will continue to do so for a long while. If wrong, there are always unforeseen events to blame. If temperature begins to fall over the next decade, the claim will be that the natural time for the end of the interglacial is knocking. Or, perhaps, if an ocean current changes, it will of course be man.

  4. For another discussion of this subject — and recommendations — see this article by two ecologists (like climate science, a field struggling with the power and limitations of models): "Are Some Scientists Overstating Predictions? Or How Good are Crystal Balls?", Tom Stohlgren and Dan Binkley, EcoPress, 28 October 2014. Conclusion:

    Given the … large and unquantifiable uncertainties in many long-term predictions, we think all predictions should be:

    stated as hypotheses,
    accompanied by short-term predictions with acceptance/rejection criteria,
    accompanied by simple monitoring to verify and validate projections,
    carefully communicated with model caveats and estimates of uncertainties.

    Seems like good advice that could be easily adopted — but probably will be only as a result of pressure on climate scientists by people outside their community. Perhaps from the funding agencies, or Congress.

    • FM, thanks very much for this link

    • Matthew R Marler

      Fabius Maximus, thank you for the link.

    • Add my thanks for the link to the list, Fabius Maximus.

      Cheers

      • This is topic drift, but since the subject has been raised. Our website was named after Fabius Maximus (280 – 203 BC), who saved Rome from Hannibal by recognizing Rome’s weakness and therefore the need to conserve its strength. He turned from the easy path of macho “boldness” to the long, difficult task of rebuilding Rome’s power and greatness. His life holds profound lessons for 21st Century Americans.

        We advocate a conservative strategy for America. “Conservative” in strategic sense (we anger Left and Right equally).

        I started writing about climate in 2008, using it to illustrate our inability to see and think clearly about public policy issues (which has not improved since then). My recommendations start from Steven Mosher's observation "We don't even plan for the past" (source). That matches a recent comment by Roger Pielke Sr about using models to prepare for future climate:

        @FabiusMaximus01 Using the recent paleorecord, historical data and worst case sequence of observed is much better approach.

        — Roger A. Pielke Sr (@RogerAPielkeSr) February 3, 2015

    • Well, duh. But, yeah, thanks.
      =======

      • Kim

        Can you relay my thanks as well to Fabius Maximus, or FM as I expect he is known to you.
        Tonyb

      • By the way, for those that don't know, the original Fabius Maximus was a Roman dictator whose chief claim to fame was his campaign to defeat Hannibal.

        I came across this intriguing reference as to Fabian battle strategies, which seem to me to be the perfect tactics to counter the forces that will be massed in Paris later in the year in readiness for their concerted climate offensive.

        http://en.m.wikipedia.org/wiki/Fabian_strategy

        Tonyb

      • From Plutarch, just after the Trebian trouble and the traverse of Tuscany:

        ‘Besides the more common signs of thunder and lightning then happening, the report of several unheard-of and utterly strange portents much increased the popular consternation.’
        ================

      • Lost a lot of elephants along the way, didn’t he?

  5. I’m thinking that we won’t find much discussion of this part among “skeptics”:

    oops.

    Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…

    Because it doesn’t quite fit with the standard “skeptic” narrative. The standard narrative being that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden. Try to find a single thread in the “skept-o-sphere” where that standard “skeptical” narrative isn’t asserted!

    But for the same reason, I’m thinking we will find discussion among “skeptics” of this part:

    “…. but a rosy presentation has generally prevailed.”

    Because it’s a characterization that they can take on faith – even though it amounts to argument by assertion, with no need for skeptical treatment of counterarguments (that a different view has prevailed because the view being presented is analyzed and rejected).

    Too bad reasonable engagement is such a difficult achievement with issues that become so ideologically polarized.

    • ‘I’m thinking that we won’t find much discussion of this part among skeptics”:
      oops.
      Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…’
      An issue that you invented is an “oops”?
      I think we should stick to saying “oops” on our own mistakes, rather than making up ones for others to make.

      Would you call it an “oops” that this paper says forthrightly and repeatedly that the models are implicitly tuned? After all, at ATTP we are currently having a discussion where several commenters insist that the models are not tuned, since that is what realclimate says and they are prominent climate scientists.

      • miker –

        Would you call it an “oops” that this paper says forthrightly and repeatedly that the models are implicitly tuned? After all, at ATTP we are currently having a discussion where several commenters insist that the models are not tuned, since that is what realclimate says and they are prominent climate scientists.

        I followed the discussion a bit. I think that your characterization of their arguments (that they are basically nothing but appeal to authority) is not terribly accurate. But I think the discussion is interesting.

        As Judith says, there’s not really a lot new here. I have seen these discussions about modeling a number of times – and I would imagine that people who are really knowledgeable about the field have seen the discussions quite a bit. I have seen modelers from within the “consensus” community acknowledge these issues.

        Not being someone who can evaluate the technical arguments, I focus on other elements of the discussion. One of the elements that I follow here is the oft-heard refrain from “skeptics” that I see directly contradicted in the excerpt that Judith provided – as I mentioned above.

        Can you address that issue? Was there something inaccurate about my point? Do we not see it stated in practically every thread in the “skept-o-sphere” that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden?

      • Miker613, anyone who says models aren’t tuned does not know what they are talking about. All GCMs are parameterized. Parameter sets for CMIP5 were selected to give the best 10-, 20-, and 30-year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online. For parameterization, see the technical documentation for NCAR CAM3, available free online as NCAR/TN-464+STR (2004). Or read the essay Models all the way Down in the ebook Blowing Smoke.

      • Steven Mosher

        Rud

        “Parameter sets for CMIP5 were selected to give the best 10-, 20-, and 30-year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online”

        wrong.

      • Joshua, I can’t address your point because I don’t know. I expect you’re right – there is a lot of nonsense written by skeptics. And by non-skeptics. Pay no attention.

      • Steven, is this also…wrong?

        “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results.”

      • Matthew R Marler

        Steven Mosher: wrong.

        Details please. I’d take Rud Istvan’s authority over yours any day. So if you think he is wrong, please provide the details. For example, you could quote text from the sources he cited proving that he misinterpreted them.

      • Joshua:
        ” Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported…

        Because it doesn’t quite fit with the standard “skeptic” narrative. The standard narrative being that biases and ignorance of potentially important mechanisms are routinely and dutifully ignored or hidden. Try to find a single thread in the “skept-o-sphere” where that standard “skeptical” narrative isn’t asserted!

        Hmm, “routinely…reported” and “routinely…ignored” are not mutually exclusive. If “huge biases and ignorance of…mechanisms” is “routinely…reported” in the literature, and the IPCC ARs are “our best summary of the work”, why are the “huge biases” not noted in the ARs? “Ignorance of potentially important mechanisms” could perhaps be defended as present in the ARs, but I don’t think I’ve ever seen anyone acknowledge, let alone defend, this “ignorance of…mechanisms” being missing from the ARs.

      • Steven Mosher

        Matthew

        read the paper

        “Details please. I’d take Rud Istvan’s authority over yours any day.”

        Ruds claim

        “Parameter sets for CMIP5 were selected to give the best 10-, 20-, and 30-year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online”

        http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-11-00094.1

        1. There is NO SUCH THING in the document he references.
        2. Input data sets (forcings) are proscribed for the experiments.
        3. There are no “parameter sets” proscribed.
        4. That document is the overview; the actual technical description is on the CMIP5 page.
        5. The ONLY thing that comes close to mentioning 10-, 20- and 30-year periods are the EXPERIMENTAL decadal forecasts, and even there parameters are not proscribed.
        6. Different models have different tunable parameters. HADGCM for example has 32. In no way does the design-of-experiment document proscribe the setting of these parameters.

      • Steven Mosher, you perhaps misunderstood my comment. The CMIP5 experimental design required decadal hindcasting back three decades. All GCM models are massively parameterized. They must be. The NCAR CAM3 technical documentation shows what, why, and how. The individual modelers choose parameter sets for their models that give the best hindcasts. Duh. That is all. And that amounts to tuning.
        Explains why high-sensitivity models have high aerosols. And low-sensitivity models don’t. Got to find a parameter set that hindcasts reasonably well for any model to have street cred.
        This was explained with many footnotes in the Climate chapter of The Arts of Truth.
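The implicit-tuning argument in this exchange can be illustrated with a toy sketch. This is a deliberately made-up, zero-dimensional "model" with two free parameters (a sensitivity and an aerosol-forcing scale) and invented forcing numbers — it is not any real GCM or the actual CMIP5 procedure. It only shows the mechanism under discussion: selecting parameter sets by hindcast skill leaves many compensating parameter combinations that fit the same record almost equally well, which is why high-sensitivity/strong-aerosol and low-sensitivity/weak-aerosol pairings can both look credible.

```python
# Toy illustration of implicit tuning (made-up numbers; not any real
# GCM or the actual CMIP5 procedure). Two free parameters are
# grid-searched to minimise hindcast error against synthetic "obs".
import math

years = range(1975, 2006)
# Assumed toy forcings (W/m^2): GHG warming partly offset by aerosols.
f_ghg = {y: 0.03 * (y - 1975) for y in years}
f_aer = {y: 0.015 * (y - 1975) for y in years}

def hindcast(sens, aer_scale):
    """Toy model: warming = sensitivity * (GHG forcing - scaled aerosol forcing)."""
    return {y: sens * (f_ghg[y] - aer_scale * f_aer[y]) for y in years}

# Synthetic "observations" generated from one particular parameter pair.
obs = hindcast(0.8, 0.5)

def rmse(run):
    """Root-mean-square hindcast error against the synthetic observations."""
    return math.sqrt(sum((run[y] - obs[y]) ** 2 for y in years) / len(years))

# Keep every parameter pair whose hindcast error is small.
good = []
for i in range(21):
    for j in range(21):
        s = 0.5 + 0.05 * i   # sensitivity, 0.5 .. 1.5 K per (W/m^2)
        a = 0.05 * j         # aerosol scale, 0.0 .. 1.0
        if rmse(hindcast(s, a)) < 0.02:
            good.append((s, a))

# Many compensating (sensitivity, aerosol) pairs hindcast almost equally
# well: higher sensitivity is offset by stronger aerosol cooling.
sens_values = sorted({s for s, _ in good})
print(len(good), min(sens_values), max(sens_values))
```

The point of the sketch is the wide spread of sensitivities among the "good" fits: the hindcast alone cannot distinguish between them, which is the compensation-of-errors point being argued in the thread.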

      • It is a bit like when SteveMc says keep an eye on the pea; the models are not tuned, because they call tuning something else: parameterization.

      • Matthew R Marler

        Rud Istvan: The individual modelers choose parameter sets for their models that give the best hindcasts

        Where exactly is that described? I did not find it in the document that you linked to, nor in the first few documents that it linked to.

      • Steven Mosher

        Rud
        “Steven Mosher, you perhaps misunderstood my comment. The CMIP5 experimental design required decadal hindcasting back to 3 decades.”

        The DECADAL experiments were not required.
        They were experimental.
        They are started at various time steps. They are initialized with climate state variables.
        “Parameter sets for CMIP5 were selected to give the best 10-, 20-, and 30-year hindcasts per the experimental design”
        is what you said.
        It is wrong.

        As you note some models have many parameters. But the design did not proscribe these.

        The models are not tuned in any but the most vague sense of the word.
        Were they tuned, they would hindcast BETTER than they do.
        The only example of REAL tuning I know of is the FAMOUS exercise, which used a very reduced model.

      • Steve, what do they do with all the models that did not hindcast well due to parameterization?
        They went under the bus, and the ones that most closely matched the past and the desired climate sensitivity were retained.
        By the way, doing a fit is not an experiment; experiment has a defined meaning, and abusing its meaning is not helpful.

      • Matthew R Marler, it isn’t documented anywhere. That is the point. Mosher points out that HADGCM has 32 parameters. The keepers of that model are free to select any set of those 32 values. They only submit their model results to CMIP5, not the parameterization. So they fooled around to find a set of their 32 parameters that did a good job in the required hindcasting ‘experiments’ (note the perversion of the idea of an experiment). They froze their parameterization at the set giving their best hindcasts per the ‘experimental design’, then ran all the centennial stuff using that set.
        And that’s why CMIP5 missed the pause and runs hot. All the different (and unreported) parameterizations were chosen to best fit whatever model to the period of about 1975-2005 per the CMIP5 experimental design. That only makes sense if you think all the delta T over that period was caused by CO2. Which warmunists believe. Subsequent results say that was not true. The attribution problem in a different guise.
        Either Mosher knows this and was blowing smoke, or he should read more and comment less. It is sometimes hard to take one’s own advice.
        Regards, and thanks for the kind words upthread.

      • Is Mosher simply misspelling or playing one of his games when he writes “proscribed” (i.e., prohibited) where a naive reader would expect to find “prescribed” (i.e., mandated)? I can just see his later “read harder” admonition (in contrast to his usual “charitable reading” admonition) as part of Mosherball.

      • Matthew R Marler

        Rud Istvan: Matthew R Marler, it isnt documented anywhere.

        I think it likely that the modelers have good ideas about what parameter values work well for the models that they have developed for the data that they have, data that have been studied now for decades. However, you made specific claims, like this one: Parameter sets for CMIP5 were selected to give the best 10-, 20-, and 30-year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online. For parameterization, see the technical documentation for NCAR CAM3, available free online as NCAR/TN-464+STR (2004).

        and

        The individual modelers choose parameter sets for their models that give the best hindcasts.

        They didn’t “tune” the parameters for the latent heat of vaporization of water or the specific heat of dry air at standard temperature and pressure, for example, but took those and many other “physical constants” from the textbooks and review literature.

        Unless you have better references than what you have provided so far (which you may indeed have), then it looks to me like Steven Mosher is correct on this point.

      • Look! A Cloud!

      • MRM, your latest comment above indicates you did not read the NCAR GCM manual I referenced. Never mind the differential equations or the algorithmic numerical approximation methods; just read the chapter outlines to understand my point.
        Parameters are not known physical constants in GCMs. You make the mistake of thinking like a pre-normal scientist rather than a post-normal ‘climate scientist’. Climate model parameters have to do with guessing stuff the GCMs are inherently incapable of simulating. For illustrations from the NCAR CAM3 technical manual referenced above, here are some Chapter 4 (model physics) subsection headings, all direct quotes: 4.5 Prognostic Condensate and Precipitation Parameterization (humidity fail, the missing modeled tropical troposphere hotspot!); 4.7 Parameterization of cloud fraction (ah, the AR5 WG1 chapter seven cloud uncertainty); 4.9.3 Trace gas parameterization. And so forth. And that was just the Chapter 4 subsection headings! Chapter 7 is a bigger revelation: initial and boundary conditions.
        Please read either my references or my book essays. Or both.

        Parameterization has little to do with known physical constants like the latent heat of evaporation. That erroneous notion is another example of how warmunists have obscured things for the rest of us – IMO on purpose, like Mosher here, who either knew or should have known.

      • Matthew R Marler

        Rud Istvan: Matthew R Marler, it isnt documented anywhere.

        Rud Istvan: MRM, your above latest comment indicates you did not read the NCAR GCM manual I referenced above. Never mind the differential equations or the algorithmic numerical approximation methods. Just read the chapter outlines to understand my point.

        I wish you would make up your mind on this point. So far, I have not found anything that looks like “tuning” or parameters.

        Climate model parameters have to do with guessing stuff the GCMs are inherently incapable of simulating. For illustrations from the NCAR CAM3 technical manual referenced above, here are some Chapter 4 (model physics) chapter subsection headings, all direct quotes. Prognostic Condensate and Precipitation Parameterization (humidity fail, the missing modeled tropical troposphere hotspot!). 4.7 Parameterization of cloud fraction (ah, the AR5 WG1 chapter seven cloud uncertainty). 4.9.3 Trace gas parameterization. And so forth.

        So far, I have not found where those are based on “tuning” instead of physical considerations.

        Here is Appendix A from NCAR/TN-464+str(2004) (by the way, is that thing available in pdf format?):

        A. Physical Constants
        Following the American Meteorological Society convention, the model uses the International System of Units (SI) (see August 1974 Bulletin of the American Meteorological Society, Vol. 55, No. 8, pp. 926-930).

        {oops, the table of constants does not display here, but there are 23 constants listed, beginning with the Earth’s radius a = 6.37122 × 10^6 m and including the dry air density at STP and the specific heat of air at STP}

        The model code defines these constants to the stated accuracy. We do not mean to imply that these constants are known to this accuracy nor that the low-order digits are significant to the physical approximations employed.

        section 4.7 includes this: Convective cloud fraction in the model is related to updraft mass flux in the deep and shallow cumulus schemes according to a functional form suggested by Xu and Krueger [192]:

        f_shallow = k_{1,shallow} ln(1.0 + k_2 M_{c,shallow})   (4.170)

        f_deep = k_{1,deep} ln(1.0 + k_2 M_{c,deep})   (4.171)

        where k_{1,shallow} and k_{1,deep} are adjustable parameters given in Appendix C, k_2 = 500, and M_c is the convective mass flux at the given model level.
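        For concreteness, the Xu–Krueger relation above is trivial to evaluate. Here is a minimal sketch of mine (not CAM3 code), using the T42 values of k1 from Appendix C (0.07 shallow, 0.14 deep) and an invented mass-flux value:

        ```python
        import math

        def convective_cloud_fraction(mass_flux, k1, k2=500.0):
            """Xu-Krueger form of CAM3 eqs. (4.170)-(4.171):
            cloud fraction = k1 * ln(1 + k2 * M_c)."""
            return k1 * math.log(1.0 + k2 * mass_flux)

        # T42 values of k1 from Appendix C; the mass flux is illustrative only.
        m_c = 0.005
        f_shallow = convective_cloud_fraction(m_c, k1=0.07)
        f_deep = convective_cloud_fraction(m_c, k1=0.14)
        ```

        Since the relation is linear in k1, f_deep comes out exactly twice f_shallow here: the adjustable “knob” scales the model’s diagnosed cloudiness directly.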

        Then Appendix C contains in total:
        C. Resolution and dycore-dependent parameters
        The following adjustable parameters differ between various dynamical cores and model resolutions in CAM 3.0.

        Table C.1: Resolution and dycore-dependent parameters

        Parameter | FV | T85 | T42 | T31 | Description
        q_ic,warm | 8.e-4 | 4.e-4 | 4.e-4 | 4.e-4 | threshold for autoconversion of warm ice
        q_ic,cold | 11.e-6 | 16.e-6 | 5.e-6 | 3.e-6 | threshold for autoconversion of cold ice
        k_e,strat | 5.e-6 | 5.e-6 | 10.e-6 | 10.e-6 | stratiform precipitation evaporation efficiency parameter
        RH_min^low | .91 | .91 | .90 | .88 | minimum RH threshold for low stable clouds
        RH_min^high | .80 | .70 | .80 | .80 | minimum RH threshold for high stable clouds
        k_1,shallow | 0.04 | 0.07 | 0.07 | 0.07 | parameter for shallow convection cloud fraction
        k_1,deep | 0.10 | 0.14 | 0.14 | 0.14 | parameter for deep convection cloud fraction
        p_mid | 750.e2 | 250.e2 | 750.e2 | 750.e2 | top of area defined to be mid-level cloud
        c_0,shallow | 1.0e-4 | 1.0e-4 | 2.0e-4 | 5.0e-4 | shallow convection precip production efficiency parameter
        c_0,deep | 3.5e-3 | 4.0e-3 | 3.0e-3 | 2.0e-3 | deep convection precipitation production efficiency parameter
        k_e,conv | 1.0e-6 | 1.0e-6 | 3.0e-6 | 3.0e-6 | convective precipitation evaporation efficiency parameter
        dif4 | N/A | 1.0e15 | 1.0e16 | 2.0e16 | horizontal diffusion coefficient

        I can see why you don’t quote from this sucker.

        But all things considered, I do not find “tuning” in the sense most people understand it, but consideration of physics, standard physical constants, and other published literature. With evidence of changes from CAM 3.0, perhaps you could say that some of the parameter values were changed in light of model performance (“tuned” to a degree, at least, in a manner of speaking), but mostly this looks like thinking carefully and at length about the physics.

      • Matthew R Marler | February 3, 2015 at 2:27 pm |

        I wish you would make up your mind on this point. So far, I have not found anything that looks like “tuning” or parameters.

        Ummm, really. This isn’t defensible.

        http://www.researchgate.net/publication/259539033_Efficient_screening_of_climate_model_sensitivity_to_a_large_number_of_perturbed_input_parameters

        The actual atmosphere doesn’t have a setting:
        “Stratiform precipitation evaporation efficiency”

        That is a model parameter for something that is wildly variable in the actual atmosphere. The chance of it being exactly 5.0e-6 in the atmosphere on a given day at a given point is very poor.

        If you overestimate the CO2 forcing you can compensate by adjusting some of the 27 or more model parameters to get reasonable output during the training period.

        The model parameters are by definition tuning parameters. If you change them the model output is different.

      • Matthew R Marler

        PA: Ummm, really. This isn’t defensible.

        Can you find where Rud Istvan’s specific claims ( tuning to get best 15 year etc hindcasts; etc.) are supported in the references that he cites?

        The model parameters are by definition tuning parameters. If you change them the model output is different.

        The claim was that they had been specifically tuned to get the best hindcasts. The actuality seems to be that some were physical constants, others were based on reading of the physics in the published literature. That they might have been “tuned” or could have been “tuned” is not evidence in support of the claim that they were in fact tuned.

      • MRM, to a late and maybe dead thread,
        You cite code from NCAR CAM3. Bravo. Now look at the parameter constants in that cloud code. Not some lab determined physical constant, rather, stuff like cloud fraction (observationally about 2/3) which still does not adequately express net cloud feedback since that also depends on:
        1. Cloud altitude
        2. Cloud type
        3. Cloud opacity
        QED.
        Dig ever deeper, and the wonders of post-modern climate science become ever clearer. The only issue on this thread was whether GCMs were ‘tuned’ via hindcast parameterization. Well, what say you now? Having numerically identified some of those parameters in the code itself?

      • Matthew R Marler | February 3, 2015 at 6:33 pm |
        PA: Ummm, really. This isn’t defensible.

        The claim was that they had been specifically tuned to get the best hindcasts. The actuality seems to be that some were physical constants, others were based on reading of the physics in the published literature. That they might have been “tuned” or could have been “tuned” is not evidence in support of the claim that they were in fact tuned.

        Well gee, lets look at some of these physics based parameters:
        Minimum relative humidity for high stable cloud formation
        Cloud particle density over sea ice
        Initial cloud downdraft mass flux
        Evaporation efficiency parameter
        Minimum overshoot parameter
        Land vegetation roughness scaling factor

        Few of the parameters are based on physics. Some of them are synthetic parameters to make the model work. Some are constants representing atmospheric variables. The acceptable value range for some parameters is over an order of magnitude.

        If you wish to continue the “physical constants” or “based… on physics” claims please identify by name which parameters conform to your description.

      • Matthew R Marler

        Rud Istvan: The only issue on this thread was whether GCMs were ‘tuned’ via hindcast parameterization. Well, what say you now? Having numerically identified some of those parameters in the code itself?

        I think that your claim that those parameter values were tuned to get accurate hindcasts (Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 years hindcasts per the experimental design ) is not supported in the references that you cited.

        I’ll repeat something from the overview: Treatment of cloud condensed water using a prognostic treatment (section 4.5): The original formulation is introduced in Rasch and Kristjánsson [144]. Revisions to the parameterization to deal more realistically with the treatment of the condensation and evaporation under forcing by large scale processes and changing cloud fraction are described in Zhang et al. [200].The parameterization has two components: 1) a macroscale component that describes the exchange of water substance between the condensate and the vapor phase and the associated temperature change arising from that phase change [200]; and 2) a bulk microphysical component that controls the conversion from condensate to precipitate [144].

        The references to the published literature support a claim that they were doing something other than “tuning” the parameter estimates to get better fits to extant data.

      • Matthew R Marler

        PA: Few of the parameters are based on physics. Some of them are synthetic parameters to make the model work. Some are constants representing atmospheric variables The acceptable value range for some parameters is over an order of magnitude.

        I thought I had presented the entire overview, but I do not see it displayed, so I’ll put it here.

        1.2 Overview of CAM 3.0

        The CAM 3.0 is the fifth generation of the NCAR atmospheric GCM. The name of the model series has been changed from Community Climate Model to Community Atmosphere Model to reflect the role of CAM 3.0 in the fully coupled climate system. In contrast to previous generations of the atmospheric model, CAM 3.0 has been designed through a collaborative process with users and developers in the Atmospheric Model Working Group (AMWG). The AMWG includes scientists from NCAR, the university community, and government laboratories. For CAM 3.0, the AMWG proposed testing a variety of dynamical cores and convective parameterizations. The data from these experiments has been freely shared among the AMWG, particularly with member organizations (e.g. PCMDI) with methods for comparing modeled climates against observations. The proposed model configurations have also been extensively evaluated using a new diagnostics package developed by M. Stevens and J. Hack (CMS). The consensus of the AMWG is to retain the spectral Eulerian dynamical core for the first official release of CAM 3.0, although the code includes the option to run with semi-Lagrange dynamics (section 3.2) or with finite-volume dynamics (FV; section 3.3). The addition of FV is a major extension to the model provided through a collaboration between NCAR and NASA Goddard’s Data Assimilation Office (DAO). The AMWG also has decided to retain the Zhang and McFarlane [199] parameterization for deep convection (section 4.1) in CAM 3.0.

        The major changes in the physics include:

        Treatment of cloud condensed water using a prognostic treatment (section 4.5): The original formulation is introduced in Rasch and Kristjánsson [144]. Revisions to the parameterization to deal more realistically with the treatment of the condensation and evaporation under forcing by large scale processes and changing cloud fraction are described in Zhang et al. [200].The parameterization has two components: 1) a macroscale component that describes the exchange of water substance between the condensate and the vapor phase and the associated temperature change arising from that phase change [200]; and 2) a bulk microphysical component that controls the conversion from condensate to precipitate [144].
        A new thermodynamic package for sea ice (chapter 6): The philosophy behind the design of the sea ice formulation of CAM 3.0 is to use the same physics, where possible, as in the sea ice model within CCSM, which is known as CSIM for Community Sea Ice Model. In the absence of an ocean model, uncoupled simulations with CAM 3.0 require sea ice thickness and concentration to be specified. Hence the primary function of the sea ice formulation in CAM 3.0 is to compute surface fluxes. The new sea ice formulation in CAM 3.0 uses parameterizations from CSIM for predicting snow depth, brine pockets, internal shortwave radiative transfer, surface albedo, ice-atmosphere drag, and surface exchange fluxes.
        Explicit representation of fractional land and sea-ice coverage (section 7.2): Earlier versions of the global atmospheric model (the CCM series) included a simple land-ocean-sea ice mask to define the underlying surface of the model. It is well known that fluxes of fresh water, heat, and momentum between the atmosphere and underlying surface are strongly affected by surface type. The CAM 3.0 provides a much more accurate representation of flux exchanges from coastal boundaries, island regions, and ice edges by including a fractional specification for land, ice, and ocean. That is, the area occupied by these surface types is described as a fractional portion of the atmospheric grid box. This fractional specification provides a mechanism to account for flux differences due to sub-grid inhomogeneity of surface types.
        A new, general, and flexible treatment of geometrical cloud overlap in the radiation calculations (section 4.8.5): The new parameterizations compute the shortwave and longwave fluxes and heating rates for random overlap, maximum overlap, or an arbitrary combination of maximum and random overlap. The specification of the type of overlap is identical for the two bands, and it is completely separated from the radiative parameterizations. In CAM 3.0, adjacent cloud layers are maximally overlapped and groups of clouds separated by cloud-free layers are randomly overlapped. The introduction of the generalized overlap assumptions permits more realistic treatments of cloud-radiative interactions. The parameterizations are based upon representations of the radiative transfer equations which are more accurate than previous approximations in the literature. The methodology has been designed and validated against calculations based upon the independent column approximation (ICA).
        A new parameterization for the longwave absorptivity and emissivity of water vapor (section 4.9.2): This updated treatment preserves the formulation of the radiative transfer equations using the absorptivity/emissivity method. However, the components of the absorptivity and emissivity related to water vapor have been replaced with new terms calculated with the General Line-by-line Atmospheric Transmittance and Radiance Model (GENLN3). Mean absolute differences between the cooling rates from the original method and GENLN3 are typically 0.2 K/day. These differences are reduced by at least a factor of 3 using the updated parameterization. The mean absolute errors in the surface and top-of-atmosphere clear-sky longwave fluxes for standard atmospheres are reduced to less than 1 W/m$ {}^2$. The updated parameterization increases the longwave cooling at 300 mb by 0.3 to 0.6 K/day, and it decreases the cooling near 800 mb by 0.1 to 0.5 K/day. The increased cooling is caused by line absorption and the foreign continuum in the rotation band, and the decreased cooling is caused by the self continuum in the rotation band.
        The near-infrared absorption by water vapor has been updated (section 4.8.2). In the original shortwave parameterization for CAM [27], the absorption by water vapor is derived from the LBL calculations by Ramaswamy and Freidenreich [140]. In turn, these LBL calculations are based upon the 1983 AFGL line data [152]. The original parameterization did not include the effects of the water-vapor continuum in the visible and near-infrared. In the new version of CAM, the parameterization is based upon the HITRAN2k line database [153], and it incorporates the CKD 2.4 prescription for the continuum. The magnitude of errors in flux divergences and heating rates relative to modern LBL calculations have been reduced by approximately seven times compared to the old CAM parameterization.
        The uniform background aerosol has been replaced with a present-day climatology of sulfate, sea-salt, carbonaceous, and soil-dust aerosols (section 4.8.3). The climatology is obtained from a chemical transport model forced with meteorological analysis and constrained by assimilation of satellite aerosol retrievals. These aerosols affect the shortwave energy budget of the atmosphere. CAM 3.0 also includes a mechanism for treating the shortwave and longwave effects of volcanic aerosols. A time history for the mass of stratospheric sulfuric acid for volcanic eruptions in the recent past is included with the standard model.
        Evaporation of convective precipitation (section 4.1) following Sundqvist [169]: The enhancement of atmospheric moisture through this mechanism offsets the drying introduced by changes in the longwave absorptivity and emissivity.
        A careful formulation of vertical diffusion of dry static energy (section 4.11).

        Other major enhancements include:

        A new, extensible sea-surface temperature boundary data set (section 7.2): This dataset prescribes analyzed monthly mid-point mean values of SST and ice concentration for the period 1950 through 2001. The dataset is a blended product, using the global HadISST OI dataset prior to 1981 and the Smith/Reynolds EOF dataset post-1981. In addition to the analyzed time series, a composite of the annual cycle for the period 1981-2001 is also available in the form of a mean “climatological” dataset.
        Clean separation between the physics and dynamics (chapter 2): The dynamical core can be coupled to the parameterization suite in a purely time split manner or in a purely process split one. The distinction is that in the process split approximation the physics and dynamics are both calculated from the same past state, while in the time split approximations the dynamics and physics are calculated sequentially, each based on the state produced by the other.

        I don’t find any support in that document for Rud Istvan’s claim that the parameters had been “tuned” to get the best hindcasts. What I find supports the idea that they were trying to “get the physics right”.

      • Matthew R Marler

        PA: If you wish to continue the “physical constants” or “based… on physics” claims please identify by name which parameters conform to your description.

        In response I quoted the entire Overview, but it is not displaying, so I’ll just put the link: http://www.cesm.ucar.edu/models/atm-cam/docs/description/node8.html

        I think it is clear that they were trying to “get the physics right”.

      • Hope this thread right. Yup. Read book, get back. Otherwise too many individual rebuttals without footnotes.

      • Matthew R Marler

        Rud Istvan: All GCMs are parameterized. Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 years hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online. For parameterization, see the technical documentation to NCAR CAM3, available free online as NCAR/TN-464+STR (2004). Or read essay Models All the Way Down in ebook Blowing Smoke.

        I have read those documents now, and I have found nothing that supports your claim that the parameter sets were selected to give the best 10, 20, and 30 year hindcasts. I have quoted extensively from the technical documentation contradicting your claim.

        You want I should buy your book as a substitute for the primary literature? Perhaps you could give it to me? That’s just as reasonable if you can not supply us with even one quote from the technical documentation that supports the claim that you posted here.

      • Matthew R Marler

        A quick comment on my attitude, in case anyone is still reading. I am suspicious that the free parameters may have been tuned to get good performance on extant data, at least in some vague fashion other than least-squares estimation or another estimation algorithm. However, modelers deny that, and I can’t automatically judge all of them to be liars. For the specific claim made by Rud Istvan, challenged by Steven Mosher and repeated several times by me, I can not find Rud Istvan’s claim supported in the documents that he cited, from which I quoted extensively. So I am in a kind of limbo.

        Let’s just stop the arguing and put it into technical terms: ‘Parameter settings and scope of differing runs are chosen to suit the required investigative experiments about to be done.’

      • Parameterization.

        In the real world of engineering we call those model parameters “fudge factors”. And do we change them in order for the model to perform better? It would be kind of stupid if we didn’t. Sometimes we have to invent fudge factors without knowing why; Einstein had to invent something called the cosmological constant. Empiricism rules. Toy models drool.

      • Rud Istvan | February 2, 2015 at 12:56 pm |

        “anyone who says models aren’t tuned does not know what they are talking about”

        Yup.

      • Marler, with a seemingly serious intent, writes:

        But all things considered, I do not find “tuning” in the sense most people understand it, but consideration of physics, standard physical constants, and other published literature.

        Most people understand tuning as adjusting the tension of a string on a guitar so it makes the proper musical note when plucked. Or maybe for older folks turning the dial in a radio to get the channel you want. Stay tuned to this station!

        So Marler, even you might be able to imagine the need in a climate model for a tunable parameter called albedo. The earth’s average albedo isn’t fixed, and it isn’t well understood how/why/when it changes, how much it changes, or what its precise value is at any given time.

        See here for more background on this very important fudge factor.

        http://www.atmos.umd.edu/~zli/PDF_papers/94JD00225_Albedo.pdf

        “Estimation of Surface Albedo From Space: A Parameterization for Global Application”

        Given global albedo estimates are generally 35% ± 3%, what value does the modeler choose for it? After all, we’re talking about a range of 6% of the average solar constant: (340 W/m2 × 0.06) = 20.4 W/m2 in the range of uncertainty. ALL human-attributable forcings are estimated to be a mere 3.5 W/m2, so the uncertainty in how much solar energy is rejected by reflection, without ever affecting global climate, is 6 times the total alleged human influence on climate.
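        The arithmetic in that paragraph, spelled out (using the commenter’s figures as given, not as authoritative values):

        ```python
        mean_insolation = 340.0  # W/m^2, global-average solar input as cited above
        albedo_spread = 0.06     # the +/-3% band around ~35%, i.e. 6 percentage points
        reflected_uncertainty = mean_insolation * albedo_spread  # 20.4 W/m^2
        human_forcing = 3.5      # W/m^2, total anthropogenic estimate cited above
        ratio = reflected_uncertainty / human_forcing  # ~5.8, "6 times" in round numbers
        ```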

        Are you getting the picture, now Matthew? Listen to Rud. Ignore Mosher.

      • Matthew R Marler

        David in TX: Are you getting the picture, now Matthew? Listen to Rud. Ignore Mosher.

        I am open to the possibility that the models may have been tuned. The specific claims made by Rud Istvan are not supported in the documents that he cited. On this point Steven Mosher is correct.

        I am also open to the possibility that anthropogenic CO2 may be warming the Earth surface and atmosphere. Many of the specific claims made by the proponents of the theory of AGW are not supported by the science.

        Do you not understand that specific claims require adequate support?

      • Steven Mosher

        Rud is wrong again.
        And springer gets the assist on the own goal.

        I suggest that rud actually download a gcm
        I suggest he join the user group
        I suggest he plow through code. I started in 2007.
        I suggest he look at metadata requirements for
        Cmip submission to find some things he thinks there is no record of.

        I mean something very specific by tuning.
        The systematic adjusting of uncertain parameters to force a match between model outputs and observations.

        If someone has 32 knobs to turn and tweaks a couple
        to match 20% of the observations… That’s not tuning. It’s not curve fitting. Calibration might be a better term. Or BFM.

        If someone has 32 knobs and they systematically turn them to force an agreement between the model and the entire time series of observation.. Then they have tuned or fit the curve.

        You think they tuned?
        The hindcast isn’t that good.

        Very simple point
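        The distinction Mosher is drawing can be made concrete with a toy example (entirely invented numbers, no relation to any actual GCM): systematically sweeping an uncertain parameter to minimize mismatch against the whole observation series is tuning/curve fitting in his strict sense.

        ```python
        # Toy "model": output is one uncertain parameter k times a forcing series.
        def toy_model(k, forcing):
            return [k * f for f in forcing]

        def sum_sq_error(k, forcing, obs):
            return sum((m - o) ** 2 for m, o in zip(toy_model(k, forcing), obs))

        forcing = [1.0, 2.0, 3.0, 4.0]  # invented
        obs = [0.8, 1.7, 2.5, 3.1]      # invented "observations"

        # Systematic sweep of the knob against the full series: tuning.
        best_k = min((i / 100 for i in range(1, 201)),
                     key=lambda k: sum_sq_error(k, forcing, obs))
        ```

        Hand-tweaking a couple of knobs against part of the record (Mosher’s “calibration”) leaves the rest of the fit to chance; the sweep above forces agreement everywhere it can.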

      • Steven Mosher

        I like springer’s guitar example.
        To tune a guitar you twist a knob until the note produced matches a reference note.

        Look at gcm output a single run.
        Compare it to the reference.
        See how frickin out of tune it is.

        It would be like springer passing you a guitar that was off by half a note and then you accused him of tuning it.

        Well he played with knob. Wouldn’t be the first time

        The blog owner, who happens to be a PhD climate scientist and former department chair of Earth Sciences at Georgia Tech, with published papers dealing with climate modeling, disagrees with Steven Mosher, who has zero professional credentials in any field of science.

        https://judithcurry.com/2013/07/09/climate-model-tuning/

        JC comment: This paper is indeed a very welcome addition to the climate modeling literature. The existence of this paper highlights the failure of climate modeling groups to adequately document their tuning/calibration and to adequately confront the issues of introducing subjective bias into the models through the tuning process.

        Tuning/calibration is unavoidable in a complex nonlinear coupled modeling system. The key is to document the tuning, both the goals and actual calibration process, in the manner in which the German climate modeling group has done.

        Stop pretending to be an expert, Mosher. You’re not going to win this argument by redefining what is understood as model tuning by professional experts in the art. If your goal is to look like an uninformed, argumentative piker, then mission accomplished.

      • oops, wrong thread:
        Nice find David in Tx. I forgot to search here first again, dang!

      • http://www.gfdl.noaa.gov/news-app/story.78/title.cloud-tuning-in-a-coupled-climate-model-impact-on-20th-century-warming

        Climate models incorporate a number of adjustable parameters in their cloud formulations that arise from uncertainties in cloud processes. These parameters are tuned to achieve a desired radiation balance and to best reproduce the observed climate.

        Oh dear. Another august source that disagrees with the ill-informed piker.

      • Steven Mosher | February 5, 2015 at 3:32 pm |

        Look at gcm output a single run.
        Compare it to the reference.
        See how frickin out of tune it is.

        Haha. Obviously you fail to recall the old saying “you can tune up a pig, but it’s still a pig” and also “you can’t tune a sow’s ear into a silk purse”.

        You don’t think if someone could come up with a set of tunable parameters in credible ranges that better matched the observed climate that they wouldn’t do it? You want us to believe that it’s for lack of trying by about a million climate scientists who’d win a Nobel for it if they found the magic model that could faithfully reproduce the observed climate? One would have to be an incredibly dense knob to swallow that line of BS. Of course that doesn’t rule you believing it…

      • Matthew R Marler

        David in TX, from the same blog post of Prof Curry that you cited:

        In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

        To me, that is different from the specific claim of Rud Istvan that the parameters had been tuned to get the best fitting 10 year, etc hindcasts. And as described on the overview page of the specific document that he cited in support, it was shown that the “tunable” parameters (i.e. the parameters other than the “physical constants”) had values selected based on other published papers about the physics being modeled.

        And even the quoted text is an inference, not citing any descriptions of any actual tuning of any particular set of parameters for any particular model by any particular group.

      • Matthew R Marler

        David in TX, from the gfdl.noaa web page that you linked: We investigate the impact of uncertainties in cloud tuning in CM3 coupled climate by constructing two alternate configurations (CM3w, CM3c). They achieve the observed radiation balance using different, but plausible, combinations of parameters. The present-day climate is nearly indistinguishable among all configurations. However, the magnitude of the indirect effects differs by as much as 1.2 Wm−2 , resulting in significantly different temperature evolution over the 20th century (Figure below). CM3 predicts a warming of 0.22 °C. CM3w, with weaker indirect effects, warms by 0.57 °C, whereas CM3c, with stronger indirect effects, shows essentially no warming. CM3w is closest to the observed warming (0.53-0.59 °C).

        They provided several sets of “tuned” parameters in order to explore the quantitative uncertainties in the projections of cloud cover changes. They did not tune the parameters to achieve the best fit to particular sequences of data, as asserted by Rud Istvan.

      • ‘tuning’, ‘fitting’, semantics.

        At the end of the day I prefer ‘fudge factor’.

        http://www.cesm.ucar.edu/events/workshops.html

        Feel free to sift through the presentations. I think you will find half or more of the presentations are about toying with parameterizations.

        I just remember sitting through a few of these. Let me recap the average gist:

        “We played with our parameterization, fiddled with our parameters and ran the model. These values gave the best agreement (with data or other models). However this bias reared its ugly head. So we constrained the solution to fit the other model everywhere except in this area and played with the parameter. This was what we found fit best. We got pretty good agreement.”

        A few years of these conferences and one might understand my derision for concluding anything from these models.

        I lost track of this argument a couple of days ago and don’t feel like reading all this again, but hopefully Matt will indulge me with a reply to my question re his statement:

        “They did not tune the parameters to achieve the best fit to particular sequences of data, as asserted by Rud Istvan.”

        Why did they tune the parameters?

      • You can tune a piano, but you can’t tune a climate model?

      • Well my take away from this conversation is as follows:

        It’s only tuning if you get it dead right. But tuning is a BAD THING and so of course no real scientists do it.

        If you don’t get it dead right it’s called parameterisation, which sounds so much more sciencey and of course is a GOOD THING. All real scientists do it.

      • Matthew R Marler

        Don Monfort: Why did they tune the parameters?

        They tuned the parameters whose values were not published in standard texts and tables. Parameters whose values have been well studied and measured are called “physical constants”, such as the density and specific heat of dry air at STP. Others were updated in response to publications.

      • Matthew R Marler

        nickels: Feel free to sift through the presentations. I think you will find half or more of the presentations are about toying with parameterizations.

        Something that Rud Istvan did not write is certainly supported by presentations that he did not cite.

      • Thanks, Matt. I reviewed some of the discussion and found that Rud had recharacterized the comment that started the argument:

        “The only issue on this thread was whether GCMs were ‘tuned’ via hindcast parameterization.”

        Are you saying that the reference he gave doesn’t prove that, or that GCMs are never ‘tuned’ via hindcast parameterization?

      • Matthew R Marler

        Don Monfort: Are you saying that the reference he gave doesn’t prove that,

        In the references that he gave, I did not find support for his claim that the models had been tuned to give the best 10, 20, and 30 year hindcasts. The excerpts that I copied and pasted here suggest to me a more thoughtful and physics-oriented approach to parameterisations, including the citations of other published papers.

        It will be interesting to see whether, and how, the modeling community responds to Greg Goodman’s analysis of the Mt Pinatubo eruption in the post of today.

      • Thanks, Matt. I get your point. But I am not sure that a thoughtful and more physics-oriented approach doesn’t involve trying a lot of iterations of plausible tunings of various parameters to get the best 10, 20, and 30 year hindcasts.

        Did you watch the video on HadCM3 that Mosher recommended?

        Yes, it will be interesting to see if there is a reaction from the goon squad to Greg’s work and also to Nic Lewis’s deconstruction of the latest excuse for the pause, over on CA. My guess is they will endeavor with all their sciency might to studiously ignore the unpleasantness.

    • Engagements become more strained when one side of a dialog opts to silence the other. Like when the duct tape is being placed over one’s mouth.

      • And they take off the duct tape only to call men in white coats to analyze the poor soul’s illness.

    • Joshua, I would say that both are true. The biases and shortcomings are routinely discussed in the technical literature on the models but ignored by the CAGW advocates. Note that the technical discussions are very technical hence very difficult to convey to non-experts.

      • David –

        ==> “but ignored by the CAGW advocates. ”

        When you use such undefined terms and make such broad characterizations based on them, I don’t know how to proceed.

        I could easily say that the shortcomings recognized in the technical literature are ignored by anti-mitigation advocates.

        That does not seem like a way forward to me, but IMO, resembles identity-aggressive/identity-defensive behaviors.

        What do you think of the author’s proposed alternative strategies?

      • Thus the need for the executive summary for mass consumption (AP wire).

      • Joshua, can you please give chapter and verse in the IPCC summaries where these “huge biases and ignorance of potentially important mechanisms” have been “routinely and dutifully reported”?

      • Steven Mosher

        In general the modeling experts do a better job than skeptics of criticizing the models.

        The difference is tone.

        Expert: This is a problem in the model.
        Skeptic: It’s a fraud, a disaster, destruction of the scientific method, a travesty; they are ignorant gold-digging socialists.

      • Eli said the same thing.

      • Steven Mosher : “In general the modeling experts do a better job than skeptics of criticizing the models.”

        I am both a sceptic and someone who has been paid to develop computer models – for financial purposes, admittedly.

        I can assure you that if I had handed those who invested in my expertise some schtick about how the models couldn’t be expected to bear any resemblance to reality, they would certainly not have paid me.

        But there again, I work in the real world, so the criteria are vastly different.

        You want to try it some time.

    • You seem to have a structural conception of how skeptics will behave. Almost like a model.

    • Rud is definitely correct, and anything in the ‘Model Physics’ chapter is probably fair game for something that is determined experimentally and is basically a ‘fudge factor’.
      But I can’t really help MM because I don’t know anything about the fitting process. I suspect if you have the time to follow the various references in chapter 4 of the CAM manual you will find the fitting procedure.

      The one thing I do know is that these ‘fudge factors’ are not grid independent and must be recalculated when grid size changes. Which leads me to believe they are true ‘fudge factors’ and do not have any actual units.

      I know this doesn’t really help, but having sat through enough CCSM (what it used to be called) talks I know Rud is correct.

      • For instance, here is a discussion of tuning the physics package for a different grid resolution…

        http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.422.141&rep=rep1&type=pdf

      • Another paper discussing comparisons of CAM cloud physics parameterizations with field data and short range weather forecasts.

        “Testing cloud microphysics parameterizations in NCAR CAM5
        with ISDAC and M-PACE observations”

        Not saying this answers MM’s questions, but it will hopefully give at least a taste of the hocus-pocusery that goes on in these models with their various ‘fudge factors’.

        http://www.ecd.bnl.gov/pubs/BNL-96872-2012-JA.pdf

        “Given the uncertainty in representing various cloud processes in climate models…”

        “MG08 includes the treatment of subgrid cloud variability for cloud liquid water by assuming that the probability density function (PDF) of in-cloud liquid water follows a gamma distribution function.”

        “subgrid variability of cloud liquid”

        “The temperature of homogeneous freezing of rain was changed from −40°C in the original MG08 scheme to −5°C in the released version of CAM5 in order to improve the Arctic surface flux and sea ice in the coupled climate simulations [Gettelman et al., 2010]. We note that this change has no physical basis,…”

        “There are still large uncertainties in the mechanisms of ice nucleation,…”

        “…partially related to the subgrid-scale dynamics that are not resolved in large-scale models…”

        “The modeled cloud fraction, phase and spatial distribution of cloud condensates have a significant impact on modeled radiative fluxes…”

        “In general, CAM5 underpredicts the observed downwelling SW flux by up to 100 W m−2…”

        “Lognormal fitting parameters for the best estimate aerosol particle size distribution are given in Table 1…”

        “There is a threshold size separating cloud ice from snow (Dcs), which is largely a tuning parameter…”

        “CAM5 severely underestimates aerosol optical depth (AOD) by a factor of 5–10…”

        I mean, look. I don’t know squat about cloud microphysics, and modelling them as outlined in this paper is truly fascinating and interesting work.

        However, I know enough from reading this paper to scoff at the statement:
        “The science is settled….”

        http://www.ecd.bnl.gov/pubs/BNL-96872-2012-JA.pdf
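        The gamma-PDF assumption quoted above can be sketched numerically. This is a hedged illustration only: the distribution shape follows the quoted MG08 description, but the grid-mean liquid water content and the shape parameter below are made-up values for illustration, not anything taken from CAM5.

```python
import math

# Sketch of the MG08-style assumption quoted above: in-cloud liquid water
# within a grid box is not uniform but follows a gamma distribution whose
# mean equals the grid-box mean. The shape parameter nu is a made-up
# illustrative value, not the one any released model uses.

def gamma_pdf(q, mean, nu):
    """Gamma PDF with mean `mean` and shape `nu` (scale = mean/nu)."""
    theta = mean / nu
    return q ** (nu - 1) * math.exp(-q / theta) / (math.gamma(nu) * theta ** nu)

def grid_mean(mean, nu, qmax=None, n=200000):
    """Numerically integrate q * pdf(q); should recover the grid-box mean."""
    qmax = qmax or mean * 50
    dq = qmax / n
    return sum(q * gamma_pdf(q, mean, nu) * dq
               for q in (dq * (i + 0.5) for i in range(n)))

if __name__ == "__main__":
    mean_lwc = 0.2   # hypothetical grid-mean liquid water content, g/kg
    nu = 2.0         # hypothetical shape parameter (controls subgrid spread)
    print(grid_mean(mean_lwc, nu))  # integrates back to ~0.2, the grid mean
```

        As the quoted excerpts suggest, a scheme built this way integrates its process rates over the assumed PDF rather than using the grid mean directly, which is how the shape parameter ends up acting as a tunable knob.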

    • Nice find David in Tx. I forgot to search here first again, dang!

  6. Pingback: The Climate-Modeling Paradigm | Transterrestrial Musings

  7. I would like to know how many climate papers rely on the assumption that climate models are correct as a starting point. Very likely over half the entire science would be null if we accept the truth that these models are, in fact, not right.

    • nickels –

      ==> “Very likely over half the entire science would be null if we accept the truth that these models are, in fact, not right.”

      Is that what you get from these excerpts – that the author thinks that “the truth” is that these models are, in fact, not right – beyond in the sense that it is “true” that all models are wrong?

      • Models are cool, but wrong.
        So any paper that runs models and draws conclusions from their output will be wrong (although possibly interesting).
        So the question is simply: if we look at the literature, how many papers does this make interesting but irrelevant…

      • Joshua

        The author thinks that the models are insufficiently accurate to reach meaningful conclusions about future conditions. Therefore, scientists who have written assessments that future conditions will be significantly worse as a result of higher levels of CO2 and resulting temperature increases have reached conclusions based on poor models, and their assessments are generally invalid.

      • (Not talking, of course, about papers that use models to examine structural and sensitivity issues.)
        I’m talking about papers that use model output as predictions…

      • Rob –

        It is an interesting post based on an interesting paper. I’ve seen the issues discussed before, and based on seeing smart and knowledgeable people weigh in on different sides of these issues, I’m not inclined to use this one author’s opinion as dispositive.

        At any rate, what do you think of the author’s suggested alternative strategies to cope with climate related risks?

      • Joshua
        “what do you think of the author’s suggested alternative strategies to cope with climate related risks?”

        Some of the ideas seem like they make sense, but something making sense does not mean it will be implemented.

        The construction and maintenance of good infrastructure is the #1 thing that can be done to avoid damage from adverse weather. A very large portion of the world puts very little priority on this issue. It simply doesn’t matter much what else is done if nations don’t do that.

      • Rob –

        => “Some of the ideas seem like they make sense, but something making sense does not mean it will be implemented.”

        That’s interesting – because I think that a lot of what the author discusses closely resembles what I have discussed with you many times – where I talk about the implicitly subjective simplifications in the “mental modeling” that is unconsciously confused with reality – only for you to tell me that it makes no sense.

        At any rate, “surprise-free scenario-planning” is pretty much what I suggest in these threads on a regular basis – only to get in return all sorts of derision and pejoratives.

        It’s a good thing I consider all that dreck to be based on implicitly subjective simplifications in my interlocutors’ mental modeling, or my feelings might get hurt.

        :-)

      • @nickels says

        ‘Models are cool, but wrong’.

        My understanding is the opposite. They are hot and wrong.

        Mother Gaia is stubbornly refusing to cooperate with their high-end Thermogeddonist predictions and has sat on her hands for fifteen years or so.

      • Rob –

        From the paper:

        Scenario planning, like other frameworks that aim to guide decision makers to think about (deep) uncertainties (e.g. Cash et al. 2003; Tang and Dessai 2012), stresses the importance of user involvement during the entire development phase.

        If I had a quarter for every time I’ve said essentially that in these threads (only to be insulted in return by my much beloved “denizens”), I could…well…go out for a steak dinner tonight (if the plow ever arrives)….

      • “They are hot and wrong.”
        ha!
        I was thinking, as an example, about one of the papers here a few months back that did a bunch of model runs to try and determine when we would see the CO2 signal pull away from the ‘background’ signal.
        What possible sense could such a paper have if it’s using models as the underlying driver?

      • Joshua

        ““surprise-free scenario-planning” is pretty much what I suggest”

        And I suggest what you have written is meaningless in regards to the implementation of policy. Just because something is possible does not mean that people take actions in response.

        Limited resources force decisions to be made regarding priorities.

      • Rob –

        ==> “And I suggest what you have written is meaningless in regards to the implementation of policy. ”

        Well, he speaks to some of the practical limitations, and offers some alternatives. I’m not in complete agreement with his analysis.

        At any rate…

        ==> “Just because something is possible does not mean that people take actions in response.”

        Right. Some people are absolutely insistent on not taking action (until unrealistic criteria are met, criteria based on subjective, simplistic mental modeling that doesn’t reflect reality).

        ==> “Limited resources force decisions to be made regarding priorities.”

        I love how one day, conz accuse libz of “limited pie” thinking and then the next day talk about how the pie is limited.

      • Joshua

        I have been consistent in what I have written. I evaluate your position based on what you write, not because there are other alarmists who write unsupported positions. The US needs to seek to balance its budget. That needs to be taken into consideration when evaluating realistic alternatives.

        A poor model provides very little useful information. I don’t see how the outputs of the current GCMs are any better than merely looking at historical records and taking into account expected population changes.

        Joshua–How about writing specifically what policies you think the US should enact in response to AGW in the next 5 years???

      • As someone who has been engaged with discussions of policy, I can say that Joshua has been consistent in wanting scenario planning to include broad based user involvement during the planning phase. He and I don’t always agree on policy, but both of us have agreed to broad based comprehensive planning.

      • John writes- “both of us (he and Joshua) have agreed to broad based comprehensive planning.”

        Can you think of anyone against such planning? Isn’t the issue that there are limited resources to fund things and tough choices have to be made?

      • I confess I typically jump over your comments, but I don’t recall you ever writing anything similar to that.

      • Joshua,

        You write:-
        “Scenario planning, like other frameworks that aim to guide decision makers to think about (deep) uncertainties (e.g. Cash et al. 2003; Tang and Dessai 2012), stresses the importance of user involvement during the entire development phase.
        … every time I’ve said essentially that ”

        I don’t think you ever have said that. The users are the politicians who re-write the IPCC SPMs every time. Not little us. We’re just fodder … And climate scientists are just the doers ….

    • Nickels

      About as many as assume the data they use as a starting point is correct.

      Tonyb

    • “I’m not inclined to use this one author’s opinion as dispositive.”

      I’d say that’s wise, Josh. And yet none of this is new. The scary predictions we read about on a regular basis in the NYT’s are generally model based.

      Yet the models do not seem to be performing well…to be polite.

      Therefore:……..

      I’ll let you fill in the blanks

    • Heh, ‘state-of-the-art fully coupled AOGCMs do not provide
      independent evidence for human induced climate change.’
      Artful minxes!

      What Nassim Taleb calls the nerd effect, failing to take
      into account external uncertainty, the unknown unknowns,
      ‘stems from the mental elimination of off-model risks or
      focusing on what you know. You view the world from within
      a model.’

      The Black Swan, Ch. 10.

  8. Judith, your comment about such a thesis probably not being possible in the US is very worrisome. ‘Political correctness’ has been a large and growing problem in the social sciences for decades. See, for example, the NAS report to the University of California regents at http://www.nas.org/images/documents/A_Crisis_of_Confidence.pdf
    That it has intruded into earth sciences is not good at all. Eisenhower’s farewell address did not only warn about the military-industrial complex. It warned about the dangers of research becoming politicized by heavy dependence on federal funding. And now here we are, with the federal climate gravy train producing climategate, and such severe PC that Soon and Briggs had to go to the Chinese Academy of Science to get their irreducibly simple sensitivity model (an equation with just 5 parameters) published.

    • Agree! But your link doesn’t seem to work for me.

    • Got it. Just read the Intro. Looks really interesting.

      I’m glad I’ve got my blood pressure under control.

    • Rud, I have read through most of the NAS report. I find it to be a compelling document. Thanks for pointing it out.

      Has there been any response from the UC Regents? I would suspect that most of them didn’t even read it and dismissed it as the ravings of a bunch of conservatives. The politicization and radicalization of our education system is a particularly insidious and difficult nut to crack.

      • Not to my knowledge. But for sure the problem is not limited to U. Cal. Look at Oreskes hire by Harvard out of that corrupted system.

        It slow-poisons the next generation, as Judith suggested. My daughter experienced it personally. But she learned much from the experience. My hope is that many others might, also.

    • Rud

      Mark found the correct link. I just wanted to say that I actually lived through a major tipping point in the politicization of the university in the early 1980s. By that time, faculty tenure committees routinely selected candidates of a certain ideology in the social sciences. I don’t know if that was true in the physical sciences – I think it was not. I was disappointed by the absence of true political and philosophical debate that I had naively expected at a university. The thought and speech police were everywhere; it was oppressive. I heard an interview of the author in which he stated that he thinks most students are careerists who keep their heads low and avoid controversial topics – they go along to get along.

  9. Willis Eschenbach

    I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.

    I gotta say, that’s one of the saddest and truest comments
    I’ve ever read in this entire climate discussion … somewhere, Einstein is weeping.

    w.

  10. The natural resource of the US was its peoples’ faith in the ethic of Americanism. It’s the peoples’ faith that created the wealth and with it, a government-education complex that has now become an institutional curse.

      • It is the Academia/UN alliance, a wedding of ideals that are grounded in their mutual opposition to the despised, foundational principles of Americanism (which is based on a respect for individual liberty and the need for personal responsibility, guided by a Judeo-Christian heritage and the liberal philosophy of the founders that are now branded by the Left as, conservative ideals).

  11. “it might be questioned whether the state-of-the-art fully coupled AOGCMs really are the only credible tools in play”. Wrong question. As I and others have explained a number of times here and at wattsupwiththat, the GCMs are not credible as climate models. They try to replicate climate using small space-time slices, which means that they aren’t climate models at all. They are simply weather models, and not very good ones. The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model. Even the good weather models can only predict a few days ahead. A real climate model would not have small space-time slices, but it would have coded the things that drive climate: orbit, solar activity, GCRs, clouds, GHGs, ocean oscillations, atmospheric circulations, etc, etc. Unfortunately, most of those things are not understood yet, so a reasonably functional climate model is not yet possible.
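    The “exponential build-up of errors” claim can be illustrated with a toy chaotic system. A minimal sketch, with the logistic map standing in for a real atmospheric model (an assumption for illustration only; this is not GCM code):

```python
# Toy illustration of exponential error growth under chaos.
# The logistic map with r = 4 is a standard chaotic example; it is a
# stand-in for the initial-condition-sensitivity argument, nothing more.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def divergence(x0, perturbation=1e-10, steps=60):
    """Step-by-step gap between two runs started a tiny perturbation apart."""
    a = logistic_trajectory(x0, steps)
    b = logistic_trajectory(x0 + perturbation, steps)
    return [abs(u - v) for u, v in zip(a, b)]

if __name__ == "__main__":
    d = divergence(0.3)
    # The initial 1e-10 gap grows roughly exponentially until it saturates
    # at the size of the attractor itself.
    print(d[0], d[20], d[-1])
```

    Two trajectories starting 1e-10 apart roughly double their separation each step, so within a few dozen steps the “forecast” is no better than a random draw from the attractor. The debate in the replies below is essentially over whether time-averaged statistics (climate) remain meaningful even when individual trajectories (weather) do not.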

    • Steven Mosher

      “The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model. ”

      Wrong.

      All GCMs are able to predict the climate.
      every last one of them.
      We can even test the predictions.
      Look, the predictions are high.

      One way to use this is simple.

      If I have a model that starts in 1850 and predicts 2014 a little bit high
      what can I do?
      What can I say about its prediction in 2100?
      Suppose it predicts 3C in 2100.

      Well, That’s information. I might say, that +3C is an upper boundary.
      Unless of course you want to argue that i should plan for MORE than 3C of warming.

      And on the other hand I might just run a statistical model and it would say
      1C of warming.

      So I have one approach that is biased high, and another approach that will miss any acceleration.

      Personally, I’d be happy using the high side estimate. Don’t tell me I can’t, ’cause I use high side estimates every day in business. Every fricking day I use a model that is wrong to the high side. If my boss suggests a decision that is above the high side model, I remind him that my best model, which is always biased high, is below the figure he wants to use.

      GCMs can predict.
      GCMs do predict
      Whether or not you can use the prediction is a pragmatic decision.

      pragmatic decisions cannot be made in a vacuum.
      They are made in a space where alternatives are offered.

      Unless you have an alternative, you don’t get to play the game.
      sorry.

      • I can ask the politician to stop funding their game. They can go back to modelling the climate in the pub after a few pints of ale.

        My supervisor used to tell the research group that if we couldn’t explain the importance of our work to a truck driver in a honkytonk bar in Texas then we should ask ourselves why we were doing it and why said truck driver should fund it.

      • Software has to be proved correct. Not the other way around.
        Unvalidated, untested software is just unvalidated untested software.
        Anything it says is no better than a wild guess.

      • “Personally, I’d be happy using the high side estimate.”

        I used to have one customer service rep who always quoted longer than expected to get a shipment out, in order to never disappoint. I had another rep who would over-promise delivery estimates, trying to please the customer on the phone. I set a policy that we always give our customers our true best estimate. I had never realized before that this is not natural human nature. There are always temptations to misuse power through misinformation (usually for the best of reasons).

      • I remember hearing that a couple of decades ago the airlines, plagued by late arrivals and missed connections, all brought their on-time percentages way, way up within a single year. How? By adding a buffer to every scheduled arrival time.
        Cheap trick? No. It was a good idea: it turned out that customers would much rather wait around an extra half an hour in an airport than miss a connection.

      • Mosher writes: “GCMs can predict. GCMs do predict. Whether or not you can use the prediction is a pragmatic decision.”

        Steve, sometimes you seem to be intentionally obtuse in your comments. Do GCMs make you better at determining whether any place on the planet will be better off or worse off as a result of AGW at some point in time?

        What modelled output from a GCM would lead you to conclude with reasonable confidence that any particular place on the planet will have a worse climate as a result of AGW in say 100 years?

      • “sometimes you seem to be intentionally obtuse in your comments”

        That’s because he is.

        Andrew

      • Look, I disagree with mosher on many things, but I don’t think that it could be said that he’s intentionally obtuse.

        :-)

      • Matthew R Marler

        Steven Mosher: Unless you have an alternative, you dont get to play the game.

        Luckily, we have many alternative models. And we are blessed to live in a republic where everyone can vote, or correspond with elected officials, or otherwise “play the game.”

        Besides that, we have the cautionary tales from history of the best available models being (catastrophically) wrong, as was the case with the Tacoma Narrows Bridge, and the model-based projections of Malthus and the neo-Malthusians. The “precautionary principle” tells us that we should not trust the models before they have been shown to be accurate.

      • When each particular quill stand on end like quills upon the fretful porpentine.
        ============

      • “Well, That’s information. I might say, that +3C is an upper boundary.”

        For all we know a circulation change could occur in 5 years and it could warm 10C….

        Once a simulation reaches 100% error (as a climate model will after a few months) any predictive value is nil. You’re just looking at discretization noise at that point, nonsensical fluctuations.

        Yeah, you can call it a prediction and use it, but I’m not sure I see the point unless its to try and pull a fast one.

        The alternative is to simply admit you don’t know what’s going to happen, which is, in fact, the truth.

      • Steven Mosher. I do have an alternative. In my model, tomorrow is the same as today. It is more accurate than any of the GCMs. Go on, just test it. You will find that over time, it shows less error than any GCM.
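        Mike Jonas’s “tomorrow is the same as today” model is the classic persistence baseline. A minimal sketch of how such a baseline is scored (the temperature series below is made up, purely for illustration):

```python
# Persistence baseline: forecast(t+1) = observation(t). The usual test of
# any forecast model is whether it beats this out of sample. Data invented.

def persistence_forecast(series):
    """Forecast each value as the previous observation."""
    return series[:-1]          # predictions for series[1:]

def mean_abs_error(pred, actual):
    """Mean absolute error between forecasts and observations."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

if __name__ == "__main__":
    temps = [14.2, 14.5, 14.4, 14.9, 15.1, 14.8, 14.7]  # hypothetical daily means
    pred = persistence_forecast(temps)
    print(mean_abs_error(pred, temps[1:]))  # -> 0.25
```

        If a model’s out-of-sample error is no better than the persistence score, the model adds no skill; that is the standard Mike Jonas is invoking.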

      • We certainly live in different worlds. You believe models are “data” and that even if they are wrong we can discuss and act on them. Try that attitude with large-scale chemical processes involving nitric acid: if you’re significantly off, you shut down a process plant for days or weeks. I couldn’t afford an attitude quite as cavalier as yours. Why should we use models that are inherently inaccurate, predicting to the high side as you say, to rearrange the global economy? Why are these inaccurate projections treated as gold standards, with all kinds of government and “scientific” pronouncements treating them as fact? I believe we should demand more rigor.

      • Mosher says: “Unless you have an alternative, you dont get to play the game. sorry.” In what world is that how things work? I can say for a fact that the proposed bullet train for California 1) will go way over budget, 2) will never be finished and 3) if it were finished would never have the ridership claimed because there aren’t that many people traveling that route in the first place. Do I need a model? No. Do I need to build an alternate train? No. I just need to audit their plan and financials and see that they employ both unicorns and pixie dust to make it all work, that the state does not now have and will never have enough $ to pull this off, etc.
        Just because you need to make a decision does NOT mean that you must use some lame-ass tool just because it is there, whether a GCM or a financial model or a stock market forecast (do you believe those?).

      • Joshua from War Games:

        A strange game. The only winning move is not to play. How about a nice game of chess?

      • Steven Mosher, on “The probability of them being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model”:

        “Wrong.”

        No, not wrong.

        Absolutely 100% correct.

      • Mosher, it is true they predict. The question is, with what quality. The answer is: with very poor quality. The Pause is one diagnostic. Empirical sensitivity estimates are another. The problems are inherent. Read the several essays on this in Blowing Smoke. You can find them using the introduction’s road map. Read them, check the footnotes, then get back. With facts, not opinions.

      • Steve, the models project and never predict.
        You should perhaps read around the subject before you pontificate.

      • I thought Joshua was the scientist’s son and the computer was WOPR.

      • Craig nails it. Mosher, as often is the case, is simply full of it.

        Mosher, tell FDA the next time they come to audit your medical device development and manufacturing site that they can’t play unless they have a QMS of their own.

        Tell SEC the next time they come to audit your financial services firm they can’t play unless they have analysis models of their own.

        Your naiveté and/or arrogance shows little bounds.

        Putz.

      • Steven Mosher,

        You wrote –

        “GCMs can predict.
        GCMs do predict
        Whether or not you can use the prediction is a pragmatic decision. . . .

        . . . Unless you have an alternative, you dont get to play the game.

        sorry.”

        First, the GCM predictions are no more useful than no prediction at all, if results to date are taken into account.

        Second, I presume the game to which you refer is the game of transferring money by force from the taxpayers’ often meagre accounts, into one’s own. I accept that people such as yourself set the rules. The Warmists have played the game well.

        A possible problem with your admonition is that it depends on people, whom you dislike, wanting to play the game under conditions imposed by yourself. Your ersatz apology may be seen as patronising and condescending, and not sincerely meant. Of course, I may be wrong.

        It is a symptom of delusional psychosis that the sufferers believe, sincerely, that they are obliged to provide expert guidance and direction to people who don’t share their delusion.

        You might find that people don’t want to play your game, as they stand no chance of winning, or even breaking even. I wish you luck with recruiting new players. Unfortunately, the supply of gullible simpletons willing to fund the game organisers is diminishing – do you agree?

        Live well and prosper,

        Mike Flynn.

      • Proscribe or prescribe? Dolt.

      • Using a high side estimate in private behavior is one thing. But when you are seeking public policy changes (and public funding) you run the risk of significant overspending (aka, inefficient resource allocation) in your response and necessarily short-changing competing needs. If you base policy on high side projections the fear of “mushroom clouds” can always trump the opposition. Examples include the US defense budget and health care costs. Heaven help us if climate change policy follows that path.

      • “All GCMs are able to predict the climate.”

        So can astrologists, phrenologists, and tea leaf readers.

        And do we really need to deal with this hackneyed argument again?

        “pragmatic decisions cannot be made in a vacuum.
        They are made in a space where alternatives are offered.

        Unless you have an alternative, you don’t get to play the game.
        sorry.”

        No. No one can model the Earth’s climate with the precision, accuracy and certainty claimed by those who would have the government seize control over the global energy economy.

        No one has to propose an alternative method of predicting that which cannot be predicted (accurately; obscurantists can be so boring in their consistency sometimes) in order to reject the enormous public policy proscriptions of those who have no ability to do so themselves.

      • Basil Newmerzhycky

Mosher’s comment “All GCMs are able to predict the climate.
every last one of them.” is basically correct. Some models running a little off (but still within the margins of error, when you include the long-term trend and not just the short term of the past decade) is not a reason to not use the models.

To suggest GCMs need improving is OK. To say we need to do away with them is as ludicrous as saying “Since the WRF model missed the last NYC snowstorm we should do away with all models”. This “book-burning” is the antithesis of climate science. I would hope Dr. Curry would come out and tell everyone this.

      • I am with bay-zil, on this one. All climate models can predict the climate. That’s why they call them climate models! Just like all econometric models can predict the economy. Try to follow this…that’s why they call them econometric models! Simple as that. Get it? Now ask me why we need so many freaking models to predict the same freaking thing and I will refer you back to bay-zil. He really smart.

      • We don’t need to burn the climate models. I’m not sure if you can keep bits lit anyway. No matter.

All we have to do is adjust the internal parameters and/or approximations, so the mean of the models lies on top of the global temp record. It would be great if we could get the 14,000-foot temp mean to match the global sat record. But it would be an improvement if the models’ mean could simply match the land temp construction.

      • OMG! Jim2, that sounds like tuning parameters. They don’t do that, according to Mosher. Watch Mosher’s video:

        https://www.newton.ac.uk/seminar/20101207110011401

        After seeing that, I am convinced our money would be better spent hiring lingerie models to make climate predictions.

      • Great idea, Don

      • Basil Newmerzhycky

“Now ask me why we need so many freaking models to predict the same freaking thing and I will refer you back to bay-zil. He really smart.”

        There is a very good reason why short term weather forecasting and longer term climate forecasting involves a number of models instead of one.

No model is perfect… at best there are rough but good estimates of future conditions, and many models have built-in biases that occur randomly.

But running several models in an ensemble, and taking the ensemble mean, has proven to be a superior way of pushing accuracy past the limits of any one model by minimizing outlier runs. This interpretation is the same for modeling anything… weather, climate, economics, etc.

For example, that’s why the best medium-range forecasters often use a combo of the ECMWF, GFS, and UKMET solutions for the best forecast. They know one of the models is great in the Arctic but weak in the tropics. Another is better in the tropics but weak in the mid-latitudes. One under-does air-sea interactions but really nails continental air masses. Impossible to build all of that into one model, whether it’s “Weather” or “Climate”.
I’m sure there is even a model on how many Penthouse magazines Don burns thru in one evening sitting. Hope that helps. Otherwise, if there is anything else I can help you with, just let me know. :)
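The ensemble-mean argument above can be made concrete with a toy sketch (all numbers purely illustrative, not real model output): several “models” that each carry their own bias plus random error, averaged together.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))   # a stand-in "true" signal

# Five hypothetical models: each has its own fixed bias plus random error.
biases = [0.4, -0.3, 0.2, -0.1, 0.25]
models = [truth + b + 0.3 * rng.standard_normal(truth.size) for b in biases]

def rmse(series):
    """Root-mean-square error against the stand-in truth."""
    return float(np.sqrt(np.mean((series - truth) ** 2)))

individual = [rmse(m) for m in models]
ensemble = rmse(np.mean(models, axis=0))

print(f"mean single-model RMSE: {np.mean(individual):.3f}")
print(f"ensemble-mean RMSE:     {ensemble:.3f}")
```

Averaging cancels the random errors and the offsetting biases, which is the effect Basil describes. Note the caveat, though: a bias *shared* by every model would survive the averaging untouched, which is the standard objection to treating the ensemble mean as truth.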

      • We know about the inaccuracy of medium range forecasts and the story about model ensembles, bay-zil. We can get that dogmatic drivel from jimmy dee ten times a day. You are superfluous.

      • In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

        http://www.ipcc.ch/ipccreports/tar/wg1/505.htm

        Here the IPCC in the TAR is talking perturbed physics models. I do wish people would catch up with 20th century climate science.

    • A fan of *MORE* discourse

      Mike Jonas  proclaims [utterly wrongly] “The probability of them [GCMs] being able to predict climate months, years or decades ahead is zero, because of the exponential build-up of errors in such a model.”

      Please allow me to join Steven Mosher in noting that this widespread belief (among denialists) is utterly mistaken.

A terrific example — one that fortunately is free of ideological entanglements — is the dynamics of the solar system, as simulated in general orbital models (GOMs).

GOM dynamics is known to be chaotic, with a characteristic Lyapunov time of roughly 5–10 million years. Despite these chaotic features, GOM integrations spanning billions of years are commonplace and useful — provided that the GOM integration scrupulously respects dynamical conservation of energy, angular momentum, and symplectic phase-space volume — in that the statistical properties of solar system dynamics are modeled accurately.

      For mathematical details see Konstantin Batygin and Gregory Laughlin’s On the dynamical stability of the solar system (2008). Also good is Renu Malhotra, Matthew Holman, and Takashi Ito’s Chaos and stability of the solar system.

      Best wishes for happy learning regarding general orbital models (GOMs) and general climate models (GCMs) are extended to all Climate Etc readers!


Orbital mechanics requires only a few degrees of freedom to solve. Therefore it is more or less computable.
Navier–Stokes is, of course, infinite-dimensional. Therefore it must be discretized, which immediately introduces error. Then the resulting millions of degrees of freedom must be integrated while trying to preserve proximity to the original equation – not computable.

It’s like comparing a high-school 100 m with an Olympic 100 m.
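The error-growth claim under dispute here can be seen in miniature with the Lorenz-63 system, the classic three-variable toy example of chaos (this is an illustrative sketch, not a statement about any particular GCM):

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a 3-variable chaotic toy model."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # identical except for a 1e-8 nudge

for step in range(4001):
    if step % 1000 == 0:
        print(f"t = {step * 0.005:5.2f}   separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation between the two trajectories grows by many orders of magnitude: that is the exponential build-up of error Jonas describes. Whether spatially averaged *statistics* of such a system nonetheless remain predictable is exactly the point the two sides of this thread are arguing.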

And only a mildly chaotic system to boot (planets).
Many chaotic systems show exactly this kind of behavior: long-term stability (and linear error growth) followed by regions of chaotic instability and exponential error growth.

        Statistics? What exactly is your claim?

        pure drivel, sorry.

      • “Conclusion General circulation models (GCMs) are not presently, and never have been, generally regarded as providing the strongest scientific evidence for the accelerating reality and harmful consequences of anthropogenic global warming (AGW).”

I mean, scroll down a bit, FOM. On the same thread you are both conceding that GCMs are not accurate, and then here you go claiming they are accurate.

        Logic fail.

GCMs are accurate. Statistics of GCMs are accurate.

        Yeah and God is the tooth fairy.

      • A fan of *MORE* discourse

        These physical principles are not complicated “Nickels”!

        Remember  local dynamics is chaotic; global thermodynamics is well-conditioned:

        Again  local weather is chaotic; global climate is well-conditioned:

        Yet again  that’s why the sea-level is rising, the oceans are heating (and acidifying), and the polar ice is melting … all without pause or near-term limit … despite the chaotic dynamics of local weather.

        Nickels, it is a pleasure to assist your mathematical and physical understanding!


So the claim is that even though we cannot compute local weather, if we do so everywhere and sum all the results, they will be correct?

So, in other words, if we run a climate simulation and one month out the solution has 100% error (which it will), nevertheless by averaging we will have the correct climate.

        Well, no.

        Let me explain to you how mathematical modeling works.

        1) You have reality.
2) You have your model.

3) You run your model. It begins to diverge from reality. Before it completely diverges, some things (like global averages) might still have some accuracy, which can be demonstrated by a posteriori analysis.

        4) Your solution finally diverges completely.

        5) Your solution is worthless except for making pretty pictures and feeding the CAGW propaganda machine.

        There is no diverging completely from the local solution but still maintaining some sort of global accuracy.
        This is nonsense.

      • A fan of *MORE* discourse

        nickels deposes  “Let me explain to [Climate Etc] readers how mathematical modeling works.”

Broad-range scientific researchers like Frenkel and Smit, and corporations like ANSYS, alike will be very surprised to learn (from nickels) that their (prospering!) dynamical simulation enterprises cannot possibly generate predictions of value.


Well, if only climate models had the capabilities of a tool like ANSYS.
ANSYS has adaptivity as well as some basic error estimators.
And, more importantly, engineers know how to use it. They understand the limitations. It’s one thing to try to find the max stress on a part within a vat of fluid, quite another to integrate the entire world forward 100 years.

      • SkepticGoneWild

        FOMBS knows how to cut and paste. That’s it.

      • Basil Newmerzhycky

“Fan of More Discourse” is absolutely correct in that short-term “weather models” used to forecast days 1–10 are very susceptible to “chaos”, whereas longer-term global climate operates in a closed system, with greatly minimized chaotic effects.

Nickels, I suggest you familiarize yourself with how both numerical weather models and GCMs work. They are totally different, and a 7-day weather model prognosis is probably less challenging than a 7-year climate model run. You might want to look over some of those links that Discourse provided; they might help you understand modeling and chaos theory.

      • Models are not constrained at all and continue to diverge from close starting points.

        Climate states shift abruptly and more or less extremely every 20 or 30 years – and the physics of these internal climate shifts are utterly unpredictable.

Anyone who claims that an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system (where we don’t know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones) – hence subject to, inter alia, extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.

      Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

      You can add as much computing power as you like, the result is purely to produce the wrong answer faster.

      Adding “ensembles” of models together and hoping to get anything sensible is about as helpful as producing an ensemble of tea leaf readers or black cock entrail experts.

Two premises are at odds. If it is assumed that model input complexity increases reliability, why then should feedbacks be adjusted to yield sensitivity ranges congruent with early, less-sophisticated models?

Obviously, it can be argued that increased input complexity will yield more reliable results. But this raises the question of whether there is a universal downward bias in initial simulations, prior to the adjustments for congruence with early-model sensitivity ranges.

  13. I especially like the line: “Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’. “

  14. A thoroughly enjoyable read. I am sure many skeptics feel vindicated, since many of the author’s comments confirm their perceptions.

    I like this paragraph in particular:

Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results. Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5 °C and 4.5 °C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.

    In other words, rock the boat at your peril.

    The doubts are going to have to come from within. No amount of exogenous criticism will do the trick. Institutional instability begins at the core and works out. Termites come to mind.

  15. Curious George

I question the mathematical erudition of GCM modelers. They use a latitude–longitude grid, which is perfectly good for low and middle latitudes but woefully suboptimal in polar regions. There is a whole body of experience on grid optimization, the so-called “finite element method”.

But even in an article titled “The Finite Element Sea Ice-Ocean Model (FESOM) v.1.4” we read “To avoid the singularity [on a triangle covering the North Pole] a spherical coordinate system with the north pole over Greenland (40° W, 75° N) is used.” Q.E.D.
    http://www.geosci-model-dev.net/7/663/2014/gmd-7-663-2014.pdf
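The pole problem is easy to quantify: on a uniform latitude–longitude grid, cell areas collapse toward the poles. A quick sketch for a hypothetical 1° × 1° grid on a spherical Earth:

```python
import numpy as np

R = 6.371e6                      # mean Earth radius in metres
dlat = dlon = np.deg2rad(1.0)    # a hypothetical uniform 1-degree grid

def cell_area_km2(lat_deg):
    """Area of a 1x1 degree cell centred at lat_deg, on a spherical Earth."""
    lat = np.deg2rad(lat_deg)
    area_m2 = R**2 * dlon * (np.sin(lat + dlat / 2) - np.sin(lat - dlat / 2))
    return area_m2 / 1e6

for lat in (0.0, 60.0, 89.5):
    print(f"lat {lat:5.1f} deg: cell area = {cell_area_km2(lat):10.1f} km^2")
```

The near-pole cell comes out more than a hundred times smaller than the equatorial one, which forces either tiny time steps (via the CFL condition) or special fixes such as the rotated pole used by FESOM.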

    • There is definitely some crazy stuff going on with a lot of these legacy climate models. Not only the pole issue, but various pressure coordinate systems, etc. Switching to spherical coordinates, dropping terms, etc, etc…
      The math is all sound, but a lot of these decisions and approaches are more historical than optimal.

To me, the failing, and where finite elements have the (untapped) potential to help with climate modeling, is in the error estimates and convergence statements, as well as the simplicity of rolling the coordinate transforms into the method, as opposed to changing coordinates before discretizing.

      A model without some estimate of its error and without the ability to do grid convergence studies leaves much to be desired.
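A grid-convergence study of the kind described here is simple in principle: compute the same quantity on successively refined grids and check the observed order of convergence. A minimal sketch using the composite trapezoid rule stands in for a PDE discretization (the same logic applies):

```python
import numpy as np

def trapezoid(n):
    """Composite trapezoid approximation of the integral of exp(x) on [0, 1], n intervals."""
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.exp(x)
    return (1.0 / n) * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Solutions on grids of spacing h, h/2, h/4.
f_h, f_h2, f_h4 = trapezoid(32), trapezoid(64), trapezoid(128)

order = np.log2((f_h - f_h2) / (f_h2 - f_h4))     # observed order; trapezoid gives ~2
err_estimate = abs(f_h4 - f_h2) / (2**order - 1)  # Richardson-style error estimate

print(f"observed convergence order: {order:.3f}")
print(f"estimated error on finest grid: {err_estimate:.2e}")
```

This is exactly the a posteriori machinery nickels says climate models lack: an observed order plus a defensible error bar, obtained without knowing the exact answer.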

  16. Matthew R Marler

    Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.

Held and Soden, in their paper entitled “Robust Responses of the Hydrological Cycle to Global Warming”, make that claim explicitly. With almost all of the model runs being too high for out-of-sample data since the models were run, it is just as reasonable to hypothesize that what the models have in common is a mistake. Or a common set of mistakes.

    More comprehensive models that incorporate more physics are considered more suitable for climate projections and climate change science than their simpler counterparts because they are thought to be better capable of dealing with the many feedbacks in the climate system.

    With finite data, or with a modeling system that is growing more and more complex at a faster rate than the data sets are growing, model complexity can reduce model reliability significantly, unless the more complex models are significantly more accurate. That is due to instability (multiple correlation) and inaccuracy in the parameter estimates — inaccuracy and variability of the parameter estimates are generally greatly underestimated.
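The complexity-versus-data point can be illustrated with a toy regression (illustrative only; nothing here is a climate model): with a fixed, small dataset, a near-interpolating model has unstable parameter estimates and much worse out-of-sample accuracy than a modest one.

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(2 * np.pi * x)          # the "real system"

x_train = np.sort(rng.uniform(0.0, 1.0, 15))      # only 15 noisy observations
y_train = true_f(x_train) + 0.2 * rng.standard_normal(x_train.size)
x_test = np.linspace(0.0, 1.0, 200)               # out-of-sample points

results = {}
for degree in (3, 14):                            # modest vs. near-interpolating model
    coeffs = np.polyfit(x_train, y_train, degree)
    resid = np.polyval(coeffs, x_test) - true_f(x_test)
    results[degree] = float(np.sqrt(np.mean(resid ** 2)))
    print(f"degree {degree:2d}: out-of-sample RMSE = {results[degree]:.3f}")
```

The degree-14 fit hugs the training noise and oscillates wildly between and beyond the observations, exactly the instability-of-parameter-estimates effect described above.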

    Personally, I don’t believe that a new “paradigm” is needed: there are models of many levels of complexity and “physical reality”. What I think is needed is much more work in collecting and analyzing the most appropriate data possible, and continued work within the existing “paradigm”, or “paradigms” (the word was a little vague at the start, and has morphed beyond any definition at all, imho.)

    I wish the author success in getting the main points published in a prestigious journal.

    • Matthew R Marler

      Ah. It is done in the European style, and some of the work has already been published in peer-reviewed journals.

  17. A fan of *MORE* discourse

    Climate Etc readers will be delighted to learn that Alexander Bakker’s thesis — which questions the scientific primacy of numerical general circulation models (GCMs) — has found strong support from James Hansen and a long list of collaborators, who represent strong climate-research groups from around the world!

    An Old Story, but Useful Lessons
    James Hansen, 26 September 2013

    Introduction  International discussions of human-made climate change (e.g., IPCC) rely heavily on global climate models, with less emphasis on inferences from the paleo record. A proper thing to say is that paleoclimate data and global modeling need to go hand in hand to develop best understanding — almost everyone will agree with that. […]

    There is a tendency in the literature to treat an ensemble of model runs as if its distribution function is a distribution function for the truth, i.e., for the real world.

    Wow. What a terrible misunderstanding.

    Today’s models have many assumptions and likely many flaws in common, so varying the parameters in them does not give a probability distribution for the real world, yet that is often implicitly assumed to be the case. […]

    Conclusion  It is not an exaggeration to suggest, based on best available scientific evidence, that burning all fossil fuels could result in the planet being not only ice-free but human-free.

    ——
    Q&A with James Hansen
    13 December, 2013

    Our paper [“Assessing Dangerous Climate Change: Required Reductions of Carbon Emissions to Protect Young People, Future Generations and Nature”] is based fundamentally on observations, on studies of earth’s energy imbalance and the paleoclimate rather than on climate models.

    Although I’ve spent decades working on [climate models], I think there probably will remain for a long time major uncertainties, because you just don’t know if you have all of the physics in there.

    Some of it, like about clouds and aerosols, is just so hard that you can’t have very firm confidence.

    So yes, while you could say most of these [messages] you can find one place or another, but we’ve put the whole story together. The idea was not that we were producing a really new finding but rather that we were making a persuasive case for the judge.

Please note the long list of coauthors associated with these strong statements: James Hansen, Pushker Kharecha, Makiko Sato, Valerie Masson-Delmotte, Frank Ackerman, David J. Beerling, Paul J. Hearty, Ove Hoegh-Guldberg, Shi-Ling Hsu, Camille Parmesan, Johan Rockstrom, Eelco J. Rohling, Jeffrey Sachs, Pete Smith, Konrad Steffen, Lise Van Susteren, Karina von Schuckmann, James C. Zachos.

    And Naomi Oreskes too has long been a critic of over-reliance upon complex numerical models:

    Naomi “Merchants of Doubt” Oreskes
    Slams “Corrosive” Climate Change Skepticism

    Our 1994 paper in the journal Science [“Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences”] took a critical look at numerical simulation models.

    It’s never been my view that we should trust science uncritically. I’ve always been interested in the questions: How do we know when to trust science? How do we distinguish between healthy and corrosive skepticism? In short, how do we judge scientific claims?

    In climate science, the case for the reality of anthropogenic climate change does not rest solely (or even primarily) on climate models. If it did, I’d be a skeptic too.

    I still believe what I wrote in 1994: models are a tool for exploring and testing systems. Their primary value is heuristic. But together with other lines of evidence they can be part of a persuasive scientific case. Or not.

    Conclusion  General circulation models (GCMs) are not presently, and never have been, generally regarded as providing the strongest scientific evidence for the accelerating reality and harmful consequences of anthropogenic global warming (AGW).

    To assert otherwise is — as Bakker, Hansen, and Oreskes all emphasize — “a terrible misunderstanding.”

Good on ’yah Alexander Bakker … for joining James Hansen and Naomi Oreskes in publicly proclaiming this common-sense scientific reality!


    • AFOMD, let’s take the simple approach to becoming ardent fans of more discourse:

      Climate scientists have moved the goalposts for evaluating the predictions of the climate models by adopting a new approach to GCM verification. Rather than using the central tendency of the GMT mean, they are, for all practical purposes, now using the trend in peak hottest years which occur every four or five years as their basis for GCM verification: 1998, 2005, 2010, 2014 etc.

      Gavin Schmidt: “With the continued heating of the atmosphere and the surface of the ocean, 1998 is now being surpassed every four or five years, with 2014 being the first time that has happened in a year featuring no real El Niño pattern.” Gavin A. Schmidt, head of NASA’s Goddard Institute for Space Studies in Manhattan, said the next time a strong El Niño occurs, it is likely to blow away all temperature records.

      With that as background, please tell us in 100 words or less why — while CO2 is currently increasing at the higher RCP8.5 emission scenario — there is now a dramatic slowdown in global warming in progress. Tell us in simple, understandable words why there is not an AGW Credibility Gap developing, as is illustrated on this graph:

      • I assume you meant “now” in:

        “Tell us in simple, understandable words why there is not an AGW Credibility Gap developing, as is illustrated on this graph:”

      • rogerknights, since I am addressing my question to AFOMD, who apparently believes that the recent pause in global warming in the face of ever-rising CO2 concentrations doesn’t represent the kind of scientific issue which should cast doubt on the supposed mechanisms of global warming, and on the ability of the GCM’s to project future global warming, then I choose to phrase my question to him using the negative element ‘not’.

      • A fan of *MORE* discourse

        Betablocker, your concerns are addressed in the reply (above) to “nickels.”

        In a nutshell  local dynamics (namely “weather”) is chaotic; global thermodynamics (namely “climate”) is well-conditioned

        Betablocker (and nickels too), it is a continuing pleasure to assist your understanding of the Bakker/Hansen/Oreskes thesis. Namely, that thermodynamics is primary, while numerical models are secondary, in regard to the scientific understanding of climate-change!


Well, good on ye Hansen and others for taking care with climate models.
      But as far as I can tell, the substitute is:

A) Make an interesting but unsupported simplification (essentially, assume a linear system – the greenhouse model):

      “We hypothesize that the global climate variations of the Cenozoic (figure 1) can be understood and analysed via slow temporal changes in Earth’s energy balance, which is a function of solar irradiance, atmospheric composition (specifically long-lived GHGs) and planetary surface albedo. ”

B) Come up with a ‘climate sensitivity’ factor (using a simplified climate model). Again, assumptions of extreme linearity; there is no reason such a ‘factor’ need exist….

“Climate sensitivity (S) is the equilibrium global surface temperature change (ΔTeq) in response to a specified unit forcing after the planet has come back to energy balance,

S = ΔTeq / F,     (5.1)

i.e. climate sensitivity is the eventual (equilibrium) global temperature change per unit forcing. Climate sensitivity depends upon climate feedbacks, the many physical processes that come into play as climate changes in response to a forcing. Positive (amplifying) feedbacks increase the climate response, whereas negative (diminishing) feedbacks reduce the response.”

      C) Forecast doom:

      “We conclude that the large climate change from burning all fossil fuels would threaten the biological health and survival of humanity, making policies that rely substantially on adaptation inadequate.”

      All they seem to have done in place of models is make an extreme simplifying assumption and then run their extremely simple model to forecast doom.

      All interesting work, but not any more conclusive.

      (from FOMS link)
      http://rsta.royalsocietypublishing.org/content/371/2001/20120294
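The ‘climate sensitivity factor’ arithmetic criticized in (B) really is a one-line linear model, ΔTeq = S × F. A sketch with commonly quoted illustrative round numbers (not values taken from the Hansen paper):

```python
# Linear energy-balance arithmetic assumed in (B): delta_T_eq = S * F.
# Both numbers below are illustrative round figures, not from the paper.
S = 0.75            # assumed sensitivity, degrees C per (W/m^2) of forcing
F_2xCO2 = 3.7       # widely quoted radiative forcing for doubled CO2, W/m^2

delta_T_eq = S * F_2xCO2
print(f"equilibrium warming for doubled CO2 under this linear model: {delta_T_eq:.1f} C")
```

The entire content of the simplification is this single multiplication; every contested question (feedbacks, nonlinearity, state dependence) is hidden inside the one number S.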

  18. Matthew R Marler

    Here is a humorous quote from the thesis: In the few decades’ history of climate change assessments, a lot of effort has been invested in the construction and in the procedural aspects. Remarkably enough, these assessments are rarely evaluated (Hulme and Dessai 2008b; Enserink et al. 2013). This seems a little strange as the sparse evaluations of climate assessments all conclude that there is a huge gap between the provided information on climate change and the user needs (e.g. Tang and Dessai 2012; Keller and Nicholas 2013; Enserink et al. 2013).

  19. Judith Curry

    “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

My query: at what time have academic and government funding priorities NOT been driven by political correctness? From the earliest tome of Ptolemy’s Almagest, geocentric mathematical models that survived into the Middle Ages, to current tuned GCMs that survive learned societies’ (Royal, American Academy of Science, etc.) scrutiny. I think Eisenhower’s farewell address was merely recapitulating what was obvious from the past, as most military leaders have been students of history.

    To me, Bakker’s most heretical statement is: “Then section 2.4 elaborates on the pitfalls of fully relying on physics.” Bingo!

    I know Judith that you have a new edition of your atmospheric physics textbook coming out. However, if we knew all the physics that was relevant to making climate predictions and that physics were incorporated into the GCM’s and nothing more, we would all be talking about something else like: when are we going to achieve warp speed.

    Time to trash GCMs, and start all over again; after all, we have learned a thing or two since: “The founding assessments of Charney et al. (1979) and Bolin et al. (1986) did see the great potential of future GCMs, but based their likely-range of ECS on expert judgment and simple mechanistic understanding of the climate system.”

From the earliest tome of Ptolemy’s Almagest, geocentric mathematical models that survived into the Middle Ages, […]

      In fact it goes much farther back than that. Ptolemy lived in late Antiquity, at which time the original Academy founded by Plato had long since been perverted by subsidies and other favors from Hellenistic “kings” in Alexandria and Pergamon. Documents surviving from the “classical” age of Philosophy were cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor. Then they, and Hellenistic writings depending on them, were cherry-picked over again depending on how well they fit the interests of Roman conquerors, first late Republican, then associated with the Caesars/Imperium.

      Then they were all burned, the surviving documents being those popular enough to survive in country houses and private libraries of powerful Romans. Then they were cherry-picked again by early Christian “Philosophers” looking to rationalize their religious beliefs with natural science. Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

      • AK

        I see not only military leaders are students of history. Thank you.

I enjoy learning more about the past in spite of the reconstructions performed to fit a current popular narrative. The more I read different perspectives, the more I see the similarities to the present.

      • > Documents surviving from the “classical” age of Philosophy were cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor.

        Citation needed.

      • > Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

        See for yourself:

        The Catechism of the Catholic Church states, “All forms of divination are to be rejected: recourse to Satan or demons, conjuring up the dead or other practices falsely supposed to ‘unveil’ the future. Consulting horoscopes, astrology, palm reading, interpretation of omens and lots, the phenomena of clairvoyance, and recourse to mediums all conceal a desire for power over time, history, and, in the last analysis, other human beings, as well as a wish to conciliate hidden powers. They contradict the honor, respect, and loving fear that we owe to God alone” (CCC 2116).

        http://www.catholic.com/tracts/astrology

      • Everybody knows we have already run out of space…

        http://www.independent.co.uk/news/science/eternal-life-could-be-achieved-by-procedure-to-lengthen-chromosomes-10020974.html

        yet scientists could be ready to practice on their view of eternity tomorrow.

      • Citation needed.

        The Vanished Library: A Wonder of the Ancient World. by Luciano Canfora, Translated by Martin Ryle. Berkeley and Los Angeles: University of California Press, 1990. Pp. ix + 205; 6 figures. ISBN 0-520-07255-3 (pb). As with all such references I give, I’ve done considerable research beyond it, and am willing to vouch for it, somewhat. The paraphrases are mine.

        It’s a book highly worth reading, IMO.

As for “astrology”, even Isaac Newton would have described himself as an “astrologer”. The division of the subject into “astrology”, which is fortune-telling, and “astronomy” came later:

        Since the 18th century they have come to be regarded as completely separate disciplines. Astronomy, the study of objects and phenomena originating beyond the Earth’s atmosphere, is a science[2][3][4] and is a widely-studied academic discipline. Astrology, which uses the apparent positions of celestial objects as the basis for the prediction of future events, is defined as a form of divination and is regarded by many as a pseudoscience having no scientific validity.[5][6][7]

        Just out of curiosity, do you really care about this, or are you just trying to waste my time?

        @RiHo08…

        You’re welcome. Always glad to pass along some of what I’ve learned in my misspent life, to anyone actually interested.

        AK

      • > As for “astrology”, even Isaac Newton would have described himself as an “astrologer”.

        Sure, and Galileo himself, the first Denizen, did some too:

        http://www.skyscript.co.uk/galast.html

        These factoids are far from establishing that “some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them,” whatever “some of those rationalizations” may be. Unless we have some real Doctrine on the table, all we have is handwaving. More so given that this thesis might very well be undermined by yet another Wiki entry:

        Christianity has generally been in opposition to astrology, but its popularity led to it being woven into Christian beliefs at some periods and in some populations.

        http://en.wikipedia.org/wiki/Christian_views_on_astrology

        However hard William Lilly tried to sell his astrology as Christian, it never really was.

        ***

        Canfora is interesting, but it might not be the slam dunk AK’s “paraphrases” show:

        http://www.jstor.org/discover/10.2307/41592179

        The truth is out there even in ancient history, I guess.

      • These factoids are far from establishing that “some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them,” whatever “some of those rationalizations” may be.

        You’re missing my point. (Deliberately?) It’s generally accepted that the Ptolemaic model was considered “part of Church doctrine”, although I suppose some nit-picking could be done whether anything beyond its basic geocentrism was considered “mandated by Scripture”. However, at that time the entire Ptolemaic model was considered “Astrology”, because the term “astronomy” was limited to identifying names of stars. That’s why I used the word I did. Primarily to remind readers that there have been many changes since then.

        For that matter, the dividing line between “divination” and predicting the positions of “Heavenly objects” based on mathematical models has also changed.

      • Hey, the flights of birds and the perambulations of guts are way more pertinent and perspicacious, particularly to the birds and the turds.
        ====================

      • > It’s generally accepted that the Ptolemaic model was considered “part of Church doctrine” […]

        This conflates Ptolemy’s astrology with the Ptolemaic model, AK.

        Here’s Ptolemy’s Tetrabiblos:

        http://www.astrologiamedieval.com/tabelas/Tetrabiblos.pdf

        Also note that to speak of the “Ptolemaic model” presumes it was his invention. It was not. Nor was it “his” astrology.

        You may not find any astrological elements in the Christian doctrines, AK. There’s an obvious reason for that. Think.

        ***

        Here’s how to substantiate a claim that “documents” get “cherry-picked for preservation and publication depending on how well they fit the preconceptions of a bunch of political academics dependent on such “royal” favor” in case you ever feel like it:

        The Harper government has dismantled one of the world’s top aquatic and fishery libraries as part of its agenda to reduce government as well as limit the role of environmental science in policy decision-making.

        http://thetyee.ca/News/2013/12/09/Dismantling-Fishery-Library/

        ***

        I don’t know who’s wasting whose time here.

      • This conflates Ptolemy’s astrology with the Ptolemaic model, AK.

        No, the Ptolemaic model was part of “Ptolemy’s astrology”.

        Also note that to speak of the “Ptolemaic model” presumes it was his invention. It was not. Nor was it “his” astrology.

        Ptolemy was a librarian. We call it the “Ptolemaic model” because the usual source was summary documents he wrote. Including a good deal of what today would be considered plagiarism of previous authors. Just as with “Euclidean geometry”, which, AFAIK, is generally thought to be summaries of “current knowledge”.

        Here’s how to substantiate a claim that “documents” get “cherry-picked for preservation and publication depending […]

        Well, IIRC it’s not too different from some of the original material cited by Canfora. In both cases, highly biased sources, of which observers are free to make their own interpretations.

        The parallel with the burning of the Library in Alexandria (assuming it actually happened, which Canfora questions) seems valid to me. In each case, the cherry-picking took place prior, in terms of which documents were copied and made available elsewhere.

        Unless they were all scanned and made available online by Google? I don’t know, and don’t care enough to research it, although kudos to Google if they were!

        I don’t know who’s wasting whose time here.

        I do.

      • > The Ptolemaic model was part of “Ptolemy’s astrology”.

        The Ptolemaic model is not in the Tetrabiblos, but in the Almagest:

        http://en.wikipedia.org/wiki/Almagest

        Maths and astronomy in one book, astronomy in another one.

        Fancy that.

      • Arch:

        > astronomy in another one.

        astrology in another one.

        ***

        Also note that the picture above is of the Almagest.

        ***

        We have yet to see the “rationalizations” the Church adopted as Doctrine.

        More on the Galileo controversy:

        http://www.catholic.com/tracts/the-galileo-controversy

        ***

        All this from a guy who pretends to have read Kuhn.

      • Maths and astronomy in one book, astronomy [sic: astrology?] in another one.

        Nope. Maths and one kind of astrology in one book, another kind of astrology (divination) in another.

        Only later did they diverge:

        For a long time the funding from astrology supported some astronomical research, which was in turn used to make more accurate ephemerides for use in astrology. In Medieval Europe the word Astronomia was often used to encompass both disciplines as this included the study of astronomy and astrology jointly and without a real distinction; this was one of the original Seven Liberal Arts. Kings and other rulers generally employed court astrologers to aid them in the decision making in their kingdoms, thereby funding astronomical research. University medical students were taught astrology as it was generally used in medical practice. [my bold]

        Astronomy and astrology diverged over the course of the 17th through 19th centuries. Copernicus didn’t practice astrology (nor empirical astronomy; his work was theoretical[12]), but the most important astronomers before Isaac Newton were astrologers by profession – Tycho Brahe, Johannes Kepler, and Galileo Galilei. Newton most likely rejected astrology, however (as did his contemporary Christiaan Huygens),[13][14][15] and interest in astrology declined after his era, helped by the increasing popularity of a Cartesian, “mechanistic” cosmology in the Enlightenment.

      • We have yet to see the “rationalizations” the Church adopted as Doctrine.

        Well, perhaps you have trouble seeing over your own towering preconceptions. From your own link:

        Centuries earlier, Aristotle had refuted heliocentricity, and by Galileo’s time, nearly every major thinker subscribed to a geocentric view. Copernicus refrained from publishing his heliocentric theory for some time, not out of fear of censure from the Church, but out of fear of ridicule from his colleagues.

        Many people wrongly believe Galileo proved heliocentricity. He could not answer the strongest argument against it, which had been made nearly two thousand years earlier by Aristotle: If heliocentrism were true, then there would be observable parallax shifts in the stars’ positions as the earth moved in its orbit around the sun. However, given the technology of Galileo’s time, no such shifts in their positions could be observed.

        […]

        In 1614, Galileo felt compelled to answer the charge that this “new science” was contrary to certain Scripture passages. His opponents pointed to Bible passages with statements like, “And the sun stood still, and the moon stayed . . .” (Josh. 10:13). This is not an isolated occurrence. Psalms 93 and 104 and Ecclesiastes 1:5 also speak of celestial motion and terrestrial stability. A literalistic reading of these passages would have to be abandoned if the heliocentric theory were adopted. Yet this should not have posed a problem. As Augustine put it, “One does not read in the Gospel that the Lord said: ‘I will send you the Paraclete who will teach you about the course of the sun and moon.’ For he willed to make them Christians, not mathematicians.” Following Augustine’s example, Galileo urged caution in not interpreting these biblical statements too literally.

        Unfortunately, throughout Church history there have been those who insist on reading the Bible in a more literal sense than it was intended.

        Etc. Given the source, and its jesuitical efforts to pretend to a church open to scientific challenges to its “immutable truths”, I would say this constitutes good evidence that the geocentric astrology of Aristotle and Ptolemy was regarded as doctrine.

        Note that the linked sophistry from the Roman Church completely ignores the issues of the moons of Jupiter and the phases of Venus:

        From September 1610, Galileo observed that Venus exhibited a full set of phases similar to that of the Moon. The heliocentric model of the solar system developed by Nicolaus Copernicus predicted that all phases would be visible since the orbit of Venus around the Sun would cause its illuminated hemisphere to face the Earth when it was on the opposite side of the Sun and to face away from the Earth when it was on the Earth-side of the Sun. On the other hand, in Ptolemy’s geocentric model it was impossible for any of the planets’ orbits to intersect the spherical shell carrying the Sun. Traditionally the orbit of Venus was placed entirely on the near side of the Sun, where it could exhibit only crescent and new phases. It was, however, also possible to place it entirely on the far side of the Sun, where it could exhibit only gibbous and full phases. After Galileo’s telescopic observations of the crescent, gibbous and full phases of Venus, therefore, this Ptolemaic model became untenable.

      • > Maths and one kind of astrology in one book, another kind of astrology (divination) in another.

        Fascinating. Let’s return to the main claim:

        Then some of those rationalizations were adopted by the Roman church as “immutable” doctrine, Ptolemy’s astrology among them.

        Was Ptolemy’s model the only “rationalization” adopted by the Roman church as “immutable” doctrine, or is there “some” more?

        Still no Doctrine.

        You know what a “Doctrine” means for the Church?

        Here’s a list:

        https://carm.org/basic-christian-doctrine

        Not much about the Ptolemaic model there.

        Galileo’s trial was not over the Ptolemaic model, by the way.

      • dont forget there were two suns

      • Was Ptolemy’s model the only “rationalization” adopted by the Roman church as “immutable” doctrine, or is there “some” more?

        Still no Doctrine.

        You know what a “Doctrine” means for the Church?

        doc·trine
        ˈdäktrən/
        noun
        noun: doctrine; plural noun: doctrines

        a belief or set of beliefs held and taught by a church, political party, or other group.
        “the doctrine of predestination”
        synonyms: creed, credo, dogma, belief, teaching, ideology; More
        tenet, maxim, canon, principle, precept
        “the doctrine of the Trinity”
        US
        a stated principle of government policy, mainly in foreign or military affairs.
        “the Monroe Doctrine”

        Creed?:

        Whosoever will be saved, before all things it is necessary that he hold the catholic faith. Which faith except every one do keep whole and undefiled; without doubt he shall perish everlastingly. And the catholic faith is this: That we worship one God in Trinity, and Trinity in Unity; Neither confounding the Persons; nor dividing the Essence. For there is one Person of the Father; another of the Son; and another of the Holy Ghost. But the Godhead of the Father, of the Son, and of the Holy Ghost, is all one; the Glory equal, the Majesty coeternal. Such as the Father is; such is the Son; and such is the Holy Ghost. The Father uncreated; the Son uncreated; and the Holy Ghost uncreated. The Father unlimited; the Son unlimited; and the Holy Ghost unlimited. The Father eternal; the Son eternal; and the Holy Ghost eternal. And yet they are not three eternals; but one eternal. As also there are not three uncreated; nor three infinites, but one uncreated; and one infinite. So likewise the Father is Almighty; the Son Almighty; and the Holy Ghost Almighty. And yet they are not three Almighties; but one Almighty. So the Father is God; the Son is God; and the Holy Ghost is God. And yet they are not three Gods; but one God. So likewise the Father is Lord; the Son Lord; and the Holy Ghost Lord. And yet not three Lords; but one Lord. For like as we are compelled by the Christian verity; to acknowledge every Person by himself to be God and Lord; So are we forbidden by the catholic religion; to say, There are three Gods, or three Lords. The Father is made of none; neither created, nor begotten. The Son is of the Father alone; not made, nor created; but begotten. The Holy Ghost is of the Father and of the Son; neither made, nor created, nor begotten; but proceeding. So there is one Father, not three Fathers; one Son, not three Sons; one Holy Ghost, not three Holy Ghosts. And in this Trinity none is before, or after another; none is greater, or less than another. 
But the whole three Persons are coeternal, and coequal. So that in all things, as aforesaid; the Unity in Trinity, and the Trinity in Unity, is to be worshipped. He therefore that will be saved, let him thus think of the Trinity.

        Furthermore it is necessary to everlasting salvation; that he also believe faithfully the Incarnation of our Lord Jesus Christ. For the right Faith is, that we believe and confess; that our Lord Jesus Christ, the Son of God, is God and Man; God, of the Essence of the Father; begotten before the worlds; and Man, of the Essence of his Mother, born in the world. Perfect God; and perfect Man, of a reasonable soul and human flesh subsisting. Equal to the Father, as touching his Godhead; and inferior to the Father as touching his Manhood. Who although he is God and Man; yet he is not two, but one Christ. One; not by conversion of the Godhead into flesh; but by assumption of the Manhood by God. One altogether; not by confusion of Essence; but by unity of Person. For as the reasonable soul and flesh is one man; so God and Man is one Christ; Who suffered for our salvation; descended into hell; rose again the third day from the dead. He ascended into heaven, he sitteth on the right hand of the God the Father Almighty, from whence he will come to judge the living[16] and the dead. At whose coming all men will rise again with their bodies; And shall give account for their own works. And they that have done good shall go into life everlasting; and they that have done evil, into everlasting fire. This is the catholic faith; which except a man believe truly and firmly, he cannot be saved.

        How much of the above actually depends on the principles of Paul and the Synoptic Evangelists, and how much on Neoplatonist rationalizations?

        Certain central tenets of Neoplatonism served as a philosophical interim for the Christian theologian Augustine of Hippo on his journey from dualistic Manichaeism to Christianity. As a Manichee, Augustine had held that evil has substantial being and that God is made of matter; when he became a Neoplatonist, he changed his views on these things. As a Neoplatonist, and later a Christian, Augustine believed that evil is a privation of good and that God is not material. Perhaps more importantly, the emphasis on mystical contemplation as a means to directly encounter God or the One, found in the writings of Plotinus and Porphyry, deeply affected Augustine. He reports at least two mystical experiences in his Confessions which clearly follow the Neoplatonic model. According to his own account of his important discovery of ‘the books of the Platonists’ in Confessions Book 7, Augustine owes his conception of both God and the human soul as incorporeal substance to Neoplatonism.

        Many other Christians were influenced by Neoplatonism, especially in their identifying the Neoplatonic One, or God, with Yahweh. The most influential of these would be Origen, who potentially took classes from Ammonius Saccas (but this is not certain because there may have been a different philosopher, now called Origen the pagan, at the same time), and the late 5th century author known as Pseudo-Dionysius the Areopagite.

        Granted, Augustine may have later formally abandoned Neoplatonism, but his methods of thought remain grounded in it (IMO).

  20. This is interesting. He was using GCMs to do real calculations for the real world, but he didn’t have the option of adjusting the real world to fit the GCMs.

    I expect that in the near future GCMs will become more widely used for just such practical applications. This thesis was therefore only a matter of time – but better sooner than later.

    I applaud Judith for giving it the attention it deserves. There aren’t many venues where this can happen.

  21. From the post:

    “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results. Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5 °C and 4.5 °C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as ’scientific robustness’.”

    Looks like an example of the anchor bias.

    http://en.m.wikipedia.org/wiki/Anchoring

  22. Pielke Jr. makes a pretty good case that the world has already reached consensus on the topic of how “robust” GCMs are in the only way that really counts.
    “While people will no doubt continue to enjoy debating about and witnessing to climate policies, the fact is, at the meta-level, that debate is pretty much over. Climate policy has entered its middle aged years.”

    https://theclimatefix.wordpress.com/2015/02/02/future-trends-in-carbon-free-energy-consumption-in-the-us-europe-and-china/

  23. ‘Especially the confirmation of earlier projected Equilibrium Climate Sensitivity between 1.5 °C and 4.5 °C seems to increase the perceived credibility of a model result.’ Huh – I think Nic Lewis would be very interested in seeing some of the “failed” models that produced ECS<2, that apparently never saw the light of day.

  24. The Netherlands, not to be confused with Holland…
    ‘Holland vs the Netherlands’ on YouTube – http://youtu.be/eE_IUPInEuc

    • nottawa rafter

      So confusing I am surprised they got this far to found their nice little town in West Michigan.

      Ahhh, the Dutch jokes I hear. About as good as the Yooper jokes (U.P.). Thanks for the link.
      :)

      • My personal favorite (40 years in West MI) – How was electrical wire invented? Two Hollanders fighting over a penny.

  25. Svend Ferdinandsen

    I speculate on why models are tuned to balance TOA radiation. With all the changes in temperature it can never be in balance, not even over 100s of years.

    • Curious George

      Unsure what balance you mean. Where most of us live, mornings are cold and then the day warms us. And winters are cold and summers are warm. An instantaneous balance? – clearly a fiction. A daily balance? Approximate at best. A yearly balance? Maybe. And remember that blackbody radiation depends on T**4 – not that I propose that treating the Earth as a black body is an acceptable model.
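The T**4 point has a concrete consequence worth spelling out: because emission scales with the fourth power of temperature, the flux computed from an average temperature is not the average of the fluxes. A minimal Python sketch (the day/night temperatures below are hypothetical, purely for illustration):

```python
# Sketch: why averaging T before applying the T^4 law gives a different
# answer than averaging the flux itself.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(t_kelvin):
    """Blackbody emission for an idealized surface (emissivity 1)."""
    return SIGMA * t_kelvin**4

# Hypothetical day/night surface temperatures (K), illustration only.
t_day, t_night = 300.0, 270.0

flux_of_mean = flux((t_day + t_night) / 2)       # flux at the 285 K mean
mean_of_flux = (flux(t_day) + flux(t_night)) / 2  # mean of the two fluxes

print(flux_of_mean, mean_of_flux)
# mean_of_flux > flux_of_mean: a fluctuating temperature radiates more
# than a steady one with the same average, so what counts as "balance"
# depends on the averaging period chosen.
```

This is just the convexity of T⁴, not a claim about any particular model's tuning.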

  26. Another natural bias is a lack of comprehension of paleo time-spans, leading to a false assumption in the early 1970s, looking back forty years, that we might be slipping back into an ice age, and after the warm 1980s to dusting off Arrhenius’ Green House Effect with a question mark, then in the warmer 1990s with an exclamation point. By 2001 the science was settled.

    But meanwhile looking at more and better paleo-reconstructions we get a better perspective of what Earth’s baseline is on her timescale.

    I think many more of the climate modeling community need to take note: solve paleoclimate first. Knowing the mega-influences (causing +/- 3C) may just go a long way toward getting a start on solving the minor ones (+/- 0.3C).

  27. The key: “Model results that confirm earlier model results are perceived more reliable than model results that deviate from earlier results.”

    News flash — Newton’s law of gravity rejected by climate modellers for not confirming earlier models based on epicycles.

    The key above is an admission of scientific corruption of the first order.

  28. The latest study from the United Nations Intergovernmental Panel on Climate Change found that in the previous 15 years temperatures had risen 0.09 degrees Fahrenheit. The average of all models expected 0.8 degrees. So we’re seeing about 90% less temperature rise than expected… In other words, for at least the next two decades, solar and wind energy are simply expensive, feel-good measures that will have an imperceptible climate impact. ~Bjorn Lomborg

    • “..15 years temperatures had risen 0.09 degrees …”

      Data uncertainty? Year-year fluctuation; decade-decade; century-century ?

      • …it’s always good to question these things – e.g., it’s really only been a rise of 2/100ths, but close enough for government work.
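The “about 90% less than expected” in the Lomborg quote follows from the two numbers given (0.09 degrees observed vs. 0.8 degrees expected over the 15 years) and is easy to check:

```python
# Check the "about 90% less than expected" arithmetic from the quote.
observed, modeled = 0.09, 0.8  # degrees F over 15 years, per the quote

shortfall = 1 - observed / modeled  # fraction by which warming fell short
print(shortfall)  # roughly 0.89, i.e. about 90% less than modeled
```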

  29. Perhaps it should be noted that The Netherlands is already built, to a considerable extent, on ground that is below present sea level. Relying on the world to mitigate is therefore not an option; they just have to adapt. I imagine that statistical and probabilistic models are also used and give results that make it less controversial to question the GCMs.

    • A fan of *MORE* discourse

      Gunnar Strandell notes [correctly] “The Netherlands is already built, to a considerable extent, on ground that is below present sea level.”

      Background 

      (1) Community A [the Netherlands] enjoys water-impermeable geology; dikes work well.

      (2) Community B [Florida] suffers from porous karst geology; dikes fail utterly.

      FOMD’s Paradox  Which community plans rationally, generations ahead, to deal practically with accelerating sea-level rise? Which community denies irrationally the reality of sea-level rise?

      Why this paradox? The world wonders!

      Conclusion  There’s no shortage of markets that are neither efficient nor rational.


  30. A model is a great Gothic cathedral of complexity which has been built on foundations of assumption, extrapolation and simplification…then plugged with bias and guess.

    In short, not something for adults, which is why we’re in the present mess.

    Need adults, so badly.

  31. Hi Judy – I agree; KNMI has shown some leadership in the climate issue.

    Several years ago I participated in a review of KNMI [https://pielkeclimatesci.wordpress.com/2012/03/19/climate-research-assessment-and-recommendations-in-the-report-of-the-2004-2009-research-review-of-the-koninklijk-nederlands-meteorologisch-instituut/]

    Our findings included

    “The generation of climate scenarios for plausible future risk, should be significantly broadened in approach as the current approach assesses only a limited subset of possible future climate conditions.”

    Roger Sr.

  32. Group of physicists

    THESE ARE THE FAULTS IN THE CLIMATE MODELLING PARADIGM …

    Hansen, Trenberth et al made the huge mistake of thinking they could explain Earth’s surface temperature by treating the surface as a black body (which it is not because there are sensible heat transfers also involved) and then adding the flux from the colder atmosphere to that from the Sun and then deducting the non-radiative outward flux and finally using the net total of about 390W/m^2 in Stefan-Boltzmann calculations to get a temperature of 288K.

    Of course to get the right result they had to fiddle the back radiation figure up to 100% of the incident Solar radiation before it enters the atmosphere. Thus they devised an energy-creating atmosphere which delivered more thermal energy out of its base than entered at its top.

    To obtain their “33 degrees of warming” they effectively assumed that the main “greenhouse” gas water vapor warms the surface by 10 to 15 degrees for each 1% concentration in the atmosphere. Then they had to promulgate the myth (proven contrary to evidence) that deserts are colder than rain forests, though they did not enlarge on that and admit their conjecture meant at least 30 degrees colder where there is a 3% difference in water vapor.

    Then they worked out their 255K figure (ignoring the T^4 relationship) and said it was the temperature about 5Km above the surface. Perhaps it is, but they then used schoolboy “fissics” and assumed the surface temperature would be the same in the absence of their GH gases. In fact the surface would receive less solar radiation than the region 5Km further up.

    So they had to reinvent the Second Law of Thermodynamics incorporating two major errors into their version of that law. The first error was to disregard the effect of gravitational potential energy on entropy, and the second error was to disregard the fact that the law applies to each independent process. Their version of the Second Law could be used to “prove” that water could flow up a mountainside provided that it flowed further down on the other side.

    They need to think, like Newton, and realize that when an apple falls off a tree then entropy increases, just as the Second Law says it will. So too does entropy increase when a molecule “falls” between collisions unless, that is, the sum of molecular gravitational potential energy and kinetic energy remains constant and there is thus a gravitationally induced temperature gradient.

    To prove their paradigm they would need to construct a Ranque Hilsch vortex tube and somehow ensure that the huge centrifugal force did not cause a huge temperature gradient in the cross-section of the tube. Until those promulgating the hoax can do that they have contrary evidence staring them in the face.
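For reference, the 288 K and 255 K figures being argued over in this comment come from standard Stefan-Boltzmann arithmetic, whatever one makes of their interpretation. A minimal check (the solar-constant and albedo values are the usual textbook approximations, and emissivity 1 is assumed):

```python
# Check the standard figures quoted above: a blackbody emitting
# ~390 W/m^2 sits near 288 K, and the ~240 W/m^2 the Earth radiates
# to space corresponds to ~255 K.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temp_from_flux(f):
    """Invert F = sigma * T^4 for an idealized (emissivity-1) emitter."""
    return (f / SIGMA) ** 0.25

S0, albedo = 1361.0, 0.3           # solar constant, planetary albedo
absorbed = S0 * (1 - albedo) / 4   # ~238 W/m^2, averaged over the sphere

print(temp_from_flux(390.0))    # surface-emission figure, ~288 K
print(temp_from_flux(absorbed)) # effective emission temperature, ~255 K
```

The arithmetic itself is uncontroversial; the thread's dispute is over what the difference between the two temperatures means.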

    • You make a lot of good points, especially gravity’s role. The GHE assumes that GHG is transparent to incoming radiation, but of course there is a lot of IR in sunlight, and that is heating the CO2 in the upper atmosphere on the way in. This warmer CO2 that did not used to be there is emitting IR at the same rate it is absorbing, in both directions, day and night. This warmer outer layer is somewhat immune to convection, stratifying instead because of its lower density. It sits like an electric blanket on top of a bedspread, losing a lot of energy to the cold night (even cloudy ones), especially if CO2 density is at saturation of opacity above cloud height. A lot of the incoming heat that used to penetrate much further is now intercepted in the day and emitted at night. Is this part of the models?

    • Does a warmer upper atmosphere make clouds condense at a higher average height? If so, their reflective influence by day is again increased, as less atmosphere is heated above and more is shaded below. At night, heat would be less trapped near the surface, and again the clouds would have their warmth from condensation less insulated from outward emission.

    • Matthew R Marler

      Group of Physicists: Their version of the Second Law could be used to “prove” that water could flow up a mountainside provided that it flowed further down on the other side.

      It would have to be in enclosed watertight pipes, as with siphoning. Or is there a maximum height (say 14 feet of elevation) over which that would work?

      The Earth receives a steady stream of energy from the sun, some of which gets stored in tight chemical bonds in cellulose, sugar and bones. What are the implications of this steady stream of incoming energy for second law arguments in the atmosphere?

      Until those promulgating the hoax can do that they have contrary evidence staring them in the face.

      Could you avoid the word “hoax”, which, like “lie”, implies that they know that what they are promulgating is wrong?

    • Group  of  physicists

        So is it appropriate to call the claim that water vapor and carbon dioxide warm the surface a hoax, a lie and a fraud? I say it is, because climatologists must have questioned the role of water vapor and realised that the sensitivity cannot possibly be 10 to 15 degrees of warming for each 1% in the atmosphere.

        Climatologists have tried to kid the world that they know more about thermodynamics than do physicists. They have ignored how physicists define black bodies and then treated the Earth’s surface, the thin transparent surface layer of the oceans and even layers in the atmosphere as if they are black bodies. They have ignored what Loschmidt said about the autonomous formation of a temperature gradient resulting from the force of gravity acting on molecules in flight between collisions.

        In fact they have published papers (like Verkley et al) which use equations pertaining to thermodynamic potentials which are specifically derived by ignoring gravitational potential energy. So they “prove”, using wrong assumptions, that the assumptions are true, namely that there would be isothermal conditions in the absence of greenhouse gases. That state is not what the Second Law of Thermodynamics indicates will evolve. Instead gravity does set up a temperature gradient, and the resulting sensible heat transfers into the surface explain the observed surface temperatures on planets like Earth and Venus.

  33. Alexander Bakker astutely asked:

    “Are there credible methods for the quantitative estimation of climate response at all?”

    Mosher seems to have all the answers, maybe he can take this one on.

  34. irritated engineer

    This excerpt came from a project I am working on. The computer model, eQuest/DOE2, is used for analyzing existing building energy usage:

    “Some calibration of data was used to establish the baseline for energy modeling purposes. You can see
    that after calibration, it is extremely close to the most recent consumption data. (Annual is close but monthly is off by as much as 100%.)

    Internal loads (lighting, equipment, and infiltration) were adjusted along with (occupancy) schedules as tuning
    variables to achieve a match with gas and electric use history.”

    Now the kicker:
    “The baseline model matches actual use quite well on an annual basis, but varies significantly on a monthly
    basis. This is primarily judged to be driven by the unique event-driven nature of the building. Activity and event schedules as input into the model were simplified, and
    did not align with the specific event schedules that occurred during the calibration period.
    Additional information was requested about operating hours of the heating and cooling plant and the ice
    making chillers, to allow greater resolution and analysis, but that information is either not available or was
    not made available.
    Nevertheless, significant effort was invested in matching baseline model input to the best definition we
    could derive as to how the facility is constructed and operated, and the model is deemed to be a reliable
    tool for predicting average energy savings associated with the measures analyzed.”

    A potential project worth between $5 million and $30 million is being decided partly on this model output. The building leaks air like a sieve due to age, failed window gasketing, wall caulking, and failed HVAC dampers being open, plus general neglect. This could account for as much as 30% of recent energy usage.

    The modeler has a paragraph in the report that states they have no liability associated with work results.

    • Who is the idiot who agreed to pay for that model?

    • I should have asked-(to stay out of moderation)

      Who agreed to pay for such a model? Does it perform per what was agreed upon?

      • irritated engineer

        Owner is paying for it, they believe it is the right method since everyone in the industry says the same thing. I can’t fault them or much of the industry since no one has ever challenged the issue before. They (owners) always wonder afterwards why the promised realization rate rarely gets met.
        The model is “calibrated, or tuned” to a false energy-use baseline, so how can the results be usable?

        Why not fix the problems and let the building run for 18 months to get a new realistic baseline before looking at modeling?

      • As a model it is poorly representing what is happening in the building. The specification defining how well the model is to perform seems to have been poorly written.

      • irritated engineer

        My initial recommendation was to not even spend the money to run the model since its output would be invalid. Unfortunately the building industry has become addicted to this approach to looking at existing buildings. Many energy codes now require it.
        No one talks about the problems associated with it.
        I had one case where the modeler increased the chiller (heat recovery) efficiency figure from 0.9 kW/ton to 1.6 kW/ton. They also increased operating hours from 3,120/yr to 4,300/yr. Took 3 weeks to dig this info out of the modeler. The property management kicked them out once all this was presented.
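        A quick arithmetic sketch of why padding a baseline matters. The 0.9 to 1.6 kW/ton and 3,120 to 4,300 hr/yr changes are the ones from the case above; the tonnage and retrofit figures are hypothetical, purely for illustration:

```python
# The 0.9 -> 1.6 kW/ton and 3,120 -> 4,300 hr/yr changes are from the
# case described above; tonnage and retrofit figures are hypothetical.
tons = 500                   # assumed chiller capacity
retrofit_kw_per_ton = 0.6    # assumed post-retrofit efficiency
retrofit_hours = 3120

def annual_kwh(kw_per_ton, hours):
    """Annual chiller energy use for a constant load."""
    return tons * kw_per_ton * hours

honest_baseline = annual_kwh(0.9, 3120)   # figures before the adjustment
padded_baseline = annual_kwh(1.6, 4300)   # figures after the adjustment
retrofit = annual_kwh(retrofit_kw_per_ton, retrofit_hours)

honest_savings = honest_baseline - retrofit   # 468,000 kWh/yr
padded_savings = padded_baseline - retrofit   # 2,504,000 kWh/yr
```

        The identical retrofit “saves” more than five times as much against the padded baseline, which is the whole game.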

      • Cap’n’s got a ride for you. Let me bait the hook.
        ============

      • Have you ever talked to the people who prepare environmental impact reports? I’ve met some very disillusioned folks over the years.

      • irritated engineer, “Why not fix the problems and let the building run for 18 months to get a new realistic baseline before looking at modeling?”

        That is just way too simple; it will never fly. That’s why I prefer fishing :)

        btw, just about every building complex has/had pretty comprehensive TAB reports that detailed “as installed” versus “as designed” performance. The TAB company can survey the system in a fraction of the time and cost while “fixing” – as in re-balancing – the systems by finding out which dampers are failed, belts are slipping, RPMs are off, fans are rotating backwards, pump flows that are off, etc. Then with that information engineers might be less irritated.

    • Dear IE, why don’t the computer controlled elevators lurk at the ground floor entrance of our brand new building in the morning?

      • irritated engineer

        They are optimized not to lurk. You input the floor you want on a local screen, correct? The system determines which cab is closest to you, as well as others who may be on adjacent floors. It then scopes out a path of least distance to get everyone to their floor.
        Having idle cabs rest at their last stop is most energy efficient.
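        The dispatch logic just described can be sketched roughly as below; the greedy nearest-cab rule is a simplification (real group controllers optimize a fuller cost function over all pending calls):

```python
# Destination dispatch, simplified: each hall call (entered on the local
# screen) is assigned to the idle cab closest to the caller's floor.
def assign_cab(cab_floors, origin_floor):
    """Return the index of the cab with the shortest trip to the caller."""
    return min(range(len(cab_floors)),
               key=lambda i: abs(cab_floors[i] - origin_floor))

cabs = [1, 7, 12]       # idle cabs resting where they last stopped
assign_cab(cabs, 2)     # caller on floor 2 is served by the cab at floor 1
```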

        Big question – Where are you located regionally, and do you have a lot of infiltration into the lobby in the morning through the doors? Airflow into the elevator shafts? This is a preliminary symptom that you have envelope leaks. The problem is not the incoming cold air but the outgoing (somewhere higher in the bldg) warm air you already paid for. The other prime symptom for this problem is a sign on the exit man-doors telling you to use the vestibules or revolving doors to exit the bldg.

      • blueice2hotsea

        Having idle cabs rest at their last stop is most energy efficient.

        DocMartyn’s point is more about improving transportation efficiency (and perhaps annoyance at waiting for an elevator at ground floor when late for work :). Improved transportation efficiency could potentially allow fewer elevator shafts and thus improved energy efficiency. No?

      • blueice2hotsea

        Duh. # of shafts would be sized for peak traffic. So ‘no’, accommodating morning stragglers does not improve energy efficiency.

    • Somebody forgot to set the clock?

    • irritated engineer

      OT but a response anyway:
      CaptDallas says:
      “btw, just about every building complex has/had pretty comprehensive TAB reports that detailed “as installed” versus “as designed” performance. The TAB company can survey the system in a fraction of the time and cost while “fixing” – as in re-balancing – the systems by finding out which dampers are failed, belts are slipping, RPMs are off, fans are rotating backwards, pump flows that are off, etc. Then with that information engineers might be less irritated.”

      This view is typical of the industry problem: BIAS, BLINDERS, and BLUNDERS.

      Companies only look for issues they can sell their specific services for, they are not capable of seeing the building as a whole – WHOLISM.

      What would happen to TAB/CX/engineers/contractors/modelers if every building held off any work for 18 months after fixing envelope and associated issues? Most would fold.

      “Fraction of the time and cost” – as compared to what?

      TAB, and CX, is sometimes complicit in the original problem. I’ve seen reports that intentionally hide engineering/construction problems to prevent liability issues from surfacing. Companies who expose such issues tend to have short lifespans in the community they serve.
      Who says the original TAB/CX report, or latest version, is valid for the current tenants? Who says fan speed or pump flow is wrong?
      Who usually makes changes to the systems? The operators, and why? Typically because of tenant complaints. Making changes back without acknowledging this will just result in future tenant complaints, which will force the operator to make the system change again back to where it was before the TAB/CX contractor came in and spent the owner’s money.
      TAB/CX companies are not design engineers and typically do not have liability insurance to make system design changes.

  35. The Sun is the 13th Floor of Western Climate Science

  36. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. http://www.nature.com/ngeo/journal/v5/n4/full/ngeo1430.html

    These thousands of ‘solutions’ of the same model are constrained to those that reproduce recent temperature change. But two questions remain.

    1. Which solution do you choose to represent the model in the grand opportunistic IPCC ensemble? 1.4 or 3 K by 2050 – or something arbitrarily mid-range?

    2. Given that incomplete understanding of three aspects of the climate system – equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing – and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century, and that these uncertainties must include large changes in energy dynamics from changes in albedo and water vapour, how do we know that any of the solutions is a realistic projection?

    There are two critical issues with models: divergence of solutions from arbitrarily close initial starting points – sensitive dependence – and the implausibility of realistic representation of physical processes and couplings – structural instability. Even minor changes in processes and couplings lead to unpredictable divergence of solutions. The first results in irreducible imprecision; the second undermines the credibility of the entire exercise.

    Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientists’ expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.
    http://www.pnas.org/content/104/21/8709.full
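    Both properties show up in even the simplest chaotic toy model. The sketch below uses the standard Lorenz-63 equations with a crude Euler scheme (my illustration, not a GCM): a tiny initial-condition tweak and a tiny parameter tweak each produce divergence on the scale of the attractor itself:

```python
def x_history(x, y, z, rho, n=20000, dt=0.01, sigma=10.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz-63 system; return the sampled x(t)."""
    xs = []
    for _ in range(n):
        x, y, z = (x + sigma * (y - x) * dt,
                   y + (x * (rho - z) - y) * dt,
                   z + (x * y - beta * z) * dt)
        xs.append(x)
    return xs

base = x_history(1.0, 1.0, 1.0, rho=28.0)
ic_tweak = x_history(1.0, 1.0, 1.0 + 1e-9, rho=28.0)   # sensitive dependence
par_tweak = x_history(1.0, 1.0, 1.0, rho=28.0001)      # structural instability

ic_div = max(abs(a - b) for a, b in zip(base, ic_tweak))
par_div = max(abs(a - b) for a, b in zip(base, par_tweak))
# both divergences grow to the size of the attractor (tens of units)
```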

    • # Divergence problem.# Attribution problem
      # Uncertainty problem. # Waning public
      confidence problem.
      Must be perturbing for them. Lewandowsky might
      need ter do a new, er, study on cli-sci hang-ups
      soon. Someone give him some more funding.

    • I would like to see this model’s results graphed out 10,000 more years. Has CO2 saved us from the inevitable next ice age, (due any century now) or not?

      • “I would like to see this model’s results graphed out 10,000 more years.”

        just color it all in.;-))

  37. Ah, here we go again.

    Anyone who claims that an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system (where we don’t know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones) – hence subject to, inter alia, extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman – possibly both.

    Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

    You can add as much computing power as you like, the result is purely to produce the wrong answer faster.

    As to “equilibrium climate sensitivity”, that is meaningless, such a system as the Earth’s climate which is in a continuous state of perturbation from an unknown number of influences and incorporates numerous feedbacks which involve those perturbing influences can never reach equilibrium.

    • “As to “equilibrium climate sensitivity”, that is meaningless, such a system as the Earth’s climate which is in a continuous state of perturbation from an unknown number of influences and incorporates numerous feedbacks which involve those perturbing influences can never reach equilibrium.”

      Agreed. There is the concept of a linearized model and a climate sensitivity of that linearization. And, of course, every different state of the climate has its own linearization. So there is in fact a whole continuum of climate sensitivities….

      But I guess stating the sensitivity in that way would bring things too close to math, which has the tendency to constrain one’s conclusions, so, jettison that!

  38. High on the priority list would be educating people about what can ever be expected from GCM’s. They are excellent at uncovering dynamics and displaying long-term trends, and actually have far less systematic errors than many “skeptics” would posit:

    http://phys.org/news/2015-02-global-slowdown-systematic-errors-climate.html

    • Gates
      Can you tell me where the climate will most likely be less favorable in 20 years as a result of AGW, based on the output of a GCM? What climatic conditions will be worse for humans, and by how much, in the location you select? Where will the climate improve???

      How in your opinion are these models useful for making government policy in any particular country???

      • Rob,

        Sure thing, as there are a few real obvious ones:

        1) Investing in beach front property in many areas of the world is increasingly a bad proposition and will get harder and harder to get insured.
        2) I would not invest in any activity which requires a regular sustained water supply in many areas. (CA and SW USA, water is going to be a big issue– even bigger than it has been up to now)
        3) Don’t build in areas requiring solid permafrost (i.e. the sub-Arctic and Arctic) for foundational support. Very bad choice.
        4) If you live in New Orleans, might want to plan to migrate elsewhere. Generally going to be a lost cause, as many coastal areas will be; only that city is both sinking and seeing the sea level rise.
        5) Alaska & Siberia – ouch. Forest fires will be generally on the increase.
        6) Australia – Hot & Hotter. Sorry Aussies, the decades and century ahead look most unpleasant.

        The GCM’s are exceptionally useful for dictating policy. We have a general warming trend, general intensification of the hydrological cycle as accompanies all rising GH gas climates, so harden your systems to expect these things.

      • Group of physicists

        What R. Gates ought to invest in is a course in physics. There is no valid physics which can explain any warming of Earth’s surface by water vapor or carbon dioxide. The surface temperature is what it is because the force of gravity induces a temperature gradient that is the state of thermodynamic equilibrium which the Second Law of Thermodynamics states will evolve autonomously as maximum entropy is approached.

      • nottawa rafter

        Gates
        Given Rob’s 20-year test and using the CU sea level rise of 3.2 mm/yr, I would be very comfortable investing in beach front property. The IPCC projections made in 1990 overshot the 2015 level by 100%. There is still no acceleration in the rate of rise in the CU data.

        In lieu of beach front property as an investment, you may want to consider Danish bonds with a negative yield.

        Sounds peachy.
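        For what that trend implies over the 20-year test period, as a quick check (my arithmetic, using the 3.2 mm/yr figure cited above):

```python
# 20 years at the CU satellite-altimetry trend of 3.2 mm/yr cited above
rate_mm_per_yr = 3.2
rise_mm = rate_mm_per_yr * 20      # about 64 mm
rise_inches = rise_mm / 25.4       # roughly two and a half inches
```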

      • “There is no valid physics which can explain any warming of Earth’s surface by water vapor or carbon dioxide.”
        ______
        I realize now why I’ve been staying away from CE. Thank you for the reminder!

      • Gates

        Gates
        What you have written is not based on the output of GCM’s. You are merely writing your concerns about AGW.
        Are you claiming that the output of a GCM has led you to believe that the rate of sea level rise will be increasing beyond the current observed rate? Which one are you referencing?
        “ I would not invest in any activity which requires a regular sustained water supply in many areas. (CA and SW USA, water is going to be a big issue– even bigger than it has been up to now)”
        You have written more general bologna. You seem to be taking any generally unfavorable weather condition and claiming it will get worse faster due to AGW.
        Gates writes- “The GCM’s are exceptionally useful for dictating policy”
        WRONG- Government policy involves knowing reasonably accurately what is likely to occur over the next few decades. For a GCM to be useful it would need to be able to reasonably accurately forecast what regions within nations will get substantially more vs. less rainfall. They are not reliable for this purpose are they?

        Gates writes- “general intensification of the hydrological cycle as accompanies all rising GH gas climates”
        You make many unsupportable claims.

      • “Gates writes- “general intensification of the hydrological cycle as accompanies all rising GH gas climates”
        You make many unsupportable claims.”
        _____
        Suggest you do a bit more research on the subject of natural feedbacks to rising GH gases before making this ignorant claim. The rock-carbon cycle and intensification of the hydrological cycle are natural ways for the sequestration of carbon to occur. Problem is, the Human Carbon Volcano has vastly overwhelmed this natural negative feedback, as each is working on completely different time scales. The net result is that, without significant downscaling of the rate at which humans are transferring carbon to the atmosphere, we’ll have to commit to serious sequestration of carbon ourselves.

      • Gates

      I have read the theory. I have also not read anything reliable that shows that extreme weather events are actually increasing. Some (alarmists) try to use the value of property damages over time, but that doesn’t take inflation into account. What observations are you referencing to confirm the theory???

      • Group of physicists

        R Gates: If you wish to debate us regarding the physics here feel free to do so, but note the ‘Evidence’ page which supports what we say.

    • Matthew R Marler

      R. Gates: The rock-carbon cycle and intensification of the hydrological cycle are natural ways for the sequestration of carbon to occur.

      How much additional energy (or power, if you prefer) is consumed by the intensification of the hydrological cycle of which you just wrote?

    • Matthew R Marler

      R. Gates: They are excellent at uncovering dynamics and displaying long-term trends, and actually have far less systematic errors than many “skeptics” would posit:

      Where is the evidence that the display of long-term trends by GCMs has been “excellent”?

    • @gates
      But you do agree that climate models diverge from reality pretty fast, like in a month or so, yes?
      I mean, otherwise we could predict the weather long term.

      So, how is it that models which diverge completely from the solution still provide useful information?

      • “But you do agree that climate models diverge from reality pretty fast, like in a month or so, yes?”
        ______
        Well, a month is a bit too short, but within a few years a model will diverge from the actual evolution of the climate system, absolutely. But this divergence is not a reflection of faulty dynamics but of natural deterministic yet chaotic systems. Thus, even if we knew every single possible dynamic associated with a climate system, such a complete dynamical model would still diverge from reality eventually. This was Lorenz’s great contribution.

        I’ve used this analogy many times, but I’ll state it one more time. Imagine a cloud of dust floating in your living room. Suppose we used a model to try and predict the path of a single particle of dust, and we knew every dynamic associated with what will cause that dust particle to move: gravity, air currents, static attraction, etc. Even knowing all of that, our most powerful supercomputer could never tell you exactly where that particle of dust will be even 10 seconds later. That particle represents the evolution of a climate system. But here’s the essential point: though the exact path of any particle (or any real world climate system) is unpredictable, a good model can tell you the rate of accumulation of dust on a table, because it knows all the net forcings involved.
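        The trajectory-versus-statistics distinction in that analogy can be illustrated with the standard Lorenz-63 toy system (my sketch, crude Euler scheme, and no claim that a GCM is this well behaved): two runs with a minute initial difference end up in unrelated states, yet their long-run averages nearly coincide:

```python
def lorenz_run(z0, n=50000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate Lorenz-63; return the final x and the long-run mean z."""
    x, y, z = 1.0, 1.0, z0
    zs = []
    for _ in range(n):
        x, y, z = (x + sigma * (y - x) * dt,
                   y + (x * (rho - z) - y) * dt,
                   z + (x * y - beta * z) * dt)
        zs.append(z)
    tail = zs[5000:]                 # discard the initial transient
    return x, sum(tail) / len(tail)

x_a, mean_a = lorenz_run(1.0)
x_b, mean_b = lorenz_run(1.0 + 1e-8)
# x_a and x_b end up unrelated (the particle's path), while mean_a and
# mean_b nearly coincide (the dust accumulating on the table)
```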

      • And when the dust stops accumulating on the table at the predicted rate you know your model is busted. You can tune it up and try again. As long as somebody is paying for it.

      • “And when the dust stops accumulating on the table at the predicted rate you know your model is busted.”
        _____
        If that were to happen; but energy has been accumulating in the climate system at a rate very close to model runs. Unfortunately, the undue attention on tropospheric sensible heat and the inability to reliably measure energy elsewhere in the system have been a handicap until the past decade or so, during which time our ability to measure net energy accumulation in the system has improved greatly. As the “hiatus” has shown us, energy moving around within the system can lead many to wrongfully suggest that GCM model dynamics related to net climate energy are way off, but that is not the case:

        http://phys.org/news/2015-02-global-slowdown-systematic-errors-climate.html

      • The dust done missed the table and ended up at the bottom uv da ocean. Gonna have to toon dose models, agin.

        Cut off the funding and they will have to get jobs driving cabs and waiting tables. Somebody should ask them why they need so many freaking models of the same system with the same freaking physics.

      • R. Gates, In case you wonder why, check this out.

        CMIP5 pi is their pre-industrial control. Oppo et al 2009 is a decadal binned average 50 year smoothed IPWP reconstruction. You know how well the IPWP correlates with temperature.

        That little flat blue line is how well the models get absolute temperature and natural variability. Not getting tropical SST even close would be a systemic problem, I believe, on a planet 70% covered with water.

      • Capt., they are going to have to toon that up, now that you have brought it to their attention. That Oppo looks like a nice lady and she actually went out on the water to do some research:

        http://www.whoi.edu/main/news-releases/2009?tid=3622&cid=59106

        I think gatesy will say that the model matches up when you average in the temps from the bottom of the ocean.

      • Don, you have seen these, right?

        Oppo and Mann

        Oppo and Lamb

        Oh and Gates’ fav,

        Marcott brought to you by NOAA wtf NOAA :)

      • I think I am going to have to promote you to majdallas-houston.

        “I’ve used this analogy many times, but I’ll state it one more time. Imagine a cloud of dust floating in your living room.”

        Well, I hear your point. However I need more than an analogy. I need a mathematical theory that tells me why climate is retrievable from a simulation that has completely diverged from the actual solution.

        Otherwise its just tilting at windmills.

        Because I disagree. By the time your model has completely diverged, taking averages does not give you the climate. It gives you the sum of many small numerical resonances integrated until they swamp the entire solution. Or, put another way, what you get is pure nonsense.

      • One more point: numerical integration errors are NOT independent; they are highly correlated. So averaging over them does not put you in the realm of independent random variables and the law of large numbers. Not at all.
        So your averages are simply going to reflect the sum of these correlated errors, and there is no reason such a thing should converge to the true average of some physical parameter.

        If such a mathematical theory exists I will delve into it. I have never heard of such a theory, and no CAGW’er has ever pointed me to one.
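        That systematic (rather than independent) character of discretisation error is easy to demonstrate on the simplest possible ODE. The sketch below (my construction, not from the comment) runs an ensemble of randomly perturbed forward-Euler integrations of dy/dt = y; the ensemble mean retains essentially the full single-run truncation bias, because every member carries the same one-sided error:

```python
import math
import random

def euler_exp(y0, dt=0.01, t_end=1.0):
    """Forward-Euler integration of dy/dt = y; the exact answer is y0 * e."""
    y = y0
    for _ in range(round(t_end / dt)):
        y += y * dt
    return y

random.seed(0)
ensemble = [euler_exp(1.0 + random.gauss(0.0, 1e-6)) for _ in range(200)]
bias = sum(ensemble) / len(ensemble) - math.e
# bias stays near the single-run truncation error (about -0.013), not zero:
# the errors are essentially identical across members, so averaging is no help
```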

      • This is the money quote for Gates paper,

        “The statistical methods used in the paper are so bad as to merit use in a class on how not to do applied statistics.

        All this paper demonstrates is that climate scientists should take some basic courses in statistics and Nature should get some competent referees.”

        Gordon Hughes

        And the ‘toon of the month award goes to Nature Magazine

  39. It is now time for skeptics to relax and watch the TRAGIC-COMEDY of would-be world tyrants and puppet scientists unfold at the limit of human comprehension:

    World leaders tried to save the world and themselves from nuclear annihilation in 1945 by:

    1. Forming the UN to take totalitarian control of society, and

    2. Changing solar and nuclear physics to hide Neutron Repulsion in cores of atoms, planets, stars and galaxies heavier than 150 atomic mass units (where nuclear structure changes [1] to neutrons in the core and neutron-proton pairs at the nuclear surface).

    At the limits of comprehension, at the intersection of spiritual and scientific knowledge, an “intelligent and creative Mind” (Max Planck) guides force fields from the Sun’s pulsar core to create and sustain every atom, life and world in the Solar System . . .

    a volume of space greater than that of ten billion, billion Earths!

    I.e., world leaders & puppet scientists tried to hide a force of creation that is incomprehensibly more powerful than anything they could have imagined.

    Whether or not we succeed, world leaders will certainly fail to control God’s force of creation.

    1. See page 3, “Solar energy,” Adv. Astronomy (submitted for on-line review, 6 JAN 2015): https://dl.dropboxusercontent.com/u/10640850/Solar_Energy_For_Review.pdf

  40. I read Bakker’s dissertation with particular attention to the justifications for the conclusion that the ’climate modelling paradigm’ is in ’crisis’.

    Question for Dr. Curry and denizens:

    What are the original aspects of this work that qualify it as a dissertation?

    With due respect for his opinions based on his own experiences, his assertions of model biases, implicit tuning, and other possible shortcomings of GCMs rely entirely on the work (and often opinions and even speculations) of others. I see no originally produced evidence to support the critique that constitutes most of Part I. Thorough discussions of most of the points he raises already exist in numerous papers and workshop discussions, with none concluding (AFAIK) that climate modeling is in ‘crisis’.

    Aside from the issue of original contribution, some of his remarks call into question his understanding of how models are constructed and their various uses. His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence it what he has to say about their shortcomings. His comment on circular reasoning (pp 21-22) makes no sense to me. He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system. (He regards the point as “trivial”; I find it absurd.) Perhaps I misunderstand.

    Bakker has some very useful things to say about the scientist/user interface, and some sound advice (if only superficially presented) on alternative tools for input to policy. These, and the papers presented in Part II would constitute by themselves a good dissertation, IMO. Part I is a provocative essay of dubious value and inferior scholarship.

    • ” his remarks call into question his understanding of how models are constructed and their various uses.”

      What uses? Spit it out.

      • R Graf – What uses? Spit it out.

        Visit Isaac Held’s blog for many examples of how GCMs and other climate models are used for diagnosis.

        If you’re interested in understanding the uses and limitations of regional climate models, here’s a good place to start; follow the links.

        See Dr. Curry’s previous post.

        See R. Gates comment at February 2, 2015 at 5:55 pm

        There’s much more; you can find it.

    • I await with interest, some amusement, and not much positive expectation, to see some “denizens” give substantive responses to Pat’s questions.

    • He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system.

      The problem is: is the system enumerable?

      If it is, then it is insolvable, hence trivial.

    • Matthew R Marler

      Pat Cassen: What are the original aspects of this work that qualify it as a dissertation?

      It is in the European style of summarizing work that the author has already published in the peer-reviewed literature.

      These, and the papers presented in Part II would constitute by themselves a good dissertation, IMO.

      That answers your first question, at least from your point of view. Occasionally, what some readers might regard as dross has been included at the insistence of one of the committee members.

      • > It is in the European style of summarizing work that the author has already published in the peer-reviewed literature.

        So the original content of the thesis is not in the thesis, Matt?

        Even po-mos don’t dare doing that!

      • Matthew R Marler

        Willard: So the original content of the thesis is not in the thesis, Matt?

        That is true. In the US the standard for a PhD thesis is that it be original work that is “publishable”, though many, if not most, of them are not in fact published. In Europe, the criterion “publishable” is established by having a series of works actually published, and then the series is written up in a document that cites the publications. In the US, it frequently happens as well that the final thesis may be a culmination of previously published papers, but usually the thesis is approved (or not) before the submitted works have in fact appeared in print.

      • > In Europe, the criterion “publishable” is established by having a series of works actually published, and then the series is written up in a document that cites the publications.

        Thanks, MattStat. I’ll check when I’ll get the Round Tuit.

    • John Vonderlin

      Hi Pat,
      “His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence it what he has to say about their shortcomings.” I’d suggested some proofreading for anybody asserting somebody else’s thinking is muddled. Typos happen, but this is incomprehensible.

      • How about:

        > His explanation of what climate models do (footnote 8, p. 22) is pretty muddled, which does not inspire confidence regarding what he has to say about their shortcomings.

        JohnV?

        Note that Pat doesn’t claim there are typos in the thesis.

      • Pople poored bog comets?

      • UAH out for January: +0.35C. Oh burr, kaicetrophe is right around the korner.

      • Steven Mosher

        Nit pickers lose points by being less than perfect.
        glass houses and all.

        Of course willard offers charity to Pat.
        No charity for the author.

        he’s a stingy prick that willard.
        zero honor.

      • > No charity for the author.

        Which author?

        A quote might be nice.

    • Matthew –That answers your first question…

      Yes.

      what some readers might regard as dross has been included at the insistence of one of the committee members.

      That does not seem to be the case here.

      John Vonderlin – I’d suggested some proofreading for anybody asserting somebody else’s thinking is muddled. Typos happen, but this is incomprehensible.

      Point taken. “…does not inspire confidence in what he has to say…”

    • I believe he means by circular reasoning that the model embodies the hypothesized effect of CO2 in such a way that more CO2, input into the model, will result in more warming. It’s not that CO2 won’t tend to increase back IR radiation, it will, but perhaps the effect of the back radiation on the oceans isn’t well modeled, or perhaps the knock-on responses aren’t properly modeled.

      As you point out, he could have been more explicit here. But that’s my take on it.

      • There are quite a few assumptions that are likely common. Looking at CMIP5, pre-industrial tropical temperatures are estimated to be about 24.75 C for 1000 AD to 1200 AD. Actual SST for the tropics during that period looks closer to 27 C +/- 1 C or so. If all the models start with a “normal” 1 to 2 degrees below actual, they would all tend to run hot.

        Since Bakker has used the models quite a bit, I imagine he has a pretty good list of common assumptions.

    • Steven Mosher

      “His comment on circular reasoning (pp 21-22) makes no sense to me.”

      your comment makes no sense to me.

      killer argument, calling willard, willlard?

      • Makes no sense.
        Makes no sense to me.
        Spot the difference.

        As bender would say,
        Next.

        ***

        Since I’ve been hailed, Pat’s claim might be more interesting if he’d quote the relevant argument. I suspect the usual one against parametrization.

      • ==> “Pat’s claim might be more interesting if he’d quote the relevant argument. I suspect the usual one against parametrization.”

        Indeed. And counter-critiques that point that out are valid.

        And then there’s the other laughable responses in the sub-thread.

        sameolsameol.

      • interestingly enough, jim2’s wasn’t that bad!

      • Willard – Pat’s claim might be more interesting if he’d quote the relevant argument.

        Here it is:

        In the lack of other Earths, ’scientific simulation’ could be proposed. The above experiment [examining other Earth’s under different forcings] could be performed by climate model simulations rather than with other Earths. Nevertheless, conclusions on the hypothesis that “increased atmospheric GHG affects the global climate” on the basis of this approach are not valid. The climate model is a mathematical formulation of the hypothesis (together with some auxiliary hypotheses and physical laws) we want to test. The hypothesis is explicitly added to the climate model. So, the hypothesis is tested by a formalisation of the hypothesis itself.

      • Steven Mosher

        a difference that makes no difference makes no difference.
        or
        a charitable interpretation of
        ‘it makes no sense”
        is
        ‘it makes no sense to me”

        But you never were big on charity

      • Steven Mosher

        Now, imagine that I read Mann’s dissertation and made those bald assertions?

        My experience with Regional climate models is pretty much the same as detailed in the dissertation.

        What is your experience like?
        What about Willard’s?
        What about Pat’s

        Judith works with ECMWF. Day to day she works with it to give advice to folks who want direction. You think her experience using models to give advice might be more relevant than Pat what’s-his-face, or you? or Willard?

        Those of us who have actually worked with this data have reservations.
        That’s not nothing.

      • > a difference that makes no difference makes no difference.

        A difference that makes a difference does.

        Next.

      • > Now, imagine that I read Mann’s dissertation and made those bald assertions?

        Now, imagine ze Moshpit read Wegman’s report.

      • > So, the hypothesis is tested by a formalisation of the hypothesis itself.

        So now GCMs are used to test AGW.

        Fascinating.

    • Apparently his committee believed that a synthesis of these critiques coupled to a bottom-line conclusion about the use of GCMs constituted an independent contribution. I’m reminded of a history of science professor pointing out the key argument that Vesalius made about why the works of Galen were not adequate guides to human anatomy: “He studied gibbons.” Bringing that point to bear, even though everyone “knew” it, was the conceptual breakthrough needed to move on. I suspect that Vesalius’s actual beautiful anatomical drawings would have been done pretty quickly by someone else once that gibbons-aren’t-adequate-models-of-humans point was taken on board generally.

    • “He seems to be saying that no hypothesis about a system can be tested by a mathematical model of that system.”

      I haven’t read it, but if he is saying that testing a hypothesis by means of a mathematical model of that same hypothesis isn’t going to be useful, I’d have to say I agree, and it seems trivial to me.

      I don’t see regional models doing well until they can figure out if the unforced oscillations created by the GCMs are real or not and what in the models make them:

      http://www.ldeo.columbia.edu/~jsmerdon/papers/2012_jclim_karnauskasetal.pdf

    • Pat,

      We know you’re sincere and working hard in this area. Many others are too, undoubtedly. The fact that you are on this blog is, I hope, an indication you are open to fresh ideas or challenges to old ones. It is hard for lay people such as myself to challenge the usefulness of climate models without knowing what successful predictions can be made now (beyond general warming when we see El Nino). My reading sees that the earlier a model for CO2 influence was created, the further off its prediction is at present. When a forecast is blown, one expects the forecaster to provide an explanation for the unforeseeable events that sank the prediction. I don’t see that on CO2 much. I see a changing of the subject to how many warmest years we are having. Or, “How about this weird storm,” or a new spooky “vortex” is appearing. BTW, what happened to all the hurricanes Katrina was supposed to be the parade leader of?

      Is there any thought to breaking this amazingly complex puzzle down in other ways? Non-water planets’ weather should be easier to model. Is there a lot of effort currently being put into modeling Mars’ GMT? We have probes there. The atmosphere is thin, dry and mostly CO2. If models can work there, that is the place to start, I would think. Also, looking at paleoclimate to study the glaciation cycle. If paleoclimate followed just CO2 (or CO2 in any respect), that would be evidence even the public could understand. And don’t say M-cycles. A very weak, predictable and gradual force does not explain an extreme, rapid and chaotic signal. (You actually have to do a transform to coax out the M-cycle signal.) Explain paleo and you are on a solid first step.

      Is it just possible the dissertation is correct that jumping into comprehensive multi-decade climate models was a bridge too far at this point in the science?

    • “Part I is a provocative essay of dubious value and inferior scholarship.”

      1. it is provocative, perhaps explaining Judith’s point
      2. it has value.
      3. The scholarship is just fine.

      See how easy argument by assertion is.

      You get an A in being fatuous.

  41.  
    Everyone has heard by now that BAS (Bulletin of the Atomic Scientists) recently advanced the minute hand on the Doomsday Clock by 2 minutes. Now, we’re 3 minutes to midnight. This is a group of experts!

    They’re concerned about AGW leading to the demise of humanity. Basic math says, before these experts codified their current collective concern about our future, it was 5 minutes to Doomsday.

    Question: Shouldn’t Obama’s stopping of the seas from rising have bought us at least a few seconds instead of costing us 2 minutes? Is it possible the Earth’s most vulnerable have more to fear from these ‘experts’ than from America’s SUV-driving soccer moms?

     

    • Did they ever move the hands back from the 1973 scare about new ice age?

      Ironically, looking at the 800,000-yr reconstructions we are darn lucky to still be riding this relatively long inter-glacial high. The party should be over by now. If CO2 is responsible for our reprieve we should be working to conserve it so it can continue to save future generations from glaciation’s increasing threat.

      • … or, we can return to an economy built on trapping and start hunting polar bears for their fur when the productive finally draw their last breath.

    • John Smith (it's my real name)

      Doomsday Clock
      hysterical
      the new secular monks admonishing ‘repent lest the end is near’
      missing heat?
      I suggest we call it the ‘messiah heat’
      better fits the religion

  42. Quinn the Eskimo

    The three lines of evidence on which attribution rests are temperature records, both instrumental and proxy, physical understanding of climate, and models.

    These are claimed by IPCC and EPA to combine into >95% certainty of attribution.

    Trillions of dollars are being spent and huge changes in public policy are being made based on this conclusion.

    Is it sound?

    Current temperatures are not anomalous in either instrumental or geologic records. The warming that has occurred is regional, not global.

    The claimed physical understanding is a joke. There are a number of excellent quotations in this thread that show this in compelling fashion. Theory and models predict and require the hot spot, but it simply doesn’t exist in nature, as demonstrated by >50 years of balloon data and 35 years of satellite data. The consensus cannot reconcile their theory with these data, but claim they’ve got the physical understanding part nailed down pat.

    The third leg of this no-legged stool is modeling. The models that are wrong about temps, wrong about the hot spot, wrong about humidity and so much else, that are incomputable representations of the inadequate physical understanding, these are the models which are indispensable to the circular reasoning of AGW attribution.

    So, out of that stack of high-density crap we are told with >95% certainty that AGW is an urgent reality and that the world must be remade in its name. The attribution analysis is a bonfire of logical fallacies, one piled on top of another in an enormous positive feedback loop of crap. The edifice is shielded by vast clouds of bafflegab and gobbledygook spewed out by millenarian zealots funded with billions in research grants.

    The AGW geniuses wailing that AGW threatens human health and welfare 100 years from now do so even though cheap energy from fossil fuels has done more to improve human health and welfare than anything in human history. Life expectancy has doubled, and population has gone up 4x or more as direct and indirect results of cheap energy from fossil fuels and industrial civilization. In the big cold waves, for example, millions of people would freeze to death were it not for fossil fuels. But we are told that fossil fuels threaten health and welfare. 100 years from now.

    I guess compared to WWI this is not the craziest thing in history, but it’s getting up there.

  43. “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

    I seriously doubt that this statement could be produced by anyone other than a seriously committed ClimateBaller (TM).

    • In all fairness, Judith didn’t exactly say why she thought it might not get past a defense.

      Perhaps it was for the reasons Pat outlined above?:

      https://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-671003

      BTW – did I mention that I have a bridge right near Manhattan that I could let go for real cheap?

    • ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

      I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite. Personal disparagement that is mostly all that AGW groupthink cult of space cadets – such as Joshua and Michael – seem capable of.

      People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.

      https://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670985

      • Curious George

        Rob – you are right, under the assumption that the models are not wrong. That’s a very dangerous and totally unproven assumption.

        We may learn about the level of irreducible imprecision of a model – which probably has nothing to do with a real climate. That is a very poor justification for burning tons of coal to power the Yellowstone supercomputer running the model. Especially a model that overestimates the heat transfer by evaporation from tropical seas by 3%.

      • As usual you have misunderstood completely – and substituted your own reality.

      • “People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.” – Indi

        Yeah, what a clueless fool he is.

        If only he was a rolled-gold genius like yourself….

      • Well clueless fools abound – aye Michael?

        But I actually do reference actual science. Unlike yourself.

      • Curious George

        Rob – please prove that models are right. Or that irreducible imprecision of wrong models matters.

      • Thank God for you Rob.

      • As I said, George – you have misunderstood entirely – and my experience with you is that it is just not worth it. Ditto and more with Michael.

      • Curious George

        My reality is that modelers are unwilling to correct glaring errors in their models.
        https://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257

      • Rob,

        It will be great when you finally publish and show all those ‘climate scientist’ ninnies how it’s really done.

      • What would be great, Michael, is if you actually tried to understand any of the published science I routinely quote and link to – and did not simply pursue your disparagement of things you don’t understand but don’t like, on what is obviously the flimsiest basis. That also describes Joshua to a t and a w and an i and a t. Yea team.

      • Rob,

        I’m too much in awe of you to attempt anything that you can do.

        Besides, 1 thing I’m absolutely certain of, is that I’m no genius. Though I realise that you are unencumbered by such limitations.

      • I can’t think of anything sillier than the antics you and Joshua indulge in. Pat Cassen – in this case making trivial, irrelevant tribal comments – followed up inevitably by the tribal jesters.

        Don’t you get bored with yourself?

      • Rob,

        Just look at your first response to Pat.

        It’s cute that you think comments here are anything but a farce.

        It’s either nuttier than a fruitcake, or over-blown egos strutting and preening as they show off how clever they think they are.

      • Better yet – let’s repeat it so no one – but Michael – is under any illusion.

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite. Personal disparagement that is mostly all that AGW groupthink cult of space cadets – such as Joshua and Michael – seem capable of.

        People like Pat Cassen will first need to understand perturbed physics ensembles and irreducible imprecision.

        https://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670985

      • “Irreducible imprecision”

        A bright shiny bauble has caught Rob’s eye.

      • Irreducible imprecision in atmospheric and oceanic simulations – http://www.pnas.org/content/104/21/8709.long

        One of the many sources I cite.

      • Curious George

        “we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures.”

        Please outline how to distinguish a plausibly formulated model from one that is not plausibly formulated.

    • Rob Ellison – I would first of all dismiss quibbles of unoriginality and poor scholarship as purely motivated by unfounded spite.

      Hmm. Quibbles. Purely motivated. Unfounded spite. Of course.

      • Heh, envy.
        ========

        I am not prepared to even countenance unoriginality. It is a trivial quibble that, quite frankly, is not your call. Its sole purpose is to discredit the entire work on the flimsiest pretext. It is, moreover, utterly irrelevant to a blog discussion.

        Poor scholarship based on a quibble about a potted greenhouse gas theory in a footnote is totally pathetic. This presumably passed the review committee, but you take it upon yourself to querulously quibble in a blog comment. Very unimpressive indeed.

        Try addressing the substance of my comment instead.

      • Oh, there’s novelty; a warrior wondering.
        ===============

  44. Judith wrote: “I am further impressed by his thesis advisors and committee members for allowing/supporting this. Bakker notes many critical comments from his committee members. I suspect that the criticisms were more focused on strengthening the arguments, rather than ‘alarm’ over an essay that criticizes climate models. Kudos to the KNMI.”

    Controversial work can be strengthened by critical peer review. Unfortunately, most critical peer reviewers in climate are not motivated by a desire to make skeptical papers stronger.

  45. Dr. Curry – will Mr. Bakker be invited to respond to some of the questions?

  46. Throughout this paper, and throughout many other papers concerning various topics in climate science, the term climate change signal — or often just ‘climate signal’ — appears in a variety of places in a variety of technical contexts.

    This paper is no different from those others in that nowhere is there included in the paper a precise definition of what a ‘climate signal’ represents — what is its provenance, what are its descriptive characteristics, what are its dimensions, and so on.

    At any rate, I have been asking climate scientists for more than a decade to offer a precise definition for the term ‘climate signal’. So far, not a one of them has responded to this seemingly reasonable request.

    But maybe this week will be different.

    • Consider a parallel, like the definition of God. Were there no climate it would be necessary for us to invent one.
      =================

    • Beta, “At any rate, I have been asking climate scientists for more than a decade to offer a precise definition for the term ‘climate signal’.”

      It appears to be anything that can be declared “unprecedented”. Or perhaps “robustly” unprecedented :)

    • “Climate signal” is usually accompanied by a phalanx composed of mights, coulds, mays, and an occasional should.

    • An example of a signal is if you take the summer mean temperature in your region for 30 years, e.g. 1951-1980. If you plot this as a probability of a given temperature it will look like a Gaussian curve with a mean and standard deviation of nearly 1 C. Then you look at recent summers and the new mean is a standard deviation higher. That would be a climate change signal. A picture helps.
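      The shift test described in this comment can be sketched with synthetic numbers; a minimal illustration only, in which the seed, the 20 C mean, and the sample sizes are hypothetical stand-ins rather than real station data:

```python
import random
import statistics

random.seed(42)

# Synthetic baseline: 30 summer means (e.g. 1951-1980), true mean 20 C, sd 1 C.
baseline = [random.gauss(20.0, 1.0) for _ in range(30)]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

# Recent summers drawn from a distribution shifted up by one baseline sd.
recent = [random.gauss(20.0 + sigma, 1.0) for _ in range(15)]
shift = statistics.mean(recent) - mu

# The criterion: a mean shift of roughly one baseline standard deviation
# counts as a "climate change signal".
print(round(shift / sigma, 2))
```

      With a real series one would substitute observed summer means for the two synthetic samples; the test is simply whether the recent mean sits about one baseline standard deviation above the old mean.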

      • Exactly. What about it?

      • 1912 to 1944 and 1945 to 1976.

      • Curious George

        Jim D – a good point. It would be even stronger if you could show such a graph for global mean temperature. Anyway, what is the region you are showing? Is it measured data or adjusted data or homogenized data? BTW, your standard deviation looks rather narrow … are you sure it is a correct picture?

      • Why start at 1951?
        I took HadCRUT4 global, worked out adjacent 10-year averages, and subtracted the previous decade from the recent one (i.e. the last point is average(2004 to 2014) - average(1994 to 2004)).

        So now you know why they started at 1951
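        The adjacent-decade arithmetic described in this sub-thread can be sketched as follows; a toy linear-trend series (0.01 C per year, a hypothetical stand-in) replaces the actual HadCRUT4 data:

```python
# Toy annual anomaly series: a steady 0.01 C/yr trend standing in for HadCRUT4.
years = list(range(1950, 2015))
anom = [0.01 * (y - 1950) for y in years]

def decade_mean(end_idx):
    """Mean of the 10 values ending at end_idx (inclusive)."""
    return sum(anom[end_idx - 9:end_idx + 1]) / 10.0

# Each 10-year mean minus the 10-year mean of the preceding decade.
diffs = [decade_mean(i) - decade_mean(i - 10) for i in range(19, len(anom))]

# For a constant 0.01 C/yr trend every adjacent-decade difference is 0.1 C.
print(round(diffs[-1], 3))  # prints 0.1
```

        On a real series the differences would wander with the decadal variability instead of sitting at a constant value, which is exactly what the choice of start year can exploit.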

      • Climate isn’t only global. In fact the more important climate signals to people are the regional ones. The question was about climate signals. I think the graph was for North America, but that is just an example. Europe and China would have similar shifts over the last half century or more.

      • We endure the daily weather, we suffer the invocation of climate; neither much predictable.
        ===============

      • Enduring Koldie’s easier.
        More predictable.

      • I gotta rock,
        Push to the tock
        Tic of the clock
        Watch mocks all the talk.
        ================

      • John Smith (it's my real name)

        Jim D… appreciate the explanation
        so a ‘climate signal’ is observed data
        that differs from some previous set of observations
        is there a non ‘signal’ norm?
        I’m suspicious that if your graph shifted to the left it would not be considered a ‘signal’

      • With their use of the term “climate signal” a.k.a “climate change signal”, climate scientists are attempting to draw a parallel with traditional signal processing practice & theory, which has a long and valuable history in the hard sciences. But the question must be asked, does the parallel go only so deep? And if so, how deep?

        Jim D: An example of a signal is if you take the summer mean temperature in your region for 30 years, e.g. 1951-1980. If you plot this as a probability of a given temperature it will look like a Gaussian curve with a mean and standard deviation of nearly 1 C. Then you look at recent summers and the new mean is a standard deviation higher. That would be a climate change signal. A picture helps.

        It would appear from Jim D’s example that what does, or does not, constitute a ‘climate change signal’ is context sensitive.

        The implication here is that it is not possible to apply the term ‘climate signal’ universally inside a specific research paper without first defining and enumerating the specifically applicable characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ in supporting the conclusions of the research paper.

        On the surface of it, the graph Jim D references would be one example of a sub-class of climate signal.

        If there are multiple subclasses of climate signals, then what are the rules for establishing the characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ within the analytical context in which it will be used?

        Without establishing a context-specific definition for the term ‘climate signal’ as it is being used in a specific research paper, then it might be easy for a climate skeptic to assert that the term is being used merely to establish a veneer of scientific credibility which isn’t necessarily present in the research itself.

        Let’s use my graph of Central England temperature (CET) 1772-2013 as an example of how one might go about defining what a climate signal represents within the specific context in which it is being used.

        Between 1840 and 1870, CET was rising at approximately +0.3 C per decade. GMT was rising at the same time, although not quite at the same rate. Here is an example where a local temperature variation appears to be happening in rough correlation with global temperature variation. Looking at the right side of the graph, the same can be said for more recent temperature variations in CET and in GMT post 1945.

        All right, a question …. can we say with justification that any local rise in Hadley CET is a mere temperature signal, while the rise in Hadley GMT for a similar period is a certified climate change signal? Why or why not?

        Similarly, are there local climate change signals in addition to global climate change signals? If Central England is known to be warming twice as fast as the rest of the planet, does the local temperature change signal also simultaneously represent a global climate change signal?

        If we were to state that we believe the local increase in CET between 1950 and 2000 is most likely a reflection of a persistent change in global temperature which is occurring on a worldwide basis — and if we were also to believe that the localized change in CET is an example subclass of a climate signal we might call a ‘local climate change signal’ — i.e., it is something more than a mere temperature signal occurring locally — then how would we view the rise in CET of +0.4 C per decade in the period of 1810-1835 where there is no corresponding Hadley GMT data to compare with?

        Can we discount the possibility that there was a rough corresponding increase in Global Mean Temperature between 1810 and 1835, even if it wasn’t of the same magnitude as the change in CET? Why or why not? (The same question applies to all pre-1850 temperature trends in the CET record.) Moreover, if we choose to discount that possibility, what lines of evidence would we marshal to support our opinions?

        The larger point here is that within the context of a specific research paper, climate scientists must define precisely what it is they mean by a ‘temperature signal’, a ‘climate signal’, and/or a ‘climate change signal’.

        If there are multiple subclasses of climate signals which apply to different areas of climate science, then climate scientists must document the rules they use for establishing those characteristics which allow some body of reference data, plus the scientific interpretation of that data, to be labeled as a ‘climate signal’ within the analytical context of the research paper in which the term is being used.

    • Ed Hawkins did a few posts on this. They may or may not answer your questions but here is one of them

      http://www.climate-lab-book.ac.uk/2014/signal-noise-emergence/#more-2509

      I think they have spotted something other than what they think they have spotted but we all have opinions.

      Jim may want to take a look and note how well the ocean and land time of emergence match up. You wouldn’t think CO2 forcing would need beachfront property like that.

      • Hansen showed that the summer signal was clearer than the winter one because there is much more variance in winter average temperatures in the northern hemisphere. The winter curves would be broader, and the shift would be less than a standard deviation.

    • You “think” the graph was about North America and you wave your arms to include Europe and China. You are sinking, yimmy.

  47. Reblogged this on Centinel2012 and commented:
    There is one key assumption that drives the system and the physics, and that is the expected CO2 sensitivity value as established by the 1979 NAS Charney report: 1.5 C to 4.5 C, with the expected value being 3.0 C. If that 3.0 C number is different, then the GCMs don’t work; and it seems that more current papers fall in the lower range or even below it. If the CO2 sensitivity is really 0.5 C to 1.5 C with an expected value of 1.0 C, then there must be other factors besides CO2 alone, and that makes room for other factors.
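    The arithmetic behind these sensitivity ranges can be sketched with the standard logarithmic forcing relation, taking equilibrium climate sensitivity (ECS) as the warming per doubling of CO2; the concentrations below are illustrative values, not a claim about any particular scenario:

```python
import math

def warming(c, c0, ecs):
    """Equilibrium warming (C) for CO2 going from c0 to c ppm,
    assuming logarithmic forcing and ECS defined as warming per doubling."""
    return ecs * math.log(c / c0) / math.log(2.0)

# Warming for a rise from 280 to 560 ppm (one doubling) under the
# Charney values versus the lower range quoted above.
for ecs in (1.0, 1.5, 3.0, 4.5):
    print(ecs, round(warming(560.0, 280.0, ecs), 2))
```

    At one doubling the warming equals the ECS by definition, which is why moving the central value from 3.0 C to 1.0 C cuts the projected warming threefold.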

  48. It’s easier to ”predict” the winning numbers in the lottery than to ”predict” the ”localized” temp 6 months in advance! The ”overall” global temp doesn’t need any ”predicting” because the ”global” temp is always the same!

    Therefore:: those ”predictors” should stop fleecing the Urban Sheep – and make themselves rich, by predicting the winning lottery numbers.

    B] when they are wrong in ”predicting” the ”predictors” shouldn’t be blamed – BUT the Dung Beetles, that expected correct predictions in the first place. Nobody is forcing the con ”predictors” to predict and make up lies => they are guilty as hell! As long as there is demand for bullshine – there will always be producers… {{DEMAND FOR IT, CONTROLS SUPPLY}}

  49. “Global Warming is about politics and power rather than science. In science there is an attempt to clarify; in global warming language is misused in order to confuse and mislead the public.
    The misuse of language extends to the use of models. For the advocates of policies allegedly addressing global warming, the role of models is not to predict but rather to justify the claim that catastrophe is possible. As they understand, proving something to be impossible is itself almost impossible”
    Richard Lindzen

  50. Impact assessments are usually trying to look at the climate change in a small region, like a country. This is known to be difficult, and GCMs often don’t agree with each other 100% on how much the regions change, especially in factors like precipitation that might affect crops or fresh water supplies. This level of unknown must be quite frustrating for planners. Another one the Dutch would consider important is sea level, and climate models give little to no clue on how quickly the glaciers will melt. In fact, the IPCC punted and gave an estimate assuming that the glacier melt rate doesn’t accelerate. Others like the Army Corps of Engineers and NOAA, have extended the IPCC projection to range between 0.2 and 2 meters by 2100 for planning purposes. The Dutch would probably also be extending the error bars for caution. Uncertainty in models works both ways. They all underestimated the Arctic sea-ice loss rate, as another example. So, yes, if I was trying to do regional assessments of climate change, I might double the range of the model projections as a precaution, and I would not take their upper limits of dangers as a given. Regional projection with just GCMs is a risky business, even if you think you know the emission rates in the future.

    • There is no model uncertainty – they are absolutely pointless for regional ‘projections’ and for the global summations built on these grid-scale and smaller processes.

      It is utter nonsense and that is the point. 8 years and the penny finally drops? What a dumbass. I suppose it is better late than never.

      In reality you extrapolate from known conditions using a statistical model to derive levels at given return periods – and have different immunity levels for different infrastructure: high for hospitals and emergency services, for instance.

      Jimmy Dee’s typical methodology – on the other hand – is to pull it out of his arse and hope it sounds impressive.

      • Thanks for your input.

      • Well, they just support the point I made. Want to try again?

      • Which bit supports your unfounded and profoundly lacking in any reference to actual science nonsense?

        Sorry – they say that the range is even broader than the IPCC opportunistic ensembles.

        The questions remain.

        ‘Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        Which solution of many 1000’s do you pick and why? Why would you expect a correspondence with physical processes in the real world when it is hugely unknown and vastly uncertain?

        Not that you inhabit the real world Jimbo.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ op. cit.

      • You pick a broader range than the models, Rob. That’s what you have to do when planning with such uncertainty. The last thing to do is to pretend the range represented by the models is the full one.

      • No, as I said – you apply historical data and extrapolate to extremes. You don’t appeal to inherent nonsense. An engineer might even use safety margins, but base them on something real.
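For readers unfamiliar with the procedure being described, here is a minimal, entirely illustrative sketch of fitting historical annual maxima and reading off return levels. The Gumbel fit, the synthetic data, and the variable names are all assumptions for illustration, not anyone’s actual design method:

```python
import numpy as np

# Hypothetical annual-maximum water levels (m); in practice, gauged records
rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=3.0, scale=0.5, size=80)

# Method-of-moments fit of a Gumbel (EV1) distribution to the maxima
EULER_GAMMA = 0.5772156649
beta = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi   # scale
mu = annual_max.mean() - EULER_GAMMA * beta            # location

def return_level(T):
    """Level exceeded on average once every T years under the fitted Gumbel."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Different 'immunities' for different infrastructure: design to a rarer event
level_ordinary = return_level(100)    # e.g. ordinary buildings
level_critical = return_level(1000)   # e.g. hospitals, emergency services
```

The point of the sketch is only the shape of the procedure: known data in, a fitted extreme-value model, and a design level per return period.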

    • Guest blog Roger Pielke Sr.
      Are climate models ready to make regional projections?
      The question that is addressed in my post is, with respect to multi-decadal model simulations, are global and/or regional climate models ready to be used for skillful regional projections by the impacts and policymaker communities?
      This could also be asked as

      Are skillful (value-added) regional and local multi-decadal predictions of changes in climate statistics for use by the water resource, food, energy, human health and ecosystem impact communities available at present?

      As summarized in this post, the answer is NO.
      http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/

      Kevin Trenberth Was Correct – “We Do Not Have Reliable Or Regional Predictions Of Climate”
      https://pielkeclimatesci.wordpress.com/2012/05/08/kevin-trenberth-is-correct-we-do-not-have-reliable-or-regional-predictions-of-climate/

      • How water resources and ecosystems behave under climate change is a very difficult thing to get from GCMs that don’t represent the factors affecting those components well. Does that mean we don’t have a problem? No, it means we have an even bigger problem, with an unpredictability that only gets worse with more climate change. The way the uncertainty scales with the amount of climate change means we should be doing everything we can to minimize both.

    • Regional Climate Downscaling: What’s the Point?
      https://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf

      • > The dude is impervious

        And impermeable …

        Now he would have us double – nay, triple – the assumed boundaries of “scarey bear” just-in-case. No practical suggestions on how to prevent society from violently imploding while we shut everything down, of course

      • The only way to reduce uncertainty is to reduce the factors leading to climate change in the first place.

        And the ‘factors’ that led to the ‘Dust Bowl’ were???
        the Sahel Drought?
        the frequency of North Sea Storms?
        the Little Ice Age?
        the Anasazi Droughts?

        Religions have an all powerful deity, to the exclusion of others.

        It appears CO2 is filling that role for many.

    • Jim – Dr. Pielke (Senior) makes the point repeatedly that Regional Downscaled GCMs exhibit NO SKILL. “No Skill” is a term of art that you should understand if you intend to continue this line of argument.

      • brent – Well done, I was typing my reply as you were posting your (far more complete) responses.

      • See my comment above. Saying that regional climate change is less predictable doesn’t make it better in any way. It actually makes it worse, because you lose a comfort factor you might have by believing a model as a constraint. The only way to reduce uncertainty is to reduce the factors leading to climate change in the first place. Models won’t help by themselves.

      • Jim – now you are embarrassing yourself. Your statement “Models won’t help by themselves” implies that GCMs add “some” value when combined with other unspecified sources. “No Skill” means that you are better off without the models, because you don’t mislead yourself into thinking that they are adding value to the analysis.

      • David, yimmy dee is always embarrassing himself. We have tried to help him. The dude is impervious.

      • David, if you are not even going to trust the temperature change from GCMs, what would be your starting point? Paleoclimate? Extrapolation? Note that regional downscaling is a particular method that doesn’t apply to the GCMs themselves, so you have to make that distinction. They didn’t say that GCMs have no skill, nor could they know in advance whether a 50- or 100-year projection has no pattern correlation with what actually will happen to global temperatures and precipitation. Given that they can represent seasonal changes, which are an order of magnitude larger than climate changes, I would not rule them out in how they handle a 2-4 C global warming. To the extent they handle seasonal change, they can handle climate change. It’s a subset of what they do already.

      • There are thousands of feasible solutions to any model. Plus or minus 10 degrees C?

  51. It’s cute that you think comments here are anything but a farce.

    It’s either nuttier than a fruit cake, or over-blown egos strutting and preening as they show off how clever they think they are. Michael

    It’s like a movie called Bikini Girls on the Dinosaur Planet. Wow – bikinis and bad animatronics. A moment of levity – but a diet of Joshua’s and Michael’s pathological pursuit of the lowest common farce grows very tedious very quickly. I miss most of the antics – as I find I do more often and more generally. Although Joshua can’t quite believe that I can bear to pass up most of his smearings of unsavory – if not downright unseemly – commentary.

    It all makes for a quagmire very deliberately engineered to devalue, debase and trivialise. Something about trolls, why they do it and why they attribute their own base motives to others. It is not really something I am much interested in ‘deconstructing’.

    • Where did my italics go?

      It’s cute that you think comments here are anything but a farce.

      It’s either nuttier than a fruit cake, or over-blown egos strutting and preening as they show off how clever they think they are. Michael

      • Here’s a nice fruitcake trick. Use a banana bread recipe but substitute jelly for the aged bananas. If you let it sit around awhile it even smells like it has fruits and nuts in it.
        =================

  52. Pingback: Nueva tesis doctoral: el paradigma de los modelos climáticos está en crisis | PlazaMoyua.com

  53. Pingback: Nueva tesis doctoral: el paradigma de los modelos climáticos está en crisisDesde el exilio | Desde el exilio

  54. Here are some guidelines for developing Computer Simulations.

    Golden Rules of Verification, Validation, Testing, and Certification
    of Modeling and Simulation Applications


    Can anybody show me evidence that any GCM has been validated along these lines? (especially rules 4, 16 and 17)

    Steven Mosher?

    I’ve been looking but all I find is mindless newspeak like this, which only confirms my worst suspicions.

    • ‘Suspicions’? Is that what that funhouse mirror cackling I hear is called?
      =====================

    • KenW, I could find no such evidence. If you go far upthread, you will find a tuning discussion with references, explanations, and examples of what I did find. The models are parameterized for processes they don’t simulate. For NCAR CAM3, these expressly include (technical manual references) 4.5 Prognostic Condensate and Precipitation Parameterization and 4.7 Parameterization of Cloud Fraction. Both are important feedbacks.
      The CMIP5 experimental design called for submitting 10, 20, and 30 year hindcasts. Think of this as in-sample verification of the model math and parameterizations. Then it called for using the RCPs to submit (IIRC) 30 year projections. All in the ‘near term’ half of the experimental design; long term is more than a century. Think of these ‘near term projections’ as enabling out-of-sample validation. And the 18 year pause means CMIP5 has now failed validation by the >17 year temperature divergence criterion Santer established in his 2011 paper.

      • I like this one from page 490 of the BAMS pdf:

        “Users of CMIP5 model output should take note that decadal predictions with climate models are in an exploratory stage.”

      • I’ll give Naomi a little lesson in verifying your model.

        Take your pde. Take an analytic function (perhaps one that mimics a solution) and plug it into your equation. Move everything that remains to the right hand side. This becomes your forcing function. Note your boundary conditions (inherited by your choice of function).
        Now if you put these boundary conditions and this forcing into your model and solve you should get approximately your original function back. Now do a grid refinement study. Compute the L2 error between your solution and your known solution. You should start to see a convergence rate. Verify that it matches the known rate for your solution method.

        Oh, but that’s insane, you say. How can we do such a thing for a massive code like a climate simulation? This is your out, yes?

        No. This is exactly how we verified our massively parallel solvers at Sandia. Fluid solvers, fire solvers, coupled elastic material/fluid solvers, etc etc…. Solvers much more complex than a climate model.

        However, a caveat. This requires your code to be well designed and flexible. It requires using a solution framework that can be separated from your code. It requires adaptivity and that your code be grid independent.

        Long live fortran.
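The verification recipe described above – manufacture a solution, derive the forcing, solve, and confirm the observed convergence rate against the scheme’s known order – can be sketched for a toy 1-D Poisson problem. Python is used here purely for brevity; none of this code comes from the Sandia solvers mentioned:

```python
import numpy as np

def mms_error(n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 by 2nd-order finite
    differences, where f is manufactured from the chosen solution
    u(x) = sin(pi x); return the grid spacing and discrete L2 error."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)            # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)          # forcing implied by -u''
    # Standard 3-point stencil assembled densely (fine for a toy problem)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, f)
    err = np.sqrt(h * np.sum((u - np.sin(np.pi * x))**2))
    return h, err

# Grid refinement study: observed order should match the scheme's order (2)
h1, e1 = mms_error(32)
h2, e2 = mms_error(64)
rate = np.log(e1 / e2) / np.log(h1 / h2)
```

If the observed rate fails to match the scheme’s theoretical order, something in the discretization or the code is wrong – which is the whole point of the exercise.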

      • Rud, that was quite a brawl up thread, but I’m thinking on a much lower plane here. It’s not easy making software do what it’s supposed to do, even when you know EXACTLY WHAT it IS supposed to do.

        Never mind the 32 free parameters. I haven’t even got to that yet.
        (With 32 parameters you can torture any piece of software into telling you any answer you want to hear – but you could never trust anything it says.)
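KenW’s parenthetical is essentially von Neumann’s elephant: with enough free parameters you can fit anything, which is why the fit itself proves nothing. A toy illustration with synthetic data (no relation to any actual GCM parameter set):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 16)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(16)

# Sixteen free parameters for sixteen data points: the "fit" is essentially
# perfect no matter what the data are...
coeffs = np.polyfit(x, y, 15)
residual = np.abs(np.polyval(coeffs, x) - y).max()

# ...which is exactly why a perfect fit, by itself, tells you nothing
# trustworthy about the model being tuned.
```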

      • Curious George

        “Fluid solvers, fire solvers, coupled elastic material/fluid solvers, etc etc…. Solvers much more complex than a climate model.” Nick, do you really believe that a climate model should not include fluids, fires, biological influences? No, what climate modelers want to achieve is unprecedented. They just are not up to the task.

    • KenW
      I suggest rules #3, #7, #12, and #13 are critical.

      Imo, the issue is whether AGW is going to cause the climate to be significantly worse for humans and where, and when. The changes will not occur at the same time. Imo, we have no reliable data to know the answers today.

      • Actually they are all important. I’m looking for 4, 16 and 17 because that should be the easiest evidence to produce. If a major software consulting firm would put its stamp of approval (and reputation) on the fidelity of a GCM, that would show they are at least trying to be diligent. The Oreskes paper tells me they’re trying not to.

        Too bad nobody’s got what I’m looking for. I’ll just have to come back to this later. I’m sure the subject of simulations will come up again ;-))

    • “Verification and validation of numerical models of natural systems is impossible. ”

      Always good when your paper on verifying your model begins with this.

    • Hmmm…Oreskes and model validation. Maybe she changed her mind, or her mind is changing.

      http://www.nssl.noaa.gov/users/brooks/public_html/feda/papers/Oreskes1.pdf

      I can help her…

  55. Of course they know the truth about the models – or even more importantly the biases of the inputs – but they don’t like anyone pointing it out and they don’t like to admit it because turkeys don’t vote for xmas.

  56. All these issues arise because people seem to believe that it is possible to model climate deterministically. This is rather like trying to predict the behaviour of a gas by mapping the trajectory of every molecule. We have 135 years of measured temperature data. If we look at it stochastically instead of deterministically, we see that the variance spectrum of this short time series shows that it has the characteristics of “pink noise” ( = “flicker noise” = “1/f noise”) like many other naturally generated time series. There is nothing unusual about the climate of the twentieth century so why all the fuss? See http://blackjay.net/?p=161#comments .
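A minimal sketch of what checking a series for 1/f (“pink noise”) characteristics might look like, using synthetic data rather than the actual temperature record: generate pink noise by spectral shaping, then confirm the log-log periodogram slope is near −1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**14

# Shape white noise in the frequency domain: amplitude ~ 1/sqrt(f)
# gives power ~ 1/f, i.e. pink noise
white = rng.standard_normal(n)
spec = np.fft.rfft(white)
f = np.fft.rfftfreq(n)
spec[1:] /= np.sqrt(f[1:])
spec[0] = 0.0                      # remove the mean
pink = np.fft.irfft(spec, n)

# Estimate the spectral slope from the periodogram on log-log axes;
# a slope near -1 is the signature of pink (1/f) noise
psd = np.abs(np.fft.rfft(pink))**2
slope, _ = np.polyfit(np.log(f[1:]), np.log(psd[1:]), 1)
```

Applying the same slope estimate to an observed series (and comparing against slopes from many synthetic pink-noise realizations) is one way to ask whether the record is distinguishable from naturally generated 1/f variability.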

    • Interesting…pink noise. But we humans like to find phantom patterns – in star positions (constellations), clouds, and maybe climate. Is that Jay Zuz I see in the tree bark?

  57. Bakker’s choice to look critically at climate modelling is a good one. Although he has not solved the problem, he has made a useful contribution. In particular, he exposed the folly of trying to make sense of ensembles of climate models in the hope of arriving at a more accurate result. This would only work if the models were all correct except for an added random series. At any place on the earth’s surface, the seasons would repeat themselves, but they don’t because of random effects which need to be understood and applied at the correct place in the model.

    It is only by understanding and validating every physical process in the model that success will be achieved.

  58. Group of physicits

    All readers should go back to this comment (now in over 200 social media climate threads) and study carefully the errors in the assumptions for the models.

    • @Group of physicits

      Mr physicist, ”natural climate cycles” have nothing to do with the phony ”global warmings”. Climate changes for different reasons, not because of an increase / decrease in global temp = they can confuse even physicists…

      2] ”clouds” don’t increase, OR decrease the global temp!!! Clouds make the upper atmosphere warmer – on the ground cooler = overall it is the same (when for the physicist the upper atmosphere is not part of this globe)… tragic…

      b] WV, and clouds make day temp cooler on the ground, BUT nights warmer – reason monitoring only the hottest minute in 24h and ignoring all the other 1439 minutes -> is concocted for brainwashing the ignorant. Before they do a complete job on you, go there and arm yourself with real proofs and facts, please read every sentence, if you are a physicist, you’ll understand:
      https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

    • Group  of  physicists

      Stefan

      “WV, and clouds make day temp cooler on the ground, BUT nights warmer”

      Nights warmer? Not according to valid physics as here, nor according to my study of actual data that gave mean daily temperatures …

      Wettest third: Max: 30.8°C Min: 20.1°C
      Middle third: Max: 33.0°C Min: 21.2°C
      Driest third: Max: 35.7°C Min: 21.9°C

      When is your study and explanation based on the laws of physics?

      • Doesn’t the fact that WV or CO2 act to more quickly diffuse heat, even if they also retain it, also work separately to impede emission, since black-body emission energy is not linear in black-body temperature? The Second Law makes a loss every time the heat is transferred in more diffuse fashion. To make up for this loss of efficiency of emission, the whole has to rise in mean temperature to remain in budget.

        I also hypothesize this is why ocean currents transferring heat to the poles increases the global mean temp.
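The nonlinearity being invoked is the T^4 in the Stefan-Boltzmann law: because T^4 is convex, a surface with uneven temperatures radiates more than a uniform surface at the same mean temperature, so redistributing heat more evenly (at fixed total emission) forces the mean to rise. A quick numerical check of that convexity point only (not of the commenter’s wider hypothesis):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission(temp_k):
    """Black-body flux at temperature temp_k (kelvin)."""
    return SIGMA * temp_k**4

# Two half-surfaces at 250 K and 350 K vs. one uniform surface at their
# mean temperature of 300 K
uneven = 0.5 * (emission(250.0) + emission(350.0))
uniform = emission(300.0)

# Jensen's inequality for the convex T^4: the uneven surface emits more
# at the same mean temperature, so smoothing the field at fixed total
# emission requires a higher mean temperature.
```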

    • Group of physicists said: ”When is your study and explanation based on the laws of physics?”

      G’day physicist! I have studied and compared Sahara and Amazon basin places that were on the same latitude. b] I have researched coastal areas of Australia and the inland deserts. c] now I live in a place surrounded by tropical rainforest – only 13 degrees south of the equator (the last 5 weeks it was difficult to sleep at night, like in a sauna; rain started and cooled things last night).

      2] in nature, few factors affect temp in humid places – density, from which direction winds blow, the altitude of the clouds and so on. Putting numbers or formulas on such things – only shows that you don’t know what you are talking about. Experiment in the kitchen or in laboratory means nothing! b] ”learning” ANYTHING connected to the phony ”global” warming is – created for confusion and to get people away from the reality!!!

      3] I have read a couple of your comments – you are trying to prove the Warmist wrong, by proving that: Santa comes through the back door, not via the chimney… Reason I asked you to read my post and realize first that Santa is not for real – if you know physics, you will understand the post better than most – it’s a challenge: read every sentence first of that post; so we can have an interesting and constructive debate. Because: now I know what you know, but you don’t know what I know = I have an unfair advantage on you. Read the post, I’m a soft target. So far: you are using physics, to prove that: the Warmist dragon has six heads instead of seven… Please read the post first https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

      • Group  of  physicists

        Yes well my survey was far more comprehensive, and you haven’t even quoted your data and results, whereas I have published both. I was also careful to select locations that were inland by at least 100Km and in the tropics and in the hottest month when the Sun passed nearly directly overhead each day, so that angles of incidence don’t play a significant role. I also selected 15 locations from three continents within a range of altitudes from 0 to 1200m and then adjusted temperatures (using realistic lapse rates) to what they would have been at a common altitude of 600m.

        So, as I said, where is your survey showing raw data, any adjustments and the means derived? Where is your evidence of 15C° of warming for each 1% WV as implied by IPCC documentation?

        What I have written, first and foremost, is derived from valid physics.

        It is now summarized on a website here that is endorsed by our group that now numbers five persons qualified in physics or suitably knowledgeable in thermodynamics. All endorse the content of the website.

        What I also pointed out is that the radiative forcing GH paradigm implies a warming of about 15C° for each 1% of water vapor in the atmosphere. You have not produced evidence of such.

      • Group  of  physicists

        OK Stefan – Reading your linked site and commenting as I go …

        “Almost ALL the warmth is produced on the ground and in the water”

        No it’s not. On Venus next to nothing is “produced” (whatever you mean by that) on the ground, yet it’s hotter than 460°C. On Earth the solar radiation is also nowhere near sufficient to explain the surface temperatures.

        Your language is not the language of physics, so I have no idea what processes you are thinking of when you write “oxygen & nitrogen expand instantly and waste the extra heat in few minutes. Those two gases … disperse that extra heat, to be collected by the unlimited coldness in the upper atmosphere. “

        Well, seeing that oxygen and nitrogen do not radiate “that extra heat” to space, and nor does convection continue into space, you have proved to me that you don’t have a clue as to what is happening.

        And, as I’ve said many times, there is nothing that holds together any parcel of “warmed air” as it supposedly traverses the height of the troposphere. That is because molecules move randomly in all directions between collisions and the chance of any particular molecule making its way through 10Km of air without the help of wind is infinitesimal. If some upward wind blows some air in a generally upward direction, that air does not cool in accord with the adiabatic environmental lapse rate, because the wind is adding energy and so the whole process is not adiabatic.

  59. “Decision makers do need concrete answers”

    All weep.

    • And
      “Most climate change scientists are well aware of this and a feeling of discomfort is taking hold of them”

      Can this discomfort of the 97% not be measured somehow?

  60. > The ’climate modelling paradigm’ is characterized by two major axioms:

    Are there any citations?

  61. Note to TonyB
    Re CET changes
    I just checked the Met Office’s CET data: year 1659 is adjusted upward, as is the rest of the whole 350-year-long CET record; the average upward adjustment is 0.03C, and for some years it goes to just above 0.07C.
    I am afraid I appear to be to blame for your displeasure, but the Met Office is far too embarrassed to admit that they have been wrong ever since their annual CET numbers were first published, and even more so to be associated with someone called ‘vukcevic’. For more details see my email of 29 July 2014 or my comment on WUWT
    http://wattsupwiththat.com/2015/01/08/anthropogenic-warming-in-the-cet-record/

    • Vuk

      Thanks for that. I remember our conversation at the time. Whilst small, these changes are throughout the record and should have been recorded as such. I have today asked the Met Office if that will be done by way of some formal note.

      Congratulations on your diligence and if the changes were due to you, well done also to the met office for responding.
      tonyb

      • Indeed, as said at the time they are minor changes as shown here

        (apology to HR, I posted in a wrong place)

      • Hi Tony
        Thanks for the email. I now have one up on Dr. Svalgaard: he has been trying for about 2-3 years to change the 310-year-long SSN (sunspot) record and has not got the official change accepted, and here is ‘pseudo-science’ Vuk – just one email and the 350-year CET got changed. It would be nice of the Met Office if they told me too.
        I will email the appropriate people and thank them for accepting the recommendation.

      • Vuk

        Yes, there is no doubt the CET record was changed thanks to your diligence. Whilst small, the changes are still larger than the tiny differences that mark whether the hottest temperature in the record is THE hottest or merely in the top ten.

        The met office were circumspect in their handling of the 2014 data and they have been receptive to your work on CET so credit to them as well as you.

        Tonyb

      • Indeed, ranking of the warmest and the coldest years is another important point. Since not all years are adjusted up by an equal amount, even the tiniest of changes may make a difference.

      • Steven Mosher

        WHAT… WHAT WHAT
        how DARE they change history

      • Hi Steven
        I’m starting to appreciate your sense of humour.

        Linear regression correlation of the two statements (see italics): R^2 = 0.65 within 95% certainty, but if the first had been correctly phrased then R^2 increases to 0.75, again within 95% certainty.

        Vukcevic July 2014: Since monthly data is made of the daily numbers and months are of different length, I recalculated the annual numbers, using weighting for each month’s data, within the annual composite, according to number of days in the month concerned. This method gives annual data which is fractionally higher, mainly due to short February (28 or 29 days). Differences are minor, but still important, maximum difference is ~ 0.07 and minimum 0.01 degrees C. The month-weighted data calculation is the correct method.

        MetOffice 2015: Because February is a shorter month than all the others, and is usually colder than most all other months, our previous method, which was giving February equal weight alongside all other months, caused the annual temperature values to be pulled down, i.e. giving estimated annual values which were very slightly too cold (the difference varying between 0.01 and 0.07 degC.

        On matter of models you defend so valiantly
        using (sadly totally ignored) planet’s magnetic model I get results which don’t entirely conform to your view
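The weighting correction vukcevic describes above is easy to reproduce: weight each monthly mean by its number of days instead of averaging the twelve months equally. The monthly values below are hypothetical, not actual CET data:

```python
# Hypothetical monthly mean temperatures (degC) for a non-leap year
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
monthly = [3.5, 3.9, 5.7, 8.0, 11.2, 14.1, 16.3, 16.1, 13.6, 10.3, 6.5, 4.2]

simple = sum(monthly) / 12.0                                      # equal weight per month
weighted = sum(d * t for d, t in zip(days, monthly)) / sum(days)  # day-weighted

# A short (and usually cold) February drags the equal-weight mean down,
# so the day-weighted annual value comes out fractionally higher
diff = weighted - simple
```

With these illustrative numbers the difference is a few hundredths of a degree, the same order as the 0.01–0.07 degC corrections quoted by the Met Office.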

      • Steven Mosher

        well vuk

        I am having fun with the people who questioned whether a historical
        record could be improved.

        They will never admit their conspiratorial ideation.

        but they are kooks

      • Mosh

        Vuk’s method of calculating is more accurate. Kudos to him for suggesting it and kudos to the Met Office for accepting it. Not the same as using unreliable or unlikely data in the first place which then gets manipulated to turn it into gold.

        I am currently watching one of the links to a lecture you posted from Cambridge that I thought included Tamsin. Two thirds of the way through and as yet no Tamsin.

        (sorry did not watch the other 22 videos you linked to)

        tonyb

      • Hey Mosh

        False pretences! The video has finished. I had thought it was to be Tamsin presenting, but it was someone else presenting her work. I’m not sure he was a bona fide scientist as he wasn’t wearing a white lab coat with an array of pens sticking out of his pocket, nor thick black-framed glasses….

        BTW, I do not believe in conspiracies nor that all climate scientists are idiots. Some are activists, some are over dominant and some overplay their hand, but I wish sceptics would stop believing that they automatically know more than someone who has studied the subject for decades and that everyone is in on some giant hoax and a conspiracy to change the world.
        tonyb

      • nottawa rafter

        Tony, I agree. Skeptics should stick to the science. They lose the high road when they don’t.

      • The point Tony is simple.
        Skeptics jumped to a conclusion.
        The data changed therefore it must be bogus.
        Once they see that one of their own drove the
        Change they shut up.
        Nothing here move along.

        Problem. Where is the apology
        Where is the accountability
        Where is Willis demanding that other skeptics
        Denounce those who jumped to conclusions.

      • Oh, my Gaia, moshe; make me dig up the etymology of ‘apologia’. I’m looking for new species of dung beetle. Oh, that’s entomology.
        ==============

      • mosh

        You are clearly lumping all sceptics together as if we were one monolithic block who all believe the same thing. Clearly the sceptical world incorporates all sorts of viewpoints.

        As an example of wrongly seeing us all as the same you say;

        ‘Where is Willis demanding that other sceptics Denounce those who jumped to conclusions.’

        Willis certainly does not speak for me. Neither does Anthony Watts nor Heartland nor Christopher Monckton.

        You SOMETIMES do. Judith SOMETIMES Does. The Met office SOMETIMES does. I often do…

        But ‘I am not a number I am a free man.’

        http://www.imdb.com/title/tt0061287/quotes

        tonyb

    • @ vukcevic, tonyb, mosher, nottawa rafter, et al

      “…….but I wish sceptics would stop believing that they automatically know more than someone who has studied the subject for decades and that everyone is in on some giant hoax and a conspiracy to change the world.
      tonyb”

      “but they are kooks”

      “Indeed, ranking of the warmest and the coldest years is another important point. ”

      “Tony, I agree. Skeptics should stick to the science. They lose the high road when they don’t.”

      So, all of you agree that the ‘Annual Temperature of the Earth (TOE)’ is precisely defined and that we have a historical data base with sufficient coverage and accuracy that allows us to rank the years since 1880 with hundredths of a degree precision, so that the science alone justifies worldwide headlines announcing that 2014 was the warmest year since 1880, by 0.02 degrees? And that the inevitable, scientific conclusion from this blistering record, as widely reported, is that ACO2 is causing the TOE to rise rapidly and will result in planetary catastrophe unless immediate, world wide, coordinated action is taken to curb it?

      Or is it possible that other, non-scientific factors may influence the headlines, the attribution conclusions, and the recommendations for action, but skeptics should just not mention them? In an environment where every attempt by skeptics to introduce conflicting, SCIENTIFIC data is instantly and thoroughly refuted by ‘KOCH BROTHERS! KOCH BROTHERS! from those who have studied the subject for decades and are NOT engaged in a giant hoax or a conspiracy?

      I am not the only ‘unscientific kook’ who has noticed that the Patron Saint of the ‘Climate Science Consensus’ is Saul Alinsky, NOT Karl Popper.

      “……that everyone is in on some giant hoax and a conspiracy to change the world.” The key word Tony is ‘everyone’. It doesn’t have to be ‘everyone’ in the climate science field, and clearly isn’t. In fact, the very definition of ‘conspiracy’ implies that ‘everyone’, or even MOST are NOT ‘in on it’. Just the ones driving the boat.

      I give you this from a formerly respected climate scientist who has first hand experience with the non-conspiracy:

      “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

      Nope; no conspiracy here. Only unscientific kooks could possibly suspect one.

      • Bob

        I have written a dozen articles on historic temperatures and the over-confidence we ascribe to them, and have said numerous times that a global average (of almost anything) means that the important regional nuances are ignored.

        I think scientists may use potentially bad data, such as global SST’s to 1850. I think they are too ready to believe the data they habitually use for their models is scientifically derived, accurate, and fit for purpose. I think many have no historical perspective and aren’t aware of previous periods of cold or warmth or changing sea levels etc and therefore come to confident conclusions based on less than a full picture of the climate.

        It is the bad science arising from bad data -as we see it- that needs to be challenged, together with asking that greater historical context is used.

        EVERYONE is not in on a giant hoax or conspiracy. SOME do have much greater influence than they should. SOME are activists and over promote their findings, which are not always as robust as they claim.

        But that merely reinforces my point that we are not dealing with idiots who are ALL intent on changing the world order by making up things according to some grand plan.

        What it means is that the uncertainty monster runs amok amongst much of climate science, but that many do not recognise the creature and SOME would prefer to pretend it does not even exist.

        There are too many of those people involved in the climate science industry who won’t admit to uncertainties in the business or even that there are large and fundamental gaps in our knowledge.

        The only time I have ever heard a senior scientist say ‘we don’t know’ is when Thomas Stocker admitted that we didn’t know the temperatures of the deep oceans as we didn’t have the technology to measure it.

        The :’don’t know’ monster needs to be added to Judith’s climate science menagerie as its every bit as active as its stable-mates.

        tonyb

      • It is an ‘Extraordinary Popular Delusion and Madness of the Herd’. Sure, there were those bellowing early, rightly in view of the ominous portents. But the madness that seized the herd needed little breathing together, and now the madness is worse than the portents, which have fizzled out like summer lightning.
        ===================

      • “You are a slow learner, Kim.”
        “How can I help it? How can I help but see what is in front of my eyes? Two and two are four.”
        “Sometimes, Kim. Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once. You must try harder. It is not easy to become sane.”

      • @ tonyb

        Hi Tony.

        “I have written a dozen articles on historic temperatures and the over confidence we ascribe to it, and said numerous times that the Global average (of almost anything) means that the important regional nuances are ignored.”

        I know you have, and for a long time in fact you have been my ‘go to’ guy on the site when I need a dose of sanity.

        But do you really believe this?:

        “I think many have no historical perspective and aren’t aware of previous periods of cold or warmth or changing sea levels etc and therefore come to confident conclusions based on less than a full picture of the climate.”

        That people with PhDs in climate-related fields, who have worked successfully for years to advance to the pointy end of the climate science pyramid, aren’t aware of the things you list above?

        In your drive to present a ‘reasoned’ perspective are you willing to overlook the blatantly obvious fact that Climate Science writ large DOES coordinate efforts to suppress ANY dissension from the ‘party line’ and DOES do everything in its power to destroy apostates, personally and professionally, and DOES work closely with (exclusively) leftist politicians to provide ‘studies’ and ‘conclusions’ that will provide ‘scientific’ justification for policies that the leftists/progressives have been working for decades to advance?

      • Bob

        You said;

        “In your drive to present a ‘reasoned’ perspective are you willing to overlook the blatantly obvious fact that Climate Science writ large DOES coordinate efforts to suppress ANY dissension from the ‘party line’ and DOES do everything in its power to destroy apostates, personally and professionally, and DOES work closely with (exclusively) leftist politicians to provide ‘studies’ and ‘conclusions’ that will provide ‘scientific’ justification for policies that the leftists/progressives have been working for decades to advance?”

        This will make a fascinating article. You write it and I will keep an open mind and check your evidence.

        As for my saying that many do not have an understanding of the historical context, I would point out that, as you see many times here, it is often thought of as being anecdotal and that novel proxies, such as tree rings, are given unreasonable weight in the historic narrative.

        The lack of historic context was unfortunately spelt out to me by a recent personal communication from the Met Office whose belief in the historic record stops at 1772 and who do not believe there is any great merit in using research funds to carry on the work started by such as Lamb.
        tonyb

      • @ tonyb

        “This will make a fascinating article. You write it and I will keep an open mind and check your evidence.”

        Well, most of my evidence actually comes from Dr. Curry’s posts on this site, the commentary provided by the denizens, and 50 years of observing ‘progressives’ developing the self-licking ice-cream cone comprised of progressive politicians and progressive scientists providing them with data justifying political action. ACO2 as an existential threat to the biosphere is merely the latest. And the furthest reaching, as EVERY action has a ‘Carbon Signature’ and is thus subject to regulation. And taxation.

        I will cite one recent example, documented on this site, that involved ‘Climate Scientists’ producing a paper with pre-agreed conclusions, coordinating who were to be the authors, in which professional journals it was to be published, what it was to say, how it was to be reviewed, and which MSM outlets would publish stories about the paper and what actions they would demand, citing the paper as evidence. All before any actual ‘data’ was ever ‘collected’. I don’t remember the exact subject, but the leaked emails documenting the conspiracy?? appeared here fairly recently.

        And I am not writing an article. You have a lifetime of observing the process just as I have. And are a far better writer. YOU write it.

        “As for my saying that many do not have an understanding of the historical context, I would point out that, as you see many times here, it is often thought of as being anecdotal and that novel proxies, such as tree rings, are given unreasonable weight in the historic narrative.”

        You make my point, rather than yours. These people are no more stupid than the average bear, yet they claim that a few trees scattered around the world, carefully selected (calibrated) from thousands of their ‘uncalibrated’ brother and sister trees living in the same area, provide a better record of past planetary climate than the handwritten accounts of people observing the ‘climate’ first hand. What they actually realize, and will go to the mattresses to justify, is that a ‘calibrated tree’ can be made to say anything that they desire it to say, while eyewitness accounts are immutable. And therefore to be discounted, at all costs, as anecdotal and scientifically useless.

      • Bob

        I don’t think we are disagreeing that much. I think the number of climate scientists perpetrating a ‘hoax’ or involved in a ‘conspiracy’ are small, but there are those few who are also activists and have an influence far beyond their numbers who are pushing the envelope. Politicians may also take up the cause and political/scientific/green activism is a very powerful beast.

        The latter are represented at the highest levels at the UK Environment agency and the UK Met Office. Many of those slightly lower down the chain are merely trying to develop the science, albeit they will be nudged in certain directions by dint of grants being available for certain types of work, and peer pressure. I know of a number of scientists in both organisations who are scientifically and personally sceptical but, as you observe, being a sceptic is unlikely to further your career.

        So it is not the great mass of scientists who particularly concern me, but those relatively few with great influence who seem to have discarded the scientific method.

        I have many articles in the pipeline at present so will not have time for a properly researched article that develops the theme. However, I did write a piece on the politics of the subject from a UK viewpoint, which I am horrified to note was written well over 5 years ago! I repeat it below in case it is of interest, although many of the internal links will have disappeared.

        —– —— —–
        Article: Politics of climate change. Author: Tony Brown

        Climate change has become highly politicised and the British Govt – long time leaders in funding research into the subject – were very heavily implicated in making it a political issue in order to promote their own agenda. An unusual subject for me, but very well referenced with numerous links and quotes from such bodies as the Environmental Audit Committee of the House of Commons.

        http://noconsensus.wordpress.com/2009/10/19/crossing-the-rubicon-an-advert-to-change-hearts-and-minds/#comments

        ——– ——-
        tonyb

      • @ tonyb

        “Article: Politics of climate change. Author: Tony Brown”

        Thanks Tony!

        Forwarded the link to my daughter, who teaches 8th grade science at a local high school.

        Hope she reads it.

        Bob

  62. Kenneth Fritsch

    A timely thread here for me. I became interested in how, and how well, climate models handle the estimation of equilibrium climate sensitivity (ECS), and embarked on this investigation because of recent publications estimating ECS from observational data. It appeared that these comparisons of modeled versus observed ECS, while readily admitting the uncertainties in the data underlying the empirical calculations, were not detailing the possible shortcomings of the modeled side of the comparison or how those values were calculated/estimated.

    While I am admittedly no more than a layperson in this matter and only recently familiarized with this area of climate science, I was surprised to find that the ECS estimation from the CMIP5 models for the AR5 review, using the abrupt 4xCO2 experiment and regression of the resulting global mean net TOA radiation against surface temperature change, required significant temperature and net TOA radiation adjustments using the pre-industrial control spin-up (piControl). Even then, the regression results for the various CMIP5 models depended on the regression method, i.e. the choice between ordinary least squares and total least squares, as was suggested to me by the poster Carrick. I was also surprised to learn that most models are tuned to give a reasonable TOA radiation balance.
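    The regression step described above can be sketched on synthetic data. This is only a minimal illustration of a Gregory-style fit, with assumed forcing and feedback values rather than actual CMIP5 output, showing how the ordinary-least-squares and total-least-squares slopes are obtained and why the choice matters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "abrupt4xCO2" response: net TOA imbalance N declines roughly
# linearly with warming dT. F_4x and lam are assumed illustrative values.
F_4x = 7.4                      # W/m^2, forcing for 4xCO2 (assumed)
lam = 1.2                       # W/m^2/K, feedback parameter (assumed)
dT = np.linspace(0.5, 5.5, 150)
N = F_4x - lam * dT + rng.normal(0.0, 0.5, dT.size)   # noisy annual means

# Ordinary least squares: N = a + b*dT; ECS = x-intercept / 2 (4x -> 2x CO2)
b_ols, a_ols = np.polyfit(dT, N, 1)
ecs_ols = -a_ols / b_ols / 2.0

# Total least squares via SVD on the centered data (errors in both variables)
X = np.column_stack([dT - dT.mean(), N - N.mean()])
v = np.linalg.svd(X, full_matrices=False)[2][0]        # principal direction
b_tls = v[1] / v[0]
a_tls = N.mean() - b_tls * dT.mean()
ecs_tls = -a_tls / b_tls / 2.0

print(round(ecs_ols, 2), round(ecs_tls, 2))   # both near F_4x/(2*lam) ~ 3.1 K
```

    On real model output, where internal variability puts comparable scatter on both axes, the two estimators diverge much more than they do on this clean synthetic example.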

    My current study in this area has involved determining how well the various CMIP5 models under RCP4.5 conditions arrive at a reasonable TOA energy budget, by using the potential global sea water temperature changes (ocean heat content) from each model and model run. Interestingly, that study is on temporary hold until I can resolve with KNMI (who have always been very cooperative with me in these matters) the issue of obtaining global mean TOA radiation data from gridded nc file data. Downloading the gridded data from the original sources, while an option, can be very time-consuming and is better avoided by using the KNMI data that has been conveniently converted to global means.

    I have previously looked at the CMIP5 climate models from the perspective of comparing the noise levels in these models, and it is there that one sees large differences between models, and noise levels that make shorter-term trend comparisons with the observed temperatures a most uncertain proposition. Even the time dependence, in terms of fitting a particular ARMA model to these models’ outputs, varies considerably from model to model. It was these frustrations that led me to attempt to extract something more meaningful and practical from these models – which in turn led me to looking at ECS and transient climate response (TCR). Even the authors of the method used in calculating the currently published ECS values of CMIP5 models have recommended that more work be done using TCR. Perhaps TCR is a more practical emergent climate parameter to use – since it deals with changes over decades, not millennia, as for ECS – but in my view it lets the TOA budget off the hook.

    • Willis Eschenbach

      Kenneth, first, a most excellent comment. I do enjoy a man who runs his own numbers.

      Regarding the models and the equilibrium and transient climate sensitivities (ECS & TCR): over a series of posts, and with invaluable assistance from commenters, I established that the global temperature output of the models is equivalent to a lagged linear function of the forcings. See my posts:

      Testing … testing … is this model powered up?
      Model Charged with Excessive Use of Forcing
      Zero Point Three times the Forcing

      I went on to use that to look at climate sensitivity:

      Model Climate Sensitivity Calculated Directly From Model Results
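      The flavor of that lagged-linear result can be illustrated with a minimal one-box emulator; the sensitivity and time constant below are illustrative placeholders, not values fitted to any GCM or taken from Willis’s posts:

```python
import numpy as np

def emulate(F, lam=0.5, tau=4.0):
    """Lagged linear response: T relaxes toward lam*F with e-folding time tau.

    lam (K per W/m^2) and tau (years) are placeholder values for illustration.
    """
    T = np.zeros(len(F))
    for i in range(1, len(F)):
        T[i] = T[i - 1] + (lam * F[i] - T[i - 1]) / tau
    return T

years = np.arange(1900, 2001)
F = 0.03 * (years - 1900)        # toy ramp forcing, W/m^2
T = emulate(F)

# Under a ramp the response settles to lam*F minus a constant lag offset,
# so the final T sits below the equilibrium value lam*F[-1]:
print(round(T[-1], 2), round(0.5 * F[-1], 2))
```

      Fitting lam and tau to a GCM’s forcing and temperature series is essentially what the posts above do; the point is how little structure is needed to reproduce the global mean output.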

      Regarding your comment on “obtaining global mean TOA radiation data from gridded nc file data”, what you need may be available in the CERES data. It’s available here. My first of many uses of the CERES data was Observations on TOA Forcing vs Temperature .

      If all you need is some form of the gridded monthly variation in TOA solar, let me know and I’ll extract it from the CERES nc data as a 1° x 1° grid.

      Finally, if you are using R, the package “ncdf” contains the functions to read/write the .nc files.

      Best wishes in your explorations,

      w.

      • Kenneth Fritsch

        Thanks for the comments and links, Willis. I use ncdf4 in R and that is not my holdup. From KNMI I can obtain the nc gridded data already converted to global means, and the downloads are in kilobytes, while the gridded nc from DKRZ can be several tens or hundreds of megabytes. I am currently downloading from DKRZ since Geert Jan van Oldenborgh of KNMI has a busy schedule right now and we cannot resolve some differences I have seen between KNMI’s and my conversion methods. Small differences matter when making the TOA net radiation calculations, because the rsdt, rsut and rlut numbers are large and the required subtraction yields a relatively small net number.
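        The cancellation problem described above (a small net from the difference of large terms) is easy to see with a toy area-weighted mean; the grid and field values below are invented for illustration, not taken from any model:

```python
import numpy as np

nlat, nlon = 90, 180
lat = np.linspace(-89, 89, nlat)
w = np.cos(np.radians(lat))          # area weight of each latitude band

# Uniform fields with roughly realistic global magnitudes (W/m^2)
rsdt = np.full((nlat, nlon), 340.2)  # incoming shortwave at TOA
rsut = np.full((nlat, nlon), 99.8)   # reflected shortwave
rlut = np.full((nlat, nlon), 239.6)  # outgoing longwave

def global_mean(field):
    # cosine-latitude weighted mean, like a converted global series
    return np.average(field.mean(axis=1), weights=w)

net = global_mean(rsdt) - global_mean(rsut) - global_mean(rlut)
print(round(net, 2))                            # ~0.8 W/m^2 imbalance

# A 0.3% bias in rlut alone is roughly the size of the entire net:
print(round(0.003 * global_mean(rlut), 2))      # ~0.72 W/m^2
```

        So a conversion method that shifts any one term by a few tenths of a percent changes the net imbalance by order 100%, which is why small differences between conversion methods matter here.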

        http://cera-www.dkrz.de/WDCC/ui/EntryList.jsp?acronym=ETHr4

        I have found my initial study in this area following the path of my studies of temperature proxies used in published temperature reconstructions: even after reading both the published papers and the critical analyses in blog posts and papers, I do not fully grasp the state of the science until I do my own analysis of the details going into those papers.

    • KF, an excellent post. You might want to look at essay Sensitive Uncertainty. It delves into this topic a bit, with emphasis on the empirical rather than the CMIP5 derivations, the differences between TCR and ECS, and even some ECS/TCR implications. Plenty of footnotes to source materials.

      • Kenneth Fritsch

        Thanks for the comment and the essay lead – I’ll have a look..

      • Kenneth Fritsch

        If that is the paper by Nic Lewis and Judith Curry, I have read it a couple of times and recommended it to others. It was primarily Nic Lewis’ work and writings on empirical derivations of ECS and TCR that got me interested in learning more. I also enjoyed reading his Bayesian analysis on the matter and his analyses of what others are doing. As you say, the essay does not go into great detail on the CMIP5 or other model derivations, and that is why I wanted a deeper look at their competition, to determine whether it was even worthy of the (intellectual) battle.

    • Kenneth, first, a belated thank you for your kind comments about me. I’m sorry that I have only just spotted what you wrote.

      Secondly, I echo your comments regarding the difficulties in processing the raw CMIP5 data files. I don’t find size in itself a problem – I have downloaded well over a terabyte of monthly data. But the failure of the modelling centres to agree on and stick to a standard file format is very unhelpful, and makes automating the processing of the data difficult and time-consuming. A single large netCDF file for each run, with all models having the same month end, would have been much easier to process.

      I did eventually manage to prepare global and zonally-averaged annual TOA imbalance summary files by model for the abrupt 4x CO2 experiment, but I concluded that I really needed to deduct the corresponding values from the preindustrial control runs, and I haven’t had a chance to resolve the problems involved in doing so, or to move on to other CMIP5 ‘experiments’. If you can get KNMI to provide properly processed annual CMIP5 data – ideally zonal-averages as well as global means – that would be great.

      • Nic

        SoD has been writing recently on the fuzziness of “estimated internal variability” in the IPCC reports. Doesn’t this relate to preindustrial control runs? What is your confidence in those runs?

        Thank you,

        Richard

    • Yeah Nic, rls has a good question. Personally, it looks to me like there are some serious issues :)

  63. the previous post, week in review, contained a link to an Ed Hawkins post.
    http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

    The post includes a graph. I’ll try insert the image

    the red hatched box is described as “indicative IPCC AR4 assessed likely range for annual means”. This box is clearly different from (although overlapping with) the grey shaded model ensemble. It seems to me that if the red box represents the IPCC future prediction out to 2035, then in a way they’ve already moved away from the model ensemble being the best estimate, and something else has replaced it.

    • Indeed, as said at the time they are minor changes as shown here

    • try again on image

      • fail again. Anybody, the right code to post an image?

      • Anybody, the right code to post an image?

        Post the url alone ( no code ).

        But just make sure the url you post ends with a well known image extension ( .jpg, .png .gif )

        Sometimes the url will include other parameters xxx.jpg?WIDTH=600
        Strip off everything starting with the question mark (?)
        The blog will scale the image appropriately.
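        For anyone who wants to automate the stripping step, a small sketch (the example URL is made up):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    # Drop everything from the "?" onward so only the bare image URL remains
    s = urlsplit(url)
    return urlunsplit((s.scheme, s.netloc, s.path, "", ""))

print(strip_query("http://example.com/chart.jpg?WIDTH=600"))
# -> http://example.com/chart.jpg
```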

    • HR, I tried to track down the origins of Hawkins’ red cross-hatched area. Went through all of the SOD and final AR5 WG1. Bit of a slog. Nothing. I think it is his opinion only, trying to soften the pause implications.
      There are some additional AR5 shenanigans concerning the pause that the slog uncovered. These are illustrated in essay Hiding the Hiatus.

  64. Antonio (AKA "Un físico")

    Alexander, I have no time to read your full thesis: only your abstract. I hope you find time to read at least my abstract at: https://docs.google.com/file/d/0B4r_7eooq1u2TWRnRVhwSnNLc0k/edit . Please focus on idea (A.c). Very few scientists are aware of that idea, but time-scales are key for getting statistically valid climate change predictions.
    I agree with your thesis: CMIP5 models have no predictive capacity. But I bet you could not defend that thesis at ETHZ (in Switzerland), as it is the domain of Reto Knutti.

  65. @ Dr. Curry

    “I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.”

    Using ‘seriously doubt’ in the same context, I ‘seriously doubt’ that if I climb up on the roof of my house and jump off, my demise will be precipitated by asphyxiation as I pass through the stratosphere en route to the moon.

    • I have no personal knowledge of this guy, but reading his blog makes me think he’s open-minded. I’d suggest Isaac Held would support a rigorous review of climate models.

  66. Thank you Dr. Curry for another important post. As I’ve interviewed for atmospheric science jobs in insurance/reinsurance/catastrophe modeling, government labs and academia, one thing has struck me: it is nearly impossible to get a job in this field if you work from data and are not a climate modeler. Reinsurance companies (by virtue of catastrophe models) set their rates based largely on unvalidated output from climate models. I’ve been told I’m not qualified to work as a scientist in their industry (catastrophe modeling) because I’m a synoptician and stochastic modeler and not a physical modeler. I eventually left the field in frustration.

    Climate models are not only being used to make policy. They are being used in insurance. They are being used by engineers and actuaries who understand absolutely nothing about how they work to evaluate the risk associated with extreme events. They are also being used in water resources by engineers who have no understanding of how they work. As a statistical modeler, I find it troubling that models so complex they can’t be fully understood or validated are used to make decisions about risk. No rational person in their right mind would make a decision based on these models. Unfortunately, insurance companies are being forced to by regulators. We need to have a conversation about the appropriate uses of climate models before they lead us into making very bad decisions.

  67. I think Nickels has made a number of interesting comments on this thread. GCMs do not use the best modern numerical methods. I don’t quite understand why.

    I’m not sure what Steve Mosher is getting at with regard to tuning. It would be a sad commentary on modelers’ intelligence to suppose they don’t know what effect parameters have on model outputs. They no doubt tune the parameters to give more reasonable results, or ones that agree better with the data. Turbulence models are tuned this way all the time, and specialists are smart enough to know what they are doing.

    • According to Mosher the models are not tuned. They are tooned. When they fail, as they all do, they are sent to the Climategate Metro Universal Studios over in Toontown, for reanimation.

    • Thx. If I had to guess why the solvers are old, it’s $. Sandia modernized its solvers, I believe mostly on ASCI money. I worked with guys with an interest in writing a modern solver at NCAR, but no one is still there due to $. Plus, these solvers are very hard (all the different wave speeds and physics), and some of the experts in atmospheric science would have to buy in and help, and there didn’t seem to be much motivation for that.

  68. “I was very impressed by Bakker’s intellectual integrity and courage in tackling this topic in the 11th hour of completing his Ph.D. thesis. I am further impressed by his thesis advisors and committee members for allowing/supporting this. ”

    I am impressed too. Perhaps he would like a job on one of our production lines, since he won’t be hired by a university; state funding won’t cover someone not toeing the line.

  69. Just a kid trying to get a job by exploiting the pause.

  70. Thank you Judith!
    I thought the paper, “The Robustness of the Climate Modelling Paradigm, Ph.D. thesis by Alexander Bakker from The Netherlands”, was well ordered, clearly and intelligently phrased, and quite coherent.

  71. It has been perfectly obvious ab initio that GCMs in general, and the IPCC-referenced GCMs in particular, have no utility for climate forecasting. Their outputs fall into the “not even wrong” category of results. There is also no reason for scientists to appear bewildered and seem unable to conceive of other forecasting methods.
    Simple, transparent and reasonable forecasts can be made using the natural periodicities in the temperature data and the neutron count and 10Be data as the best proxy for the solar “activity” driver.
    For a discussion of the uselessness of the climate models see Section 1 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    For a discussion of the important natural periodicities see Section 2, and for forecasts of the timing and amplitude of the coming cooling, which began at the peak of the natural cycle in 2003, see Section 3.
    also see
    http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
    The key periodicity on time scales of human interest is the quasi-millennial periodicity most clearly seen in Fig 5.
    I have drawn attention to these model deficiencies and the need for new methods and forecasts on various threads on this blog for several years now, but Judith has never seen fit to comment.
    Perhaps the thesis discussed here will finally cause her to recognize the need for a different approach.

    • Dr. Page, “It has been perfectly obvious ab initio that GCMs in general and the IPCC referenced GCM’s in particular have no utility for climate forecasting.”

      The sad part is that the climate models could provide reasonable information. Judith had a post before where guys at GFDL “initialized” their model by revising the ENSO-region SSTs to actual observations. It did a pretty good job then, because models include a “convective triggering” parameterization for surface temperatures over about 28C. For some reason the geniuses still believe that climate is a “boundary value” problem which the models will “discover”, but with the models missing actual surface temperatures by several degrees on a water world, reality hasn’t quite kicked in for the “experimenters”.

      Since the Clausius-Clapeyron relation also depends on surface temperature, not surface anomaly, the models don’t get diddly right as far as water vapor, precipitation and clouds. I find it hard to believe they are that stupid, but perhaps they are?

  72. captdallas, your reference to convective triggering is most interesting.

    See Fig 2 (from Trenberth) in Section 1.3.2 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    You can see why including a convective-triggering parameter would improve the connection to reality. However, models of this type remain inherently incomputable; see Section 1.2.

    • Dr. Page, I would think “not computable” is a stretch; you might not get the precision you would like, but you could get a reasonable ballpark.

      However, when you miss the tropical absolute temperature by that much, you are choking your chicken as far as productivity goes.

  73. Captdallas, with such a large number of variables you can get compensating errors producing the same outcome, so that there is no means of knowing what a reasonable outcome is from the model outputs themselves. It is well worth the time to watch the Essex presentation linked in Section 2, which says:
    “The modelling approach is also inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine-grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4

    Models are often tuned by running them backwards against several decades of observation; this is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially in the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.”

    • Generally, the past “tuning” is done to low absolute values. Instrumental SST and what the sky “sees” are not always the same. So if you judge the models’ limits by their initial values or “tuning” criteria, you can come up with all sorts of reasons why “models” won’t work.

      If the models are properly initialized, that would provide the information needed to evaluate how well they may be able to perform. Once that GFDL example gets to 300K, it starts leveling off as it should.

      Personally, I wouldn’t write them off completely because I am a cheap bastard, they can produce something useful, but I think cleaning house in a few “institutions” might stimulate a little more attention to detail.

  74. If you are a cheap bastard you should use my forecasts based on the natural cycle approach – I don’t get paid by anybody, so the forecasts are free to any interested parties, and clear and simple enough that even politicians might understand them if they read them. You can see why academia in general would not see any value or profit in terms of publications, positions, grants, or government jobs and honours if they used simple common sense and basic observations in climate forecasting.
    How can you possibly estimate the effect of anthropogenic CO2 without knowing where we are relative to the natural cycles, especially the millennial cycle? All the efforts of the modelers have been a wasted journey down a blind alley. Climate science has gone backwards since Lamb was replaced by Wigley at the Met. See

    Forecasts based on Lamb’s figure from the FAR would do better than all the modeling since.
    The link is also Fig 8 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

    • Hell, doc, I got you a few better :)

      There is Lamb “calibrated” to tropical SST,

      and there is “my” tropical SST reconstruction of peak temperatures, i.e. potential convective triggering.

      And there is why the models suck. What more could anyone want than an estimate of “climate sensitivity” to CO2? That is 0.8 +/- 0.2 C :)

  75. The models are just supposed to be impressive, like the white lab coat worn by the ‘doctor’ in TV commercials.

    That they’re nonsense doesn’t matter. That they’re known to be nonsense and provably nonsense doesn’t matter. It’s all peer reviewed.

  76. Steven Mosher

    start here

    https://www.newton.ac.uk/event/clpw04

    Watch everything

    https://www.newton.ac.uk/event/clpw04/seminars
    This is one example …Tamsin’s work. excellent

    https://www.newton.ac.uk/seminar/20101207110011401

      It’s 40 minutes.

    I predict that David Young is the only person who will watch it

    • Steven Mosher, “This talk will focus on the practical aspects of building an emulator, and show how our emulator can be used to learn about HadCM3, and to learn about the value of palaeoclimate measurements for parameter calibration.”

      Decisions decisions

      Oh wait!

      Thank goodness for NOAA wtf NOAA, the models are saved!!

      • Steven Mosher

        scribbling on a blog is a bad hobby.
        knit more. comment less

      • StevenMosher, “scribbling on a blog is a bad hobby.
        knit more. comment less”

        That’s good advice. On a water world, missing the tropical SST by a few degrees would pretty much invalidate any climate model. This particular thread is about “questioning the robustness of the climate modeling paradigm”.

        Note the peak temp of about 301.5K (28.4C).

        Since the models parameterize tropical convection using a “convective triggering” temperature of around 28C, the models aren’t getting it. That is pretty plain and simple: no getty SST, no getty climate.

        Now if you toon the models to paleo compiled by some of the true giants in the field, there are no wiggles. If you use Uk’37 “thermo-bugs” you are going to have a low temperature bias of about 1C in ideal areas and more than 2C in less-than-ideal areas. So how the f are you going to toon models to paleo in the current state of the paleo art?

        Wave your arms all you like, but models gots issues.

      • http://climate.calcommons.org/article/why-so-many-climate-models

        “The above figure, which comes from the report High Resolution Climate-Hydrology Scenarios for San Francisco’s Bay Area, amply illustrates the conundrum facing all those who would make use of climate projection data. In this figure, 18 different climate models are represented, with temperature change values ranging from less than 1 degree to 6 degrees increase, and precipitation change values ranging from a 20% decrease to a 40% increase. With such a wide range, what value does one use for planning? Why is there such a plethora of models?”

        “Despite the existence of a wide variety of climate models, there are conceptual problems in treating these as independent entities, amenable to statistical treatments such as averaging or taking standard deviations. To begin with, climate models have a shared history, or in other words a genealogy (Masson and Knutti 2011, Winsberg 2012). There is common code between many of these models. Technical knowledge moves from modeling group to modeling group as scientists relocate. At present we lack a detailed characterization of the shared history of climate models, and it is not at all clear what we do with such a treatment in a statistical analysis sense. It is certainly inappropriate to treat different climate models as randomly sampled independent draws from a hypothetical model space, which is what would be required by rigorous statistical analysis. (Winsberg 2012).”

      • Steven Mosher

        captain.

        you dont validate a model for a use by comparing it to reality.

        you compare to the specification for that use case.

        PERIOD.

      • Steven Mosher, “you dont validate a model for a use by comparing it to reality.”

        Now that is right; “I” compare models with reality as a form of verification. “You” are more tolerant of crap than “I” am :)

        Your little subtlety is lost on most of the taxpayers as is calling a “forecast” a “projection” when it misses.

      • Steven, aren’t the climate models we are discussing trying to model the actual climate? Aren’t they represented as models of the real climate to policy makers?

      • ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full

        So pick the solutions – thousands of them – that resemble the recent past.

        But if they don’t get the ‘component discrete algorithms’ right – and they don’t – what is the point?

      • Matthew R Marler

        Steven Mosher: you dont validate a model for a use by comparing it to reality.

        Do you trust the results computed by a model that has never been shown to match what it is claimed to model, within specified tolerances? Why?

      • Where did you get the chart, Rob? It looks like the judgement criterion there is goodness-of-fit between observations and models. It doesn’t say anything about plausibility. I wonder if they say on their applications for grants that they want to spend large sums of other people’s money developing plausible climate models. It’s like Arkansas or W. Virginia requesting a few hundred million dollars of federal money to build some plausible bridges.

      • “Steven, aren’t the climate models we are discussing trying to model the actual climate? Aren’t they represented as models of the real climate to policy makers?”

        yes.
        yes.

        Neither of which matters.

        Let me explain it for you AGAIN

        Here is the DOD definition:

        “validation. The process of determining the degree to which a model or simulation and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model.”

        Validation is the process of determining the DEGREE to which a model is accurate. Note the precision of this definition. It does not say determining whether or not a model MATCHES reality. NONE DO.
        It says measuring the DEGREE to which.

        Next. Note that the degree of accuracy is CONDITIONED by the USE.

        Let’s do a simple example. I am a user. I want a model that predicts the range of my aircraft to within, say, 50 nm under standard atmospheric conditions. As a modeller, your job is to build a model that is that accurate.

        Suppose you build a model that gets the range right to within 25 nm under standard day conditions. And suppose it gets a tropical day range that is 100 nm off. Is the model valid?

        Yes, the model is valid. Even though it’s 25 nm off for a standard day
        and 100 nm off of reality for a tropical day. WTF? Why?

        It’s valid because it meets the accuracy requirement set down by the user:
        within 50 nm for a standard day. The spec says nothing about tropical days.
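        The spec-based notion of validity in this example fits in a few lines of code (the condition names and tolerances below are just the figures from the example):

        ```python
        # Use-case validation in miniature: a model is "valid" iff its error is
        # within the user's tolerance for every condition the spec covers.
        # Conditions the spec does not mention simply do not count.

        def is_valid(errors_nm: dict, spec_nm: dict) -> bool:
            """True if every condition named in the spec is within tolerance."""
            return all(abs(errors_nm[cond]) <= tol for cond, tol in spec_nm.items())

        spec = {"standard_day": 50.0}   # the spec says nothing about tropical days
        errors = {"standard_day": 25.0, "tropical_day": 100.0}

        print(is_valid(errors, spec))   # True: the tropical error is out of scope
        ```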

        Why do we define validation this way? Simple: because models are always wrong, and because, if you make “matches reality” a spec,
        people will always find a way to complain after the fact.

        So are climate models valid?

        1. Ask who the user is
        2. Ask how much accuracy they demand for their particular use.

        Here is what we know.

        you are not the user, so your conception of what is valid doesn’t count.
        PERIOD.

        Another way to look at this is you cannot ask the question of validity because there is no spec.

        Asking the question shows you don’t understand what validation means.

        it means measuring the degree of agreement with reality from the perspective of a given use.

        You guys think validity means measuring agreement from god’s perspective.

        It doesn’t.

      • I understand all that Steven. But is that how it is sold to the policy makers and the general public? Or are climate scientists happy to have the public and policy makers believe that the computer climate models are like the computer models that got people to the moon and back?

      • Matthew R Marler

        Steven Mosher: you are not the user, so your conception of what is valid doesnt count.

        As voters, letter-writers, investors, planners and such, we are users.

      • Oh impenetrable walls of Ivory Tower!

        Steven Mosher, I’m just a boneheaded Ingenieur, but your definition of validation amounts to “We make stuff up”.

        What you describe here is nothing but a fri**ing computer game.

        Even so, whatever your notion of valid:

        3. I ask: show me evidence that any GCM undergoes INDEPENDENT evaluation and accreditation. Name the agency or company. Please.

      • Steven Mosher

        matthew

        “As voters, letter-writers, investors, planners and such, we are users.”

        NO, you are not. You are not a user.
        You’ve never looked at the data.
        You’ve never talked to decision support people to ask for their guidance.

        In some cases you voted for the users. In other cases the users were appointed.
        In some cases the users are businesses.

        In no case are you a user.

      • Steven Mosher

        KenW.

        They can’t undergo accreditation until users define a spec.

        You are asking the wrong question.

        think harder.

      • Steven Mosher

        Don

        “I understand all that Steven. But is that how it is sold to the policy makers and the general public? Or are climate scientists happy to have the public and policy makers believe that the computer climate models are like the computer models that got people to the moon and back?”

        go watch the video I posted.

        It will start to give you a tiny taste of how it’s sold.

        in the UK at least.

        As far as emission reductions go, you don’t need a GCM to decide.
        A simple energy balance calculation will give you all the basis you need to tax carbon… or build nukes… your choice.
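        For what it’s worth, the kind of simple energy balance calculation alluded to here can be sketched as follows. The logarithmic forcing approximation is standard, but the sensitivity parameter `lam` is an illustrative placeholder, not a measured value:

        ```python
        import math

        # A zero-dimensional energy-balance sketch. Forcing from a CO2 change
        # uses the common logarithmic approximation dF = 5.35 * ln(C / C0) in
        # W/m^2; the sensitivity parameter lam (K per W/m^2) is an illustrative
        # placeholder chosen for the example, not a measured quantity.

        def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
            return 5.35 * math.log(c_ppm / c0_ppm)

        def equilibrium_warming(c_ppm: float, lam: float = 0.8) -> float:
            return lam * co2_forcing(c_ppm)

        print(round(co2_forcing(560.0), 2))          # ~3.71 W/m^2 per doubling
        print(round(equilibrium_warming(560.0), 2))  # ~2.97 K with lam = 0.8
        ```

        The point being made stands either way: a one-equation balance like this, not a GCM, is what carries the basic forcing arithmetic.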

      • okay Steven, I’m reading and thinking very hard…

        “no spec”

        I’m baffled.

        Do people get paid for something here?

      • Software with no specs, no evaluations, no picky customers and 32 free variables.

        The Ivory Tower must be Paradise.

      • nottawa rafter

        Mosher

        After reading all your comments, even the attempts to clarify, I understand why the climate establishment can look at themselves in the mirror despite the growing divergence between the models and reality. I am not convinced that even if we have 50 years of near-flat temperatures, the consensus will ever change their minds, due to the elegance of the AGW theory.

        ” But, but it just has to be right, The physics say so.” ….. “Well, the model didn’t get validated by reality, but what the hey, it met our requirements.”

        Both statements born of the same mindset….. rationalization.

      • Steven Mosher: “captain.

        you dont validate a model for a use by comparing it to reality.

        you compare to the specification for that use case.

        PERIOD.”

        LOL!

        You know Steven, when the AGW scam finally goes titsup – as is inevitable, and coming closer every day – have you considered retraining for a career in stand-up comedy?

      • nottawa rafter : “Mosher

        I am not convinced that even if we have 50 years of near flat temperatures, the consensus will ever change their minds due to the elegance of the AGW theory.”

        nottawa, if the global temperature dropped 10°C and glaciers came back this weekend, Mosh and Co. would still be claiming that we’re suffering from AGW even as the ice rolled over their houses, because their beloved computer games climate models say so.

        It would be comical if it wasn’t so tragic.

      • Time to re-introduce the term ‘lukewarming cooler’, one with which I self identify. I’m perfectly happy to accept the warming effect of AnthroCO2 and bless it along with the greening. But I also worry that we’re in for global cooling, short, medium and long term. The cooling would fit with the centennial and millennial scale natural changes, possibly solar driven, and in particular now with the concatenation of cooling phases of the oceanic oscillations.

        The phrase really contains two meanings, the first word for the radiative effect, let it be small, of AnthroCO2, and the second word for an expectation of future natural climate changes.

        I must have written ‘lukewarming cooler’ dozens of times at the Blackboard, but teach kept me with the conical cap.

        Remind me sometime to tell you the story about Arthur Smith and his quest for the origin of ‘We are cooling, folks; for how long even kim doesn’t know’.
        ==================

      • Matthew R Marler

        Steven Mosher: NO you are not. you are not a user.
        you’ve never looked at the data.

        I certainly am a user. On that, I think you are trying to narrow the definition beyond any useful meaning. I use aircraft as well, and I wouldn’t intentionally fly on an aircraft that had never flown, no matter how many “tests” the owners claimed it had passed. About gasoline I am a little less persnickety, and I accept the claim on the pump that the octane rating was obtained by a valid test – but I also note that I haven’t burned out my engines or clogged the anti-emissions equipment.

        Which data have I not looked at? The data showing that the GCM outputs do not satisfy a reasonable (to me) standard of accuracy?

      • Matthew R Marler

        Steven Mosher: in some cases you voted for the users. in other cases the users were appointed.
        in some cases the users are business,

        Those are all true statements. When I correspond with my Congressman, I alert him to the fact that the GCMs have not ever been shown publicly to satisfy a standard of accuracy that he might want for guidance of spending.

      • Matthew R Marler | February 5, 2015 at 2:01 am |

        Steven Mosher: you are not the user, so your conception of what is valid doesnt count.

        Marler: As voters, letter-writers, investors, planners and such, we are users.

        Bingo! Give Mr. Marler a cigar!

    • Steven Mosher, I think you might want to watch this modeling group.

      http://forecast.bcccsm.ncc-cma.net/web/channel-34.htm

      They are one of the few that get tropical SST close to correct.

      Well lookie there! About 0.6C temperature rise by 2100 with RCP4.5. Instead of blowing smoke and making excuses, they just adjusted the initial conditions to match reality. What a novel frigging concept!

    • OK Steven, I watched the very interesting video. I recommend it. But it does not inform me on how the climate models are being sold to the public and to policy makers.

      A couple of the interesting comments (paraphrased) made by the learned prof:

      9:30 HadCM3 is a very poor representation of the actual climate

      36:00 would like the calibration exercise going on to tune HadCM3 parameters to be vigorous…they should hire a statistician to construct an emulator

      And somewhere in there he talked about the twitchy parameters. People should watch this.

      Anyway, he is obviously speaking to a group of other scientists. This is not how policy makers get briefed. I’ll tell you how it works, from a Congressman up to the POTUS. Staff knows what kind of story their boss wants to hear and they seek out the experts, who will deliver the right story.

      Example: Obama wants to find justification for his health care schemes so his staff comes up with Gruber, who has built an econometric model that can be tweaked to show that a bizarre combination of mandates, taxes and subsidies will result in everybody getting covered by improved health insurance that will save the average family $2500 year and not add to the deficit.

      I am guessing that the climate scientists who have Obama’s ear are Gruber types. They are not going to tell him that climate models are very poor representations of the actual climate.

      We may not be “users” of climate model output, but we pay for it and we and our families and poor children and old folks are affected by the policies that get implemented by uninformed and dis-informed policy makers. And we don’t really care what the DOD says about the validation of models. We are entitled to our own perspective on what should pass for valid information that is being used to make public policy. If not for the public policy ramifications of climate science we would not give a flying F about climate models.

      I have learned a lot from you Steven, but sometimes you are a little stinker.

      • Twitching in albedo fudge. Slow motion Saturday Night Live.
        ===========

      • Don, thanks for persisting, watching the video and laying it out for Mosher.

        I watched a portion of the video (and several others) and concluded that for the most part these are scientists in the weeds talking to other scientists in the weeds and the relation to policy making is virtually non-existent.

        I guess that you have a history with Mosher that leads you to engaging in his pedantic posts and taunts. More power to you.

      • Matthew R Marler

        Mark Silbert: Don, thanks for persisting, watching the video and laying it out for Mosher.

        I second that.

        I guess that you have a history with Mosher that leads you to engaging in his pedantic posts and taunts. More power to you.

        Steven Mosher is sometimes correct, so I find it worthwhile to engage with him. See a question I posed to him about his comment to Rud Istvan, and what ensued.

      • And thanks to you two. I value your opinions. You are among a group of about a dozen whose comments I won’t pass up. I save a lot of time by skipping FOMD’s contrived BS completely and just skimming over the foolishness of joshie et al.

        I will always pay attention to what Mosher has to say. Even when he is wrong, you should be able to learn something. (I wouldn’t spend 40 minutes watching a video recommended by jimmy dee). Mosher can be pedantic and impatient, but I believe that he strives to be honest and to get the science right. He has redeeming qualities. You just have to keep pestering him, until you hit the right question. He hates it when people ask him wrong questions.

      • Don, & Steven Mosher

        I read where it’s indicated models are not “validated” by reality. So can anyone help me to understand when they are “invalidated” by it? Is this a better question?

      • Steven Mosher

        You didn’t listen closely.
        Listen for the words
        Inform policy
        Uncertainty

        In short, the output is not being sold as something certain. It’s an uncertain piece of data.
        Note also that other concerns drive policy.

        Finally, climate models have zero to do with Obama’s pen and phone. He don’t need no climate model to
        outsmart establishment RINO nitwits.

      • Matthew R Marler: “Steven Mosher is sometimes correct”

        Heh!

        Damning with faint praise?

        So is a stopped clock.

      • OK, Steven. Have you ever heard the man with the pen and the phone talk about the climate thing? Does he sound like he has been told about uncertainty? Do you really believe that he hasn’t been shown predictions generated by climate models that he believes actually represent the operation of the actual climate?

        Look, by the time information filters up to the King, it has been distilled by the Greenpeace political operatives who are embedded in every Democratic political staff in D.C., and the EPA too. Any hint of uncertainty gets scrubbed before the King gets his 5-minute briefing. That’s the way the King likes it.

        Obama talks like he believes that CAGW is as certain as the round earth, or tobacco causing cancer. He done seen the proof – CO2 traps heat, the King says – and it’s silly to think they haven’t shown him the model predictions that support the scary scenarios.

        The Gruberites address the King: Mr. Your Majesty King, we know how the climate operates and here is our very expensive Official Climate Simulator Supercomputer Climate Model. You just tweak this control knob to toon the thang and….

        It remains to be seen if the Kang has outsmarted the Republican establishment. They and their energy state Demo colleagues have stopped him and his CAGW crowd from passing meaningful CO2 mitigation schemes. That he has taken action outside Constitutional boundaries is not to his credit.

        You are slipping, Steven. Try to be more open minded, patient and tolerant, like myself.

      • Don, you hit the nail on the head. This DOT piece, ostensibly about, well, transportation, mentions climate 94 times! They’re talking about a carbon tax. Pick up your figurative pens and write your congress persons!!

        From the article:

        Policies may change the relative costs of different modes. For instance, increasing the cost of carbon could make shipping goods by truck more expensive relative to shipping by barge. Such a scenario would also increase the total cost of travel for all modes that rely on fossil fuels. However, this kind of policy is likely to stimulate research into new technologies that can reduce emissions and energy use, and it may prompt shippers, carriers, vehicle manufacturers and others to seek innovative ways to reduce their costs. Any revenues resulting from new policies could be rebated back to individuals in order to reduce overall costs, or could be used to finance resilient, energy-efficient transportation facilities. Without an alignment of costs and incentives, the marketplace—both individuals and the private sector as a whole—is less likely to choose to pursue courses of action that support a responsible future. In the absence of coordinated policy, if we fail to create a more resilient infrastructure, the effects of climate change—ranging from higher temperatures to sea-level rise—will mean higher costs, greater disruption, and more damage to vulnerable communities.

        http://www.dot.gov/sites/dot.gov/files/docs/Draft_Beyond_Traffic_Framework.pdf

      • Jim2,

        Taking bets! Should this come about, how much ya wanna bet this don’t ever happen: “Any revenues resulting from new policies could be rebated back to individuals”?

  77. Another important side benefit of moving away from fossil fuels is that we leave future generations without the problem of growth based on energy-intensive industries leading to pollution (e.g. China). It would make it a lot easier to bring those industries back to the US and avoid those side effects.

  78. Berényi Péter

    Put a transparent container onto a well-insulated rotating table in a vacuum chamber whose walls are kept cold by liquid nitrogen from the outside. Fill it with a semitransparent fluid which has a phase transition close to the average operating temperature. Heat it with shortwave radiation directed at the container.

    As soon as a computational model is constructed, which predicts dependence of the system’s output parameters reliably as function of optical depth of fluid inside (at various wavelengths) and other input parameters of the system, proceed to climate modeling, but not sooner.

    Please note one can have as many experimental runs of this apparatus as one wishes, with close control over input parameters like angular velocity, viscosity and optical depth of fluid, intensity of incoming shortwave radiation, etc.

    In this respect it differs greatly from the terrestrial climate system, where you only have a single uncontrolled run of a unique physical entity, which makes experimental verification of computational models impossible.

    That’s how physics is done.

    • A fan of *MORE* discourse

      Berényi Péter envisions “Fill it [a chamber] with a semitransparent fluid, which has phase transition close to the average operating temperature …”

      LoL … Berényi Péter, please appreciate that aerospace engineers have for more than fifty years been accurately simulating the storage, transport, and combustion of thermodynamic systems that meet your criteria. In particular, the recent SpaceX video CRS-4 Launch — Fuel Slosh, internal fuel tank camera (5G to 0G) is commended to the attention of Climate Etc readers.

      FOMDs Fun Fact  Simulations predict the performance of this complex dynamical multiphase system to within a relative precision ±0.5%.

      Of course, the simulations aren’t perfect. Here’s what happens when the dynamical simulations are perfect … but dang it! … the engine-gimbal actuators run out of hydraulic fluid …

      Ouch! Keep those hydraulic-fluid reservoirs topped-up, folks!

      Conclusion  Good on `yah, NASA mathematicians, physicists, and engineers … for developing the transformational simulation-science that enables *BOTH* space exploration *AND* climate-prediction.


      • Berényi Péter

        aerospace engineers have for more than fifty years been accurately simulating the storage, transport, and combustion of thermodynamic systems that meet your criteria

        Not really. There is a general theory of non-equilibrium stationary states, with a caveat.

        Roderick Dewar, “Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states”, J. Phys. A: Math. Gen. 36, 631 (2003). doi:10.1088/0305-4470/36/3/303

        According to it, in reproducible non-equilibrium stationary systems the Maximum Entropy Production (MEP) principle holds.

        A system is reproducible if, for any pair of macrostates (A, B), A either always evolves to B or never does.

        Unfortunately, the climate system fails to comply with this condition. The butterfly effect means that even microstates belonging to the same macrostate can evolve into different macrostates in a short time.

        Therefore, theoretical understanding of chaotic non-equilibrium stationary thermodynamic systems is lacking.

        That’s why aerospace engineers intentionally design systems which can be operated far away from a chaotic regime. If it can’t be accomplished, as in the case of fuel sloshing in a microgravity environment, they apply something like an ullage motor, which brings the system back to a fully deterministic state before the main engine is restarted.

        Caption in the video you have linked to reads “This is fuel sloshing – Fuel must be settled before restart”.

        They do not need to predict statistical properties of a chaotic state, only its existence and the means by which it can be avoided. That’s a task completely different from the one in climate prediction.

        One does not need supercomputers to predict when and how the climate system enters a chaotic regime. It is always there, with a relative error of zero percent. And there is no way to switch it to a non chaotic regime, but even if there were one, it would be highly undesirable.

        That’s why one would prefer to see experiments with genuinely chaotic systems like the one described above before building computational models with no proper theoretical background.
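        The butterfly effect invoked above is easy to demonstrate numerically with the Lorenz-63 system, a standard toy model of chaotic flow (the parameters are Lorenz’s classical values; the integration scheme is deliberately crude, for illustration only):

        ```python
        # Two Lorenz-63 trajectories starting 1e-8 apart end up on entirely
        # different parts of the attractor. Forward-Euler integration with a
        # small step is enough for a qualitative demonstration.

        def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return (x + dt * sigma * (y - x),
                    y + dt * (x * (rho - z) - y),
                    z + dt * (x * y - beta * z))

        a = (1.0, 1.0, 1.0)
        b = (1.0 + 1e-8, 1.0, 1.0)       # perturbed by one part in 10^8

        max_sep = 0.0
        for _ in range(40000):            # integrate 40 model time units
            a, b = lorenz_step(a), lorenz_step(b)
            sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
            max_sep = max(max_sep, sep)

        print(f"max separation over 40 time units: {max_sep:.2f}")
        ```

        The initial difference of 1e-8 grows to order one and beyond, which is exactly the property that rules out the macrostate-reproducibility condition described above.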

    • A fan of *MORE* discourse

      Berényi Péter, your appreciation of the modeling of nonlinear chaotic far-from-equilibrium dynamical systems — and the appreciation of Climate Etc readers — will be enhanced if you focus your attention upon the above video’s infrared images of the spacecraft’s radiatively/regeneratively-cooled Merlin-class RP1/O2 engines …

      Merlin engines are the highest-efficiency hydrocarbon-oxygen engines ever flown. They provide a terrific example of how stable macroscopic performance parameters (the engine’s “climate”) emerge from chaotic microscopic dynamics (the engine’s combustion processes) … and can be reliably predicted by large-scale simulation codes.

      There’s a lot going on, thermodynamically speaking … so crank-up the speaker-volume, folks!

      So much so, that modern NASA-type aerospace engineering would be utterly infeasible without large-scale simulation codes!

      Conclusion  We know from basic thermodynamics that SpaceX Merlin engines will run hot … exactly how hot requires detailed dynamical simulations verified by careful analysis of observational data. Just as we know from basic thermodynamics that CO2-heated planets will run hot … exactly how hot requires detailed dynamical simulations verified by careful analysis of observational data!

      Good on `yah NASA … for developing incredible dynamical simulations of rocket engines *AND* planets!


      • Berényi Péter

        Well, if you are so knowledgeable about climate issues, tell me something please.

        1. What is the average net entropy production flux of the terrestrial climate system (in W/m²K)?
        2. According to computational climate models, what is supposed to happen to the net entropy production if atmospheric CO₂ concentration is increased?
        a) increases
        b) decreases
        c) remains unchanged
        3. What does the mathematical expression connecting CO₂ concentration to entropy production look like?

        These are entry level questions about a heat engine radiatively coupled to its environment.
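        For reference on question 1, a common back-of-envelope estimate treats the planet as absorbing roughly 240 W/m² at the solar radiation temperature and re-emitting it at an effective ~255 K. All numbers below are illustrative round values, and photon-transport refinements (e.g. the 4/3 entropy-flux factor) are ignored:

        ```python
        # Back-of-envelope planetary entropy production: radiation absorbed at
        # the (high) solar temperature is re-emitted at Earth's (low) effective
        # temperature. All inputs are illustrative round numbers.

        absorbed_flux = 240.0   # W/m^2, global-mean absorbed solar radiation
        t_sun = 5778.0          # K, effective solar photospheric temperature
        t_earth = 255.0         # K, Earth's effective emission temperature

        entropy_production = absorbed_flux / t_earth - absorbed_flux / t_sun
        print(round(entropy_production, 2))  # ~0.9 W/(m^2 K)
        ```

        This gives the order of magnitude only; how that figure responds to CO₂ (questions 2 and 3) is precisely what the comment argues is not settled.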

  79. Just saw these: http://www.washingtonpost.com/news/energy-environment/wp/2015/02/04/no-climate-models-didnt-overestimate-global-warming/

    from above: http://www.nature.com/articles/nature14117.epdf?referrer_access_token=MuASyFxPdWRRwNipiZjJf9RgN0jAjWel9jnR3ZoTv0NQ9JoZh4DL5B4vJaZ-VA15G0I06smVMlMxv7BmIGMvuLn6ePpwk__RNSeBhMAB46uU6iOwpSuEI62pSLGNOARL

      • “Our estimates of climate sensitivity lie well within the range of 1.5 to 4.5ºC increase per CO2 doubling summarised in the latest IPCC report. This suggests that the research community has a sound understanding of what the climate will be like as we move toward a Pliocene-like warmer future caused by human greenhouse gas emissions.”

        Sound understanding of what the climate will be like within the range of 1.5 to 4.5 C increase. That is a pretty big range. If it is closer to the 1.5 then that is a very different answer than the 4.5 one. I have a sound understanding that the stock market will be within a large range. Does this do me any good? Likely not.

      • ATANDB,

        One unanswered question (it may be in the study, but it’s paywalled) is what caused the “rapid drop” to 280 ppm discussed in the 8th paragraph down.

        Seems like we don’t have an understanding of the cause of the hiatus.

        Not any detail here either, other than pointing towards albedo: “We find that climate change in response to CO2 change in the warmer period was around half that of the colder period. We determine that this difference is driven by the growth and retreat of large continental ice sheets that are present in the cold ice-age climates; these ice sheets reflect a lot of sunlight and their growth consequently amplifies the impact of CO2 changes.”

    • “But maybe they simply can’t, due to the random ways in which climate can temporarily fluctuate. That doesn’t mean that climate models aren’t valuable to us. They still give us good sense of the long-term picture, the one that is more important for us to worry about anyway: that temperatures are increasing, and that natural factors can’t explain this increase.”

      That last bit, “that natural factors can’t explain this increase”, is based on the models that can’t quite get natural variability but supposedly aren’t really wrong. A lot of the reason they can’t get natural variability is that they don’t work. So we have a circular reasoning contest.

      Toggweiler, J. R. is one of the GFDL ocean modelers who does a lot of paleo. According to him, the combination of the Drake Passage opening and Panama closing created about 3C of global cooling and a shift in temperatures to a warmer NH and cooler SH. That was about 3C of cooling that didn’t require CO2; in fact, it contributed to the reduction of CO2. James Hansen, I believe, said that was not possible, i.e. natural factors cannot explain the change.

      I tend to agree that climate models can be useful; it is just that some of the climate modelers aren’t all that useful, especially when they say the low end of the range is “well within the range 1.5-4.5C”. The useful climate modelers appear to be associated with the GFDL or are Chinese, generally from areas that ignore Hansen.

      • They are right of course. As long as you ignore any natural factors that could explain the increase, there are none.

      • Captain

        Could the models’ problem with natural variability/internal variability be partially due to the hockey-stick-like pages2k proxy reconstructions? I’ve been reading Rosenthal, Braddock K. Linsley, and Delia W. Oppo (2013), and they reach conclusions similar to those of Oppo et al. 2009 that are in sharp contrast to pages2k. Here are some excerpts:

        1. “The reconstructed OHC is compared with modern observations for the whole Pacific at the same depth range (5). The comparison suggests that Pacific OHC was substantially higher during most of the Holocene than in the past decade (2000 to 2010), with the exception of the LIA.”

        2. “The inferred similarity in temperature anomalies at both hemispheres is consistent with recent evidence from Antarctica (30), thereby supporting the idea that the HTM, MWP, and LIA were global events.”

        The following excerpts lead me to asking a question:

        1. “The modern rate of Pacific OHC change is, however, the highest in the past 10,000 years.”

        2. “The similar IWT trends observed between 480 and 900 m suggest that on orbital time scales, changes in Pacific OHC are largely determined by climate changes in the high latitudes that possibly respond to changes in the tilt of Earth’s axis since the early Holocene (24).”

        My question is this:

        Is there any explanation for the rapid increase in Intermediate Water Temperature (IWT), used in this study to measure OHC, other than solar/orbital?

        Here’s a link to the study: https://marine.rutgers.edu/pubs/private/yair_2013.pdf

        Thank you,

        Richard

      • rls, “Could the models’ problem with natural variability/internal variability be partially due to the hockey stick-like pages2k proxy reconstructions? ”

        Yep. You may not “tune” models, but they do need initial conditions, and the “hockey sticks” indicate that initial conditions were as smooth as a baby’s butt. So if you “initialize” the models to, say, 1880 to 1910 and assume that is “normal”, you are likely to start on the low side, meaning you overestimate sensitivity. They might settle out in a few hundred years, but not anytime soon.

        The model defenders will say they are “boundary value” problems, but water vapor is an initial value problem. Unless you have SSTs close, you will miss precipitation, have a higher sensitivity, and not get absolute temperatures: pretty much everything they are doing wrong.

        Climate Explorer has most of the model runs and allows some masking, so you can compare model temperature with actual temperature. Most of the models miss tropical temperatures by about 2 degrees C, so you would have about 2 degrees more “warming” than should be realized baked in at the start. The only model I saw with tropical SST pretty close has a low sensitivity.

        This Chinese model has about the best fit of any, including a smaller miss around 1910 to 1915; it gets the “pause” well and projects about 0.6 C more warming by 2100 under the RCP4.5 scenario. Throw in +/- 0.75 C as error bars and that is about as good as it gets. The models can actually do a pretty fair job.

        “(2) Decadal prediction on 10-30 yr time scale (decadalXXXX): Based on the set of simulations, Gao et al. (2012) evaluated the model’s prediction capability in regional and global surface temperatures on decadal time scale, and aimed to explore their dependences on the initial observed states of ocean in comparison with the historical experiment (historical).”

        http://forecast.bcccsm.ncc-cma.net/web/channel-43.htm

        Mosher mentions all the time how badly the models do with absolute temperature but seems to look the other way when people talk about what impact that has.

      • rls, on the other part, Rosenthal, Oppo and Linsley seem to do a pretty good job: one PR eureka moment at a bad time, but on the whole very good.

        Ford has an Eastern Pacific “sub-surface” temperature reconstruction that shows some in- and out-of-phase warming/OHC change that I haven’t seen many try to address. Toggwieler also has some papers on ocean circulation changes impacting “ventilation”, so I would have to hold off on saying too much about that, since that is where serious ocean modeling will be required.

      • “Mosher mentions all the time how badly the models do with absolute temperature but seems to look the other way when people talk about what impact that has.”

        Sorry, I don’t look the other way.

        Further, getting the temperature wrong can have ZERO impact for some uses.

        You cannot discuss the impact of errors in a vacuum.

        Period.

        You must specify a use.

        You haven’t.

      • Steven Mosher, “Further, getting the temperature wrong can have ZERO impact for some uses.

        You cannot discuss the impact of errors in a vacuum.

        Period.

        You must specify a use.

        You haven’t.”

        Mentioned it quite a bit, actually: water vapor. Right now I am looking at one of the “parameters” that never gets tuned, convective triggering. There are quite a few parameters that depend on absolute temperature, Clausius-Clapeyron for example. I actually started looking at tropical SST when I read a paper by Solomon where she was sure SST couldn’t have anything to do with the changes in stratospheric ozone and water vapor. When a climate scientist says “it can’t be that”, that is where you start, right?

        The CMIP5 model mean is about 1.8 C low; you get that when you adjust it to match starting temperatures. Now, if you are into clouds, water vapor and all that stuff that are just little ol’ “boundary” conditions, you wouldn’t want to miss absolute temperature in the tropics.
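        The Clausius-Clapeyron point above can be made concrete with a back-of-envelope sketch. This is only an illustration: the Magnus approximation is a standard empirical fit, but the specific temperatures (28.0 C observed, a 1.8 C cold bias) are illustrative choices, not values from any particular model.

```python
import math

# Saturation vapor pressure rises roughly exponentially with temperature
# (~7% per K near the surface), so an absolute cold bias in tropical SST
# translates into a substantial water-vapor shortfall in a model.

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation to saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

e_obs = saturation_vapor_pressure(28.0)    # illustrative observed tropical SST
e_model = saturation_vapor_pressure(26.2)  # same point, model running 1.8 C cold

deficit_pct = 100.0 * (e_obs - e_model) / e_obs
print(round(deficit_pct, 1))  # roughly a 10% shortfall in saturation pressure
```

        So a model that starts a couple of degrees cold in the tropics is also starting with roughly ten percent less water vapor to work with, before any feedback even operates.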

      • You need to publish a paper, Dallas. Your tropical SST limit can become known as the “Dal Threshold,” 27.4 C.

      • r. graf, “You need to publish a paper, Dallas. Your tropical SST limit can become known as the “Dal Threshold,” 27.4 C.”

        pttt, it is basic thermo, water evaporates.

    • Matthew R Marler

      check out the tweet from Marcel Crok on the upper right of the page ( Thu, 2/5, at 3:18 Pacific Time)

      • “Just because three of the models successfully backcasted something or another does not imply they did so because they are good models. This can happen by chance and you have to prove it wasn’t simply chance.”

        No, you actually don’t have to do that.
        In fact you can’t do it.
        For any model.

        There is no proof in science.

        More importantly, USING a model isn’t doing science.

      • There is no proof in science.

        A semantic quibble. Science doesn’t “have” “proof” in the sense of Euclidean Geometry, but it has an effective equivalent. From Wiktionary: “Prove”:

        From Middle English proven, from Old English prōfian (“to esteem, regard as, evince, try, prove”) and Old French prover (“to prove”), both from Late Latin probō (“test, try, examine, approve, show to be good or fit, prove”, verb), from probus (“good, worthy, excellent”), from Proto-Indo-European *pro-bhwo- (“being in front, prominent”), from Proto-Indo-European *pro-, *per- (“toward”) + Proto-Indo-European *bhu- (“to be”). Displaced native Middle English sothen (“to prove”), from Old English sōþian (“to prove”). More at for, be, soothe.

        Or, “proof-marks” on armor (Renaissance), “the proof is in the pudding”, “the exception proves the rule”, etc.

      • Matthew,

        Thank you! And please, by all means be direct with me. I appreciate you.

        I asked before in a post to Bob and Steven Mosher, so will ask again here. If models are indeed not validated by “reality” (according to Steven), at what point are they invalidated?

      • Let me give you an example.

        Suppose I am going to do a study on heat waves.

        1. I select a criterion for the hindcast: the model has to be accurate to within +/- 25%.
        2. I look at 5 models.
        3. 2 of the 5 meet my criterion; one is 20% low, the other is 15% high.
        4. The other three are invalidated for my use.

        Validity of a model is determined by the user and the use.
        Different user, different use: validity changes.

        No model is valid for all purposes.

        There is no getting around the question of who is using the model for what purpose.
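        The use-relative notion of validity in the numbered steps above can be written down directly. The model names and error figures below are hypothetical, chosen only to match the 2-of-5 example:

```python
# A model is "valid" only relative to a user's criterion for a given use.
# Here the use is a heat-wave study and the criterion is a hindcast error
# within +/- 25%.

def validate(hindcast_errors, tolerance):
    """Return the subset of models whose |error| is within tolerance."""
    return {name: err for name, err in hindcast_errors.items()
            if abs(err) <= tolerance}

# Hypothetical fractional hindcast errors for five models.
hindcast_errors = {"A": -0.20, "B": 0.15, "C": 0.40, "D": -0.35, "E": 0.30}

valid = validate(hindcast_errors, 0.25)
print(sorted(valid))  # ['A', 'B'] -- two of five meet this user's criterion
```

        Change the tolerance or the target quantity and a different subset of models becomes “valid”, which is exactly the user-and-use dependence being described.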

      • Steven,

        Thank you. That makes complete sense. You’re talking of providing the tool and the user selects the use. One can use a screwdriver to drive a screw or pry open a paint can. I have a better grasp once again thanks to your assistance.

      • Lewis’s piece is a little odd. First he goes into all sorts of esoterics, but buried at the end he agrees with what most would regard as Marotzke’s main conclusion: that in 15-year periods, natural variability likely dominates over systematic trend differences. Accepting that, it would be statistically unfounded to assert that the models are overheating in a 15-year period, so this unravels Lewis’s whole argument against the “unfounded” statement of the paper. The Washington Post article and the Max Planck piece both emphasize this 15-year noise point.
        http://www.washingtonpost.com/news/energy-environment/wp/2015/02/04/no-climate-models-didnt-overestimate-global-warming/
        and
        http://www.mpimet.mpg.de/en/kommunikation/aktuelles/forschung-aktuell/das-plateau-in-der-globalen-temperatur.html

      • JIm D,
        Thank you. The Max Planck piece was more concise to this reader. Still digesting the circularity, but that will come over time. One must learn enough to ask better questions. My impression is that the accuracy is “off” due to missing feedbacks, or parameters for the feedbacks. And it’s complex.

      • Jim D,

        Either you didn’t read Nic’s analysis of the paper properly, or you did but you’re scratching around for any reason, no matter how desperate, to defend Marotzke and Forster’s original conclusions.

        Nic has eviscerated the paper, mathematically speaking. If you think he hasn’t, please post up where his criticisms fail.

        As to the closing 15-year noise comment that you pick up on, Nic actually says this:
        “One of Marotzke’s conclusions is, however, quite likely correct despite not being established by his analysis: it seems reasonable that differences between simulated and observed trends may have been dominated – except perhaps recently – by random internal variability over the shorter 15-year timescale.”
        In other words it is very clear that Nic is certainly not arguing against his own conclusions.

        If that’s all you can find to object to, you don’t have much, do you?

      • Jim D. has selective vision and hearing.

      • Jim2,

        Help me here, please. Jim D provided this: http://www.mpimet.mpg.de/en/kommunikation/aktuelles/forschung-aktuell/das-plateau-in-der-globalen-temperatur.html
        and the money shot is the last sentence. I’m not capable of doing the physics, but I can read and would appreciate your critique of the analysis.

      • SM says …
        “validity of a model is determined by the user and the use
        different user, different use, validity changes.”

        Just because three of the models successfully backcasted something or another does not imply they did so because they are good models. This can happen by chance and you have to prove it wasn’t simply chance.

      • Jim2,

        My difficulty is that the Max Planck folks have a completely different take. Who does one trust, and why? I have no quarrel with Nic Lewis’ work as far as I can take it (and it’s not far). But the Planck folks are about 180 degrees out. Why is their evaluation reaching a different result? I’m “the public”, so who do I trust? And why? Can you offer insight, or are you just trusting Nic Lewis? If so, why? If you don’t trust Planck, I’d really appreciate knowing why. I’ve no reason not to trust you, other than that we don’t see eye to eye on all things. But would you take the word of an anonymous guy in some blog (me) over the “experts”? I have two competing, opposite reviews of a paper and can’t do the stats myself. Teaching opportunity here.

      • The piece cited by Jim D. does not do a critical analysis of the paper; it is just regurgitating the result. It adds no value.

      • Jim2,

        I get that that is your assertion. But to what end? Why would Planck even make a statement? If invalid, it hurts their reputation. There is no benefit to reiterating, and in fact it only creates risk and reduces credibility.

        I can see the circular logic argument in the paper (abstract even) so that has my radar up. I cannot dispute the stats myself.

        I have no reason not to trust Nic Lewis. I’d even lean the opposite way from what I’ve seen. But I have no reason not to trust the Planck folks either (though I’m not as familiar with them). Any ideas why they’d go for all risk and no reward? It doesn’t appear to be a political organization. If ya don’t know, I get it. Just puzzling.

        As always, thank you.

        This makes no sense.

      • “If models are indeed not validated by “reality” (according to Steven) at what point are they invalidated?”

        Climate models can’t be invalidated (at least not their predictive ability). That’s why climate modelling isn’t science.

      • Nickels,

        I guess it’s more appropriately considered mathematics?

      • Danny, read Ross McKitrick’s breakdown of Nic Lewis’s post @ climateaudit.org. It’s a little ways down in the comments.
        ================

      • Kim,

        I saw that last evening but, lacking a stats background, it’s a bit over my head. It will take some digesting, but I will do my best. I appreciate the reference!

        Best!

      • Danny, yes, the main statement from Marotzke is “The claim that climate models systematically overestimate the warming caused by increasing greenhouse-gas concentrations therefore seems to be unfounded.” The basis for this is that 15-year trends contain more natural variability than climate response, so you can’t say that climate models overestimate the greenhouse warming effect when natural variability has likely played a role in reducing the warming. The 15-year trend has varied from 0.3 C per decade to near zero within the last 15 years alone, and this type of variation has been seen in 15-year trends since 1900. How can we tell that the climate models’ 0.2 C per decade is running hot based on that? It is a very simple point that Marotzke is making. Saying that 0.2 C is a systematic overestimate, when only in the last 15 years it has been both more and less than that, does seem to be unfounded. It is that simple, and Lewis was just obfuscating.
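        How noisy 15-year trends are is easy to check with a toy series. The underlying trend (0.02 C/yr, i.e. 0.2 C per decade) and the year-to-year noise level (0.1 C) below are illustrative assumptions, not observed values:

```python
import random
import statistics

# Generate a century of annual temperatures: a fixed underlying trend plus
# year-to-year noise, then look at the spread of rolling 15-year trends.

random.seed(1)
true_trend = 0.02  # deg C per year, i.e. 0.2 C per decade (illustrative)
temps = [true_trend * yr + random.gauss(0.0, 0.1) for yr in range(100)]

def ols_slope(series):
    """Ordinary least-squares slope of a series against its index."""
    n = len(series)
    x_mean, y_mean = (n - 1) / 2.0, statistics.mean(series)
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

window_trends = [ols_slope(temps[i:i + 15]) for i in range(100 - 15 + 1)]
print(round(min(window_trends), 3), round(max(window_trends), 3))
# The 15-year windows scatter well above and below the true 0.02 trend.
```

        Even with a perfectly constant underlying trend, individual 15-year windows spread widely around it, which is the sense in which a single 15-year period is a weak test of a model’s long-term warming rate.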

      • JimD,

        I’m at a complete disadvantage in that I cannot evaluate the statistics. But in what way does the statement by Prof. Hughes and Murieka that the statistics are “fatally flawed” lead you to say that Nic Lewis is obfuscating? If your suggestion turns on the statistical gymnastics, I cannot comment with any value.

        If the statistics used to reach the conclusion are flawed, the conclusion could indeed be correct, but the methodology is detailed as inaccurate. Do you see a problem with the statistical review?

        This seems pretty clear: “However, there is an even more fundamental problem with Marotzke’s methodology: its logic is circular.”

        And there does appear to be a charitable comment: “One of Marotzke’s conclusions is, however, quite likely correct despite not being established by his analysis: it seems reasonable that differences between simulated and observed trends may have been dominated – except perhaps recently – by random internal variability over the shorter 15-year timescale.”

        Heck, even I get one right every once in a while but I’d never put it forth in a paper. :)

      • Oh, little jimmy dee. You better rush over to CA and give those guys what for. Nicky Racehorse needs your help. His defense for marotzke is:

        “You can get the paper here.”

      • In his whole post, Lewis makes very little mention of the size of the 15-year natural variability, except to support the idea that it may have dominated, which was the main point raised by Marotzke. I think Lewis missed the point that was made clear by the press release and a good WaPo summary, and instead dug into sidebar issues on long-term trends in models that had no bearing on the most often quoted concluding remark. It is very unfortunate he only made this indirect supporting statement about 15-year natural variability.

      • Danny, they are not talking about the statistics related to the Max Planck summary of the results, but about something deep in the Nature paper. I don’t know what their complaint about that is, but as for the conclusion that 15 years is too noisy to draw conclusions from, they haven’t addressed that; if anything, Lewis grudgingly supported it with his only statement about 15-year trends.

      • Well, why don’t you go over to CA and straighten out Nic, Ross, Roman et al, yimmy? I am sure they will be gentle with you. Got any guts, yimmy?

  80. “A multinational research team, led by scientists at the University of Southampton, has analysed new records showing the CO2 content of the Earth’s atmosphere between 2.3 to 3.3 million years ago, over the Pliocene.”

    Odd that they like discussing a warmer period millions of years ago but not a warmer period just a few thousand years ago. Pity we actually know a fair bit about higher sea levels a mere two thousand years back in time. And eight thousand years back. And lower sea levels four thousand years back. (Not that our wildly fluctuating sea levels and sea ice have to be tied exactly to overall temps, but we’ve had some warmer and meltier episodes than this one, and not too long ago. At least, that used to be a ho-hum consensus till a much needier and more political consensus came on the scene.)

    But denying the very nature of the Holocene by avoidance or issue-blurring is at the core of climate alarmism. The most cursory glance at the constant ups and downs of our present geological epoch just ends the whole conversation, doesn’t it? So that mastodon in the Volkswagen is ignored.

    Well, when you are fielding MULTINATIONAL teams you can’t just call the game off.

    Let a thousand climate conferences blossom!

    • Yes. It only makes sense to look back to the Pleistocene and earlier Holocene to see all the fluctuation in CO2 and temperature that you could need to test a model. Why go further back, when new variables have to be accounted for, like continental drift and the closure of ocean circulation? Though, as Dallas pointed out, there is a nice independent clue seen there in how that restriction in inter-ocean circulation brought down GMT 3 C.

  81. Willis Eschenbach

    There’s much discussion here of whether the global climate models (GCMs) are tuned, and if so, are they tuned to match the historical record.

    As to the first question, there are many statements by modelers that say that various parameters of the models are tuned. Here’s Gavin Schmidt discussing the GISS Model E GCM, as an example:

    The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within ±0.5 W/m² of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.

    Note that this is not tuning some trivial part of the model. Without this tuning the modeled system is not energetically balanced, either losing or gaining more energy than is impinging on the system. Without this tuning the model would spiral into either heat death or snowball … which should be enough to toss the model out right there, because the real world is nowhere near that sensitive.

    As to being tuned to the historical data, all the models are tuned to it, but in an “evolutionary” rather than a direct fashion. By that I mean a model is built, and the only way we have to test it is to compare it to historical data. If a given change in the model makes it fit that historical data better, the change is retained in future incarnations of the model, while changes that make the historical fit worse are discarded. After many, many iterations, eventually we end up with a model that reproduces historical data in some fashion.
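    This “evolutionary tuning” loop can be sketched in a few lines. Everything here is a toy: the one-parameter model, the forcing, and the warming target are all hypothetical, chosen only to show the select-and-retain mechanism being described:

```python
import random

# Keep any random parameter change that improves the fit to the historical
# record; discard any change that worsens it. After enough iterations the
# model matches history regardless of whether its internals are right.

random.seed(0)
historical_warming = 0.8  # illustrative observed warming to match (deg C)
forcing = 2.0             # assumed forcing; toy model: warming = s * forcing
sensitivity = 1.0         # initial guess for the tunable parameter

def misfit(s):
    """Absolute error of the toy model against the historical record."""
    return abs(s * forcing - historical_warming)

for _ in range(1000):
    trial = sensitivity + random.uniform(-0.05, 0.05)
    if misfit(trial) < misfit(sensitivity):  # retain only improvements
        sensitivity = trial

print(round(sensitivity * forcing, 2))  # the tuned model reproduces ~0.8
```

    Note that nothing in the loop cares whether the retained parameter value is physically right; it only cares that the hindcast improves, which is the point being made about fits to the historical record.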

    This evolutionary tuning has several effects. First, it means that the fit to historical data is meaningless, because that’s what the models are tuned to fit.

    Second, it means that the IPCC argument that “the models can’t replicate the historical data without the anthropogenic forcings” is also meaningless. If you tune a model with a group of forcings, removing any one or more of them will result in a worse fit, duh.

    Third, it means that the aspects of the climate for which the model is not trained will often disagree greatly when compared with historical data. See precipitation as an example.

    Fourth, and crucially, this evolutionary tuning means that regardless of the forcings used as model input, a model can likely be found that reproduces the historical data. Or as Kiehl put it in his groundbreaking paper,

    One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5°C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

    The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

    And a good question it is. The answer is that the so-called “climate sensitivity” diagnosed by the climate models is merely a function of the size of the forcings … and that, of course, puts the lie to the idea that the models are “physics based”. Well, I guess they are “physics based”, but only in the same sense that a Hollywood blockbuster is “based on a true story”.

    w.
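    Kiehl’s puzzle can be made concrete with two hypothetical model configurations: what the historical anomaly constrains is the product of sensitivity and net forcing, not each factor separately, so very different sensitivities can fit equally well. All numbers below are illustrative:

```python
# Two toy "models": one low-sensitivity with strong net forcing, one
# high-sensitivity with weaker net forcing (e.g. a larger aerosol offset).
# Both reproduce the same historical anomaly.

models = [
    {"name": "low-sensitivity",  "sensitivity": 0.4, "net_forcing": 2.0},
    {"name": "high-sensitivity", "sensitivity": 0.8, "net_forcing": 1.0},
]

simulated = [m["sensitivity"] * m["net_forcing"] for m in models]
print(simulated)  # [0.8, 0.8] -- identical hindcasts, sensitivities differ 2x
```

    A good match to the historical record therefore cannot, by itself, distinguish between the two configurations, even though their projections under future forcing would diverge sharply.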

    • Willis,

      Thank you for that explanation. It makes sense even to one such as I, without a science or computer background. If I get the drift, it’s much like taking two different roads and ending up at the same destination. One might trust that the destination is the same (a match via backtest), but going forward, if a different parameter is used than what actually happened historically, the forward result will not likely be accurate.

      Am I close?

    • Matthew R Marler

      Willis Eschenbach: There’s much discussion here of whether the global climate models (GCMs) are tuned, and if so, are they tuned to match the historical record.

      The statement challenged by Steven Mosher and explored by me was the following by Rud Istvan: Parameter sets for CMIP5 were selected to give the best 10, 20, and 30 year hindcasts per the experimental design. See Taylor et al., BAMS 93: 485-498 (2012), open access online. For parameterization, see the technical documentation to NCAR CAM3, available free online as NCAR/TN-464+STR (2004).

      What you quote from Gavin Schmidt is a bit different: The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within ±0.5 W/m² of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.

      That is, tuning to “get the physics right”, or what they think the right physics is (TOA balance.) That applies to things that have not yet been accurately measured; other parameters such as the density and specific heat of dry air at STP are set to their tabulated values. Granted, that leaves a lot of free parameters that could be tuned to get the best 10, 20, and 30 year hindcasts, but I could not find anything in the source cited by Rud Istvan that supported his claim.

      • Willis Eschenbach

        Thanks, Matt. Please re-read what I said about “evolutionary tuning”. It is the only reasonable explanation I know of for the “Kiehl paradox”, viz:

        The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

        If you have an answer for that question that does NOT involve tuning, either “evolutionary tuning” or any other kind of tuning, now’s your opportunity.

        Best regards to you, your comments are always of interest.

        w.

      • Matthew R Marler

        Willis Eschenbach: If you have an answer for that question that does NOT involve tuning, either “evolutionary tuning” or any other kind of tuning, now’s your opportunity.

        I have a suspicion that undocumented tuning may have been done. I began my contribution to this topic by challenging Mosher’s one-word dismissal of a claim by Istvan, and so far I have not been able to find Istvan’s claim supported by the reference that Istvan cited.

      • Matt, I kept questioning you because I was thinking along the same lines as Willis:

        “As to being tuned to the historical data, all the models are tuned to it, but in an “evolutionary” rather than a direct fashion. By that I mean a model is built, and the only way we have to test it is to compare it to historical data. If a given change in the model makes it fit that historical data better, the change is retained in future incarnations of the model, while changes that make the historical fit worse are discarded. After many, many iterations, eventually we end up with a model that reproduces historical data in some fashion.”

        I think you have Rud on a technicality, but the models are tuned to historical data by the Darwin process. Sort of how the climate sigh-entists select temperature proxies. I think it’s around the 17:00 mark of the video Mosher recommended that there is a hint of that process. I think the guy calls it filtering.

  82. Group  of  physicists

    You can forget about the garbage “science” which claims that the Sun’s radiation plus radiation from a planet’s colder troposphere somehow explains the planet’s surface temperature. It doesn’t. Not on Earth. Not on Venus and certainly not at the base of the nominal troposphere of Uranus where there is no solar radiation or surface, yet it’s hotter than Earth’s surface down there.

    You can forget the garbage “science” which then claims the temperature gradient (aka “lapse rate”) is caused by rising parcels of air that could only be held together by wind in all its forms. Such a process does not form the expected temperature gradient. Only the very slow diffusion process does so – that same process that you see when your car has been heated up by the Sun in your driveway and you then drive it into your garage, close the garage door and open all the car doors. That is not wind even if you can detect very slow advection due to net molecular motion.

    What does happen on every planet is that radiative balance is attained with the Sun, though in detail the planet cools on its dark side and warms back up by the same amount on its sunlit side. The temperature gradient is formed at the molecular level due to the force of gravity acting on molecules as they move between collisions. We see physical evidence of a (centrifugal) force field redistributing molecular (micro) kinetic energy and producing hot and cold streams of gas in a Ranque Hilsch vortex tube.

    So radiative balance sets the overall mean temperature in the troposphere and then gravity induces a temperature gradient. That gradient is then reduced in magnitude by intermolecular radiation between molecules of water vapor and other so-called greenhouse gases. We know water vapor reduces the gradient, so the thermal profile rotates downwards at the surface end. It is ludicrous to think that water vapor raises surface temperatures by most of “33 degrees” when in fact it lowers them and empirical evidence confirms this. There’s more evidence here.

    • “You can forget about the garbage “science” …

      First, I share your skepticism of Hansen and others’ surety about their thermodynamic physics in the real-world atmosphere and oceans. One clue is that greenhouses do not work by the Greenhouse Effect. After the plants and dirt in the greenhouse convert the visible and UV radiation to IR, the water vapor and other gases absorb most of that heat by radiation plus convection. The energy emitted by the gas and plants in the IR that makes it to the glass actually escapes, since traditional glass is mostly transparent in the IR (though not low-E glass). The real trick of the greenhouse is the glass providing a barrier to convective heat loss (air transfer).

      That said, I agree with all of your linked article’s physics arguments that I can follow, but I am skeptical of the claim that GHGs, including water vapor, are a cooling influence on the surface temperature. (I am a lukewarmer.)

      I suspect that the GCMs are likely discounting the Second Law a bit in its ability to find ways to disperse heat back into the cosmos. And I agree also that GHGs may have some shading effect against the sun’s direct rays, and possibly some refracting effect on the very low angle rays. The reason for the first is that CO2 has a significant band at 2 microns, and water vapor has bands at many shorter wavelengths that sit in a significant part of the sun’s spectrum.

      On the outgoing heat, I think the GHGs are indeed a small added barrier, but the cooling is spread over double the time and over double or more the surface area, due to diffusion toward the poles.

      BTW, I also think the 10C ice age fluctuations with CO2 levels lagging temp change like a turtle on a leash would have even Arrhenius scratching his head.

    • @ Physicist, I replied further down the string with some examples to see if I am following you.

  83. Mark B (number 2)

    I remember, a few years back, Dr Vaughan Pratt coming on here with an astonishingly complex model which could describe climate change to within 0.1 degree C. Unfortunately, he said that it couldn’t make any future predictions, due to unforeseen circumstances (such as volcanoes erupting).
    I was going to download his program and data onto my laptop, but he said that the whole thing was too complex for OpenOffice spreadsheets to handle and had to be in Excel only. I don’t have Excel, so I cannot judge how accurate his model was. But I would like an update on how well it is working. If it has required (even more) tinkering to get the required results, I would be a bit disappointed.

    • Vaughan Pratt

      Mark B (number 2): I remember a few years back Dr Vaughan Pratt coming on here with an astonishingly complex model

      Get used to astonishment. With regard to complexity, the CMIP5 models are somewhere between a thousand and ten thousand times as complex as the one-line formula I gave.

      Unfortunately, he said that it couldn’t make any future predictions, due to unforeseen circumstances (such as volcanoes erupting).

      Fortunately I didn’t say that.

      I don’t have excel, so I cannot judge how accurate his model was. But I would like an update as to how well it is working.

      Update: it is working just as well now as then. Any other questions?

      • With regard to complexity, the CMIP5 models are somewhere between a thousand and ten thousand times as complex as the one-line formula I gave.

        And they showed that they cannot reproduce the NH dynamics following a tropical volcanic singularity such as Pinatubo.

        http://onlinelibrary.wiley.com/doi/10.1029/2012JD017607/pdf

        Interestingly, they have not increased their level of skill above that of the CMIP3 models.

  84. Basil Newmerzhycky

    Honestly, this student’s thesis is quite ignorant, to put it mildly. I think the reason skeptics find this so fascinating is that they would rather “burn books” by doing away with GCMs than confront climate change.

    The way ANY computer model improves is by discovering weaknesses and improving the equations and coefficients to correct them. That, combined with steadily increasing computer power, produces steady improvement in accuracy. That’s how the medium-range weather forecast models (15 days) improved tremendously over the past 30 years, and even more so in the past few years. Tweaking model equations after noticing weaknesses, and using the same model in multiple runs with slightly different coefficients to derive an “ensemble mean” or “spaghetti solution”, have vastly improved models. I’m sure a PhD student could have written a thesis in the 1970s or 80s claiming that models were useless for forecasting beyond 5 days and we should go in a different direction. Today that student would have been proved totally wrong. Do some forecasts beyond 4 days get missed today? Yes. But overall model accuracy has improved to the point where the average 4-5 day accuracy threshold of the mid 80s has more than DOUBLED today, to the 8-10 day range.
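    The “ensemble mean” idea mentioned here is simple to sketch: average several runs made with slightly perturbed coefficients, and the mean typically beats a typical individual member. The forecast values and the verifying “truth” below are made up for illustration:

```python
import statistics

# Five perturbed-coefficient runs of the same model, all forecasting the
# same quantity (say, a day-10 temperature anomaly), plus the verifying
# value that was eventually observed.

member_forecasts = [1.2, 0.9, 1.5, 1.1, 0.8]
truth = 1.0

ensemble_mean = statistics.mean(member_forecasts)
mean_member_error = statistics.mean(abs(f - truth) for f in member_forecasts)
ensemble_error = abs(ensemble_mean - truth)

print(round(ensemble_mean, 2))  # 1.1
# The ensemble-mean error is smaller than the average member error.
print(ensemble_error < mean_member_error)  # True
```

    Averaging cancels part of the members’ independent errors, which is why ensemble means and spaghetti plots became standard in medium-range forecasting; the spread among members also gives a rough measure of forecast uncertainty.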

    The fact that this student was allowed to continue with this thesis just goes to show the leniency and generosity of the faculty, who did not want to discourage a hard-working, albeit misguided, student.

    Judith, with all due respect, the reason this study would not happen in the US, or would not get published in a serious journal (just the blogosphere), is not any conspiracy theory; it’s simply because it’s utter nonsense.

    • How much has tweaking and more computer power improved 60 day weather forecasts over the last thirty years, Basil?

      • Read the thesis again, Basil. It’s not about models for short range weather forecasts. Duh!

      • Basil Newmerzhycky

        Huh? Who said anything about 60 days? Actually, producing an accurate 15-day dynamic weather forecast across the globe is more challenging than producing a 15-year GCM climate run, because medium-range forecast models like the GFS (not sure if you have enough of a meteorology background to know about it) are much more subject to chaotic effects than a closed system in a GCM run. There are some chaotic effects in climate models, but they occur much more gradually, along much longer time intervals.

      • Very revealing, Basil: "Actually producing an accurate 15 day dynamic weather forecast across the globe is more challenging than producing a 15 year GCM climate run, because medium range forecast models like the GFS (not sure if you have enough of a meteorology background to know about it) are much more subject to chaotic effects than a closed system in a GCM run."

        We know about the difficulty in producing an accurate 15 day weather forecast. We also know that you can throw the kitchen sink into a random 15 year GCM run, and nobody will give a squat. We are paying for tons of GCM climate runs. They are running all over the place, like a freaking herd of cats. Point out the top 5 accurate ones for us, bayzil. The GCMs are toys and they are not fit for policy making. Get back to us in 30 years.

      • Notice how bayzil introduces the analogy with weather forecasts and then goes all Huh, when I talk about 60 day forecasts. What a character.

      • Don

        “We are paying for tons of GCM climate runs. They are running all over the place, like a freaking herd of cats. ”

        Yeah, but they pay the rent for someone.

      • The majority of the Statement Analysis techniques are based on word definitions. Every word has a meaning. When you combine this with the fact that people mean exactly what they are saying, it then becomes possible to determine what a person is telling you and if the person is being truthful.

        I can’t tell if the above statement is a lie, or a self-deception, or simple ignorance of how language really works. On second thought, it’s true (they “mean exactly what they are saying”) but it’s not what a typical reader would think. It’s possible to throw boxcars 2,000,000 times in a row. Not bloody likely, though.

        Hopefully, your link to that gobbledegook was meant as a joke.

    • Basil,

      I don’t think his thesis was about meteorology. Surely one can be a skeptic about hockey-stick graphs and 40-year models of CO2 sensitivity and still appreciate the good work of our weather forecasters, or any other use of computer modeling where the track record clearly shows there is value.

      I support GCMs if they are applied to tasks with fewer unknowns or less compounding error. I understand they have been used by NASA to model Martian weather. I think that would be a great place to test their extended forecasts, since it is a less complex, dry atmosphere. (And there are no political career motivations in the outcome of the predictions. ;)

      • I think bayzil has gone home with his ball and his strawmen. Quite a coincidence, I googled his name and found that he got a climate degree of some sort from the University of Mars.

    • “Thats how the medium range weather forecast models (15 days) improved tremendously over the past 30 years. and even more so in the past few years.”

      In 1982 the error-doubling rate in the ECMWF data was around 2 days (Lorenz). Since then, the error-doubling time has nearly doubled, due to improved, heavily parametrized weather models running on faster, more sophisticated computers (Nicolis and Nicolis 2009).
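
Assuming simple exponential error growth with a fixed doubling time, the usable forecast horizon scales linearly with that doubling time, which is one way to connect an improved error-doubling rate to a longer forecast range. The numbers below are purely illustrative (an initial error at 1/32 of the level where the forecast stops being useful):

```python
import math

def predictability_horizon(doubling_days, initial_error, tolerance):
    # Days until an initial error grows to the tolerance level,
    # assuming exponential growth with a constant doubling time.
    return doubling_days * math.log2(tolerance / initial_error)

# Illustrative numbers only: a 2-day versus a ~3.5-day doubling time.
print(predictability_horizon(2.0, 1.0, 32.0))  # 10.0 days
print(predictability_horizon(3.5, 1.0, 32.0))  # 17.5 days
```

Under these assumptions, nearly doubling the error-doubling time nearly doubles the horizon, consistent with the improvement from the 4-5 day range to the 8-10 day range mentioned upthread.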

      • Interesting stuff, Nicolis and Nicolis 2009:

        http://www.scholarpedia.org/article/Butterfly_effect

        “Clearly, as soon as the distance between two instantaneous states separated initially by a very small error will exceed the experimental resolution the states will cease to be indistinguishable for the observer. As a result, it will be impossible to predict the future evolution of the system at hand beyond this temporal horizon. This raises the fundamental question of predictability of the phenomena underlying the behavior of the atmosphere.”

  85. Nobody understands me.

    Does anybody remember, along with the climategate emails there was also a bunch of code? I’ve never seen much said about that. Well I looked at that code. It didn’t pass the smell test.

    Now, I don’t know whose code that was, or what exactly it is or was used for, but just from the looks of it – I wouldn’t trust it to run my coffeemaker.

    When I talk about validation, I’m not talking about whether or not tweaking and tuning produces this or that result. I’m talking about the integrity of the entire Software Development Process.

    These are the guidelines:
    General Principles of Software Validation; Final Guidance for Industry and FDA Staff.

    For something as important as the climate simulations, which are now being used to determine the course of mankind, I think that, at a minimum, the integrity of the GCMs themselves needs to be validated by a competent, INDEPENDENT (!) authority.

    This is why Steven Mosher’s statement to the effect that

    “GCMs have no specs.”

    Absolutely Blows My Mind.

    Until somebody produces a Seal of Approval from an authority with its reputation on the line and no stake in the game, this long discussion about which parameters are tuned or not is IMO superfluous.
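
For concreteness, here is a minimal sketch of one kind of acceptance check that a spec-driven validation process (of the sort the cited FDA guidance describes) would require: comparing a model run against a frozen reference output within a documented tolerance. The function name, data, and tolerance here are all invented for illustration, not taken from any actual GCM test suite.

```python
def check_against_reference(output, reference, rel_tol=1e-6):
    # Compare a model run against a frozen reference run, value by value,
    # and fail loudly if anything drifts beyond the documented tolerance.
    failures = [
        (i, o, r)
        for i, (o, r) in enumerate(zip(output, reference))
        if abs(o - r) > rel_tol * max(abs(r), 1.0)
    ]
    if failures:
        raise ValueError(f"{len(failures)} values out of tolerance, e.g. {failures[0]}")
    return True

# A run that matches its reference within tolerance passes:
print(check_against_reference([288.1, 288.4], [288.1, 288.4 + 1e-7]))  # True
```

The point is not the check itself but that a spec exists to check against: without written tolerances and frozen references, “validation” has no objective pass/fail criterion.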

  86. Okay,

    googling “climate modeling software code quality”, a Climate Etc. post from 2012 comes out at the top of the list and covers exactly the topic I’m looking for:

    Assessing climate model software quality

    It’s amazing how often that happens. I guess this is just THE go-to place for climate stuff!

    Have a heartfelt “Good on ya!” from me, Frau Dr. Curry!

    P.S. From the 2012 post: “…collecting defect data from bug tracking systems and version control repository comments” doesn’t even begin to address the kind of independent validation I consider necessary.

  87. I am not a physicist sadly. So allow me to repeat back what I think you are saying in a few scenarios to model examples.

    ex. 1) A planet that has no nearby star or internal latent heat would have an atmosphere of a few kelvins, colder than Uranus, at all atmospheric heights and pressures. Nil energy in or out.

    ex. 2) The same planet wanders into orbit around a nearby star; the atmosphere is composed entirely of inert gas and the planet’s surface is made of fine polished silver (zero emissivity). The sun’s photons would pass in and out of the atmosphere without affecting the gas, leaving the whole atmosphere again near absolute zero.

    ex. 3) The same planet runs into a cloud of black dust, making the surface a perfect black body: 100% absorptivity and emissivity. Now the black-body radiation must come into equilibrium with the incoming radiance. The inert atmosphere cannot participate in shading the surface, but it does insulate in a small way, first absorbing heat from the surface by convection and then emitting its own black-body radiation in all directions, including down.

    ex. 4) The same planet now erupts and spews out GHGs. Under Hansen’s view, the atmosphere’s equilibrium temperature increases because it absorbs radiation in addition to gaining heat through direct surface contact. The higher the opacity, the higher the equilibrium temperature rises, first rising exponentially and then tailing off exponentially, in correlation with GHG concentration.

    However, in your view the equilibrium temperature at the surface is unaffected by the GHG, except that the GHG disperses the heat more evenly wherever there is a gradient: north-south, day, night. In addition, the surface is cooled to the extent that the GHG intercepts a small amount of the incoming radiation and re-emits it back to space before it reaches the surface.

    Please use my examples to further clarify.
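
The balance in ex. 3 can be made concrete with the Stefan-Boltzmann law. This sketch assumes a rapidly rotating black sphere with a fully transparent atmosphere, so the absorbed sunlight is averaged over the whole surface; Earth’s solar constant is used only as an illustrative number.

```python
# Radiative balance for ex. 3: a black sphere with a transparent (inert)
# atmosphere must emit what it absorbs:
#   S * (1 - albedo) * pi * R^2  =  sigma * T^4 * 4 * pi * R^2
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_flux, albedo=0.0):
    return (solar_flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# With Earth's solar constant (~1361 W/m^2) and zero albedo, as in ex. 3:
print(round(equilibrium_temperature(1361.0)))  # ~278 K
```

The factor of 4 comes from the sphere intercepting sunlight over a disc of area πR² while emitting from its full surface 4πR².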

    • I forgot the key point of the GHE: exiting radiation is inhibited by the GHG mainly because of the cooling that occurs as hot gas rises to lower pressure, up to the tropopause, where convective forces finally cease to be more significant than radiation in transmitting energy. From there the photons can proceed unimpeded to space. But this high departure point, being at lower pressure and temperature, can no longer emit its energy as efficiently as the warmer lower atmosphere. To make up for that loss of efficiency, the whole system’s temperature needs to rise to come to equilibrium and meet the in-out energy budget.

    • R Graf,

      Inert gases do absorb low energy photons, fortunately. The contents of a gas cylinder containing compressed argon seem to be at the same temperature as the cylinder. The same is apparently true of any gas.

      Heat the cylinder, and the temperature of the contents rises. Allow the cylinder to cool, and the temperature of the contents falls.

      The only truly transparent thing is the absence of anything, that is, a vacuum. Gases are relatively transparent to some wavelengths, relatively opaque to others.

      No gas can be totally transparent to any wavelength. Given enough substance and thickness, a layer of any gas will totally absorb all radiation impinging on it. Its temperature will rise, until it is able to rid itself of the extra energy, and assume the same temperature as the surrounding environment.

      The CO2 greenhouse effect does not exist, any more than the luminiferous aether, or phlogiston.

      The threading is broken, so I do not know to whom you addressed your query. I believe you classify yourself as a believer in the GHE, but I incline to the view expressed in your last paragraph, except that the surface is kept a little cooler in sunlight, and a little warmer at night, by comparison with, say, the Moon, with its comparatively less dense atmosphere.

      Live well and prosper,

      Mike Flynn.

      • Gases absorb and emit at specific frequencies. With molecular nitrogen, significant absorption occurs at extreme ultraviolet wavelengths, beginning around 100 nanometers.

        Other gases in the atmosphere absorb in bands at lower frequencies. Pure gases are transparent to other frequencies.

        More than a third of the energy from the Sun gets through to the surface.

        Heating of molecular nitrogen and oxygen happens with collisions and a transfer of kinetic energy.

      • Rob Ellison,

        I assume your reply was meant for someone else, as it doesn’t appear particularly germane to anything I wrote. Thanks anyway.

        You wrote –

        “Heating of molecular nitrogen and oxygen happens with collisions and a transfer of kinetic energy.”

        Is it possible that the application of heat not involving collisions – for example by radiation from a heat source – might be something to consider?

        I repeat – the only truly transparent medium is the one that does not exist – a vacuum.

        Live well and prosper,

        Mike Flynn.

    • “Inert gases do absorb low energy photons, fortunately. The contents of a gas cylinder containing compressed argon seems to be at the same temperature as the cylinder.”

      I think most can agree that in the above example convection is the main mode of energy transfer, not so much radiation. I believe the main mode of outgoing heat transfer in the atmosphere is also convection. Only when one gets to the TOA does convection founder, and the final expelling of energy to the vacuum of space is through radiation. Let me know if I’m wrong, but I believe the GHE is postulated because, at the TOA, the outgoing radiation spectrum shows blockage in CO2’s fingerprint bands, which make up about 8% of the exiting energy spectrum, so the gas density at which enough radiation can escape has to be that much lower. At lower density, i.e. lower gas pressure and temperature, the energy budget is not met until the whole system’s temperature rises to restore equilibrium.

      I believe there are legitimate differences of opinion about how small that effect is with a doubling or quadrupling of CO2. But even alarmists agree that the effect tails off to insignificance, through saturation, at some point between those two amounts.

      I do have two questions that I do not see answered in my studies. First, to what degree, even if small, does CO2 shadow the tail of sunlight’s spectrum at the 2.8 micron wavelength? Because, to the degree that this warms the TOA, the opposite of the GHE occurs.

      My second question is whether it has been considered that CO2 could cause warming or cooling by diffusing or concentrating temperature extremes at the TOA. Black-body radiation is not linear in temperature, so for any given area, the more pockets of concentrated temperature there are, the higher the overall radiative flow rate.
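
The nonlinearity mentioned in the last paragraph is easy to demonstrate: because black-body emission goes as T⁴, a convex function, a surface with uneven temperatures radiates more than a uniform surface at the same mean temperature. A sketch with invented example temperatures:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted(T):
    # Black-body flux, sigma * T^4 -- strongly nonlinear in temperature.
    return SIGMA * T ** 4

# Two equal-area "pockets" at 250 K and 290 K versus a uniform surface at
# their mean temperature, 270 K. The uneven case radiates more overall:
mixed = 0.5 * (emitted(250.0) + emitted(290.0))
uniform = emitted(270.0)
print(mixed > uniform)  # True
```

This is just Jensen’s inequality applied to T⁴; whether any real atmospheric process concentrates or diffuses temperature extremes at the TOA is the open question the comment raises, not something this sketch answers.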

      • R Graf,

        You wrote –

        “I think most can agree in the above example that convection is the main mode of energy transfer, not so much radiation.”

        You might care to explain, to those who might not agree, how much convection is occurring when the cylinder is cooling evenly, and the gas inside cools along with it.

        Conversely, if you are proposing different energy transfer modes for gases absorbing heat from the environment, or vice versa, you might indicate why you need different physical frameworks to explain heating and cooling.

        With respect to your other questions, addressed generally I know: CO2 absorbs energy as do all other gases, following the same physical principles. Regarding CO2 as opaque to certain frequencies may blind one to the fact that opacity is simply another way of saying that the radiation is absorbed totally. Now, energy is neither created nor destroyed, and the absorbed energy interacts with matter in different ways.

        To put it simply, if CO2 absorbs energy (which it must, as you have stated that it is opaque to that energy), then there will normally be a temperature rise. If the CO2 is now at a higher temperature than its surroundings, it will cool, by emitting radiation. No convection is involved, and in the absence of incoming radiation it will cool to absolute zero, at which point it is emitting no photons at all. Talk of CO2 warming or cooling anything in any way different to any other gas is mischievous at best.

        In answer to your second question, CO2 cannot concentrate or diffuse temperature differently to any other gas. A sample of CO2 at 25 C is exactly the same temperature as a sample of O2 at 25 C. A 50% mixture of the two at 25 C is indistinguishable by temperature alone from either of the other two.

        None of them will magically cool, warm, or change their energy content if the surrounding environment is also at 25 C.

        There is no warming of the globe, or anything else for that matter, due to magical properties of CO2. I have moved on to another thread. Please feel free to correct me, if you have facts to back your arguments.

        Live well and prosper,

        Mike Flynn.

      • Mike,

        I understand. My point is that both convection and radiation are at play in almost every situation in everyday life and in the atmosphere.

        I have yet to find a physicist who can quantify by theorem exactly how efficient one mode of energy transfer is over another in a complex situation, even in a gas cylinder. Most are taught that convection is generally more efficient than conduction, and that radiation is the backup for the other two unless it’s blocked by a reflective surface. Inert gases, like argon sealed between insulating double-pane glass, hamper radiation because they have no absorption bands. Argon floating in outer space should not warm from sunshine. Hot argon in a gas cylinder should be able to heat the walls nearly as fast as hot CO2, since both can emit black-body radiation. But cooled argon entering a balloon should have a much tougher time warming up, relying only on convection/conduction with the balloon wall as a means of warming.

        My question is how to quantify, at a given temperature, a gas’s black-body radiation emission relative to its fingerprint radiation absorption, relative to its black-body absorption, all while competing with convective transfer.

        This, I believe, is what is needed to allow one to mathematically model the GHE without having to rely on empirical tests.

      • Shortwave solar radiation is the primary mode of surface heating. Evaporation is the primary mode of surface cooling. The surface is the area of concern for living things.

        Heat Budget 101:

        http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm

        Memorize it. Know why the global distribution of terms in the heat budget appears the way it does in all the illustrations.

      • RGraf

        Argon in a double paned window does nothing to hamper radiation. It hampers convection and conduction.

        http://www.nachi.org/window-gas-fills.htm

      • The argon thing is kind of interesting from a climate perspective. It is the GHGs in air that make air a better conductor of heat. IR stimulates a GHG molecule, which then migrates to the inner window pane, collides with it, and thereby transfers the energy to the inner pane. When convection occurs, more heat is carried to the upper atmosphere by GHGs, to be released at the top of the convection cell. Without GHGs, much less heat would be transferred via convection.

      • Jim2, Berényi Péter above just posted on the same idea, that CO2 improves heat flux, just like it does in a combustion engine. So yes, CO2 should improve convection by aiding radiant flux.

        In answer on the argon window filling: the reason argon is used is that it allows less conduction than air, not any more radiation. CO2, oddly, has very poor conduction as well, right there with argon.

        Thermal conductivity in W/(m·K) (conduction, not radiant):
        CO2:      0.015
        Argon:    0.016
        Nitrogen: 0.024
        Oxygen:   0.024

        Argon thermodynamic profile between two window panes:

        Mode          0.25″ gap    0.75″ gap
        radiation     15%          20%
        convection    nil          39%
        conduction    85%          41%

        Conclusion: convection becomes more dominant than radiation for heat transfer of a non-GHG in open air. Likely the same is true for WV and CO2, but with a smaller gap. Anyone have the figure handy?

        Here is my source on the windows:
        http://buyat.ppg.com/glasstechlib/7_TD101F.pdf

  88. Steven Mosher | February 5, 2015 at 3:47 pm |

    “Finally climate models have zero to do with obamas pen and phone. He don’t need no climate model to
    Out smart establishment rhino nitwits”

    Yeah, he outsmarted them so well that the establishment RINOs took control of both the House and Senate, and in two more years the executive office as well. Go Obama!

    Figure 5.7 Upper: Zonal averages of heat transfer to the ocean by insolation QSW, and loss by longwave radiation QLW, sensible heat flux QS, and latent heat flux QL, calculated by DaSilva, Young, and Levitus (1995) using the COADS data set. Lower: Net heat flux through the sea surface calculated from the data above (solid line), and net heat flux constrained to give heat and fresh-water transports by the ocean that match independent calculations of these transports. The area under the lower curves ought to be zero, but it is 16 W/m2 for the unconstrained case and -3 W/m2 for the constrained case.

    Source: http://oceanworld.tamu.edu/resources/ocng_textbook/chapter05/chapter05_06.htm

  90. Reblogged this on Against the Climate Change Agenda and commented:
    An interesting thesis. Well done.

  91. @Group of physicists commented: ”Where is your evidence of 15C° of warming for each 1% WV as implied by IPCC documentation?”

    Mr ”physicist”: #1: after 20y of misleading propaganda, you know that it is theoretically impossible to prove the truth in one comment / paragraph. So I asked you ”twice” to read the ”whole” post, where it is proven ”beyond any reasonable doubt” that if the public on the street knew what’s in that post, all advisers to IPCC would end up in jail! Instead, you are proud that ”your knowledge matches that of IPCC”…?!

    #2: using numbers or formulas doesn’t work for explaining the truth to the public on the street (I tried). Instead, one must cut off the dead wood and cross out the irrelevant zeroes! b] unless you have been given a job as a Warmist ”to get embedded in the ”skeptic’s” camp, build trust, and then create even more confusion”. Trust me: the permanent Dung Beetles here are past the point of ”getting even more confused”. So you are wasting your time and ammunition… c] by ”proving” to me that ”because Rudolf is for real, Santa must be for real; therefore I must be ignorant”, getting to that conclusion before reading the ”whole post”, you are being very silly… an honest physicist doesn’t do that; maybe one of your colleagues is honest and open-minded?!

    #3: you were bragging about ”your knowledge in physics”, same as a person claiming to be a professional guitarist. Instead of arguing, I gave you the guitar / the Holy Grail of the phony global warming climatology… and you are scared even to touch it…. Therefore you are blaming me, without reading the post, because: a] you are scared of real proofs b] you are not confident in your knowledge c] you have been instructed to maximize confusion and disregard reality d] you memorized ”physics” as lyrics in a song, without understanding the meaning, to be used in reality…????!

    Any ”honest” scientist would have read every sentence of the post, twice, before getting into the debate. That is the reason I said before: ”I have the advantage of knowing the truth, which is not fair to you; please read the post first, I’m a soft target, then we can have an interesting debate”; the offer still stands!!!! ”Proving” to me precisely how many grams of oats Rudolf eats every day is irrelevant to reality! Therefore, admit that you ran away after you read a few sentences, because I don’t use IPCC’s gospel: entropy, flux, adiabatic, lapse, signal, noise, energy budget, positive / negative feedback, hiatus, projecting, algorithm, non-linear… That is IPCC’s PROPAGANDA talk / gospel for the Warmist & skeptic Dung Beetles, not honest physicist’s talk! Stay in touch!

  92. Pingback: Weekly Climate and Energy News Roundup #167 | Watts Up With That?

  93. @Group of physicists commented: ”On Venus next to nothing is “produced”

    Physicist, my post is a physicist’s paradise, made simple enough for even a 9-year-old to understand. When I said ”please don’t skip any sentence, read the WHOLE post”, I presumed that you had enough brains to understand that there is a reason for it… If you were a genuine / honest physicist, you would have read the lot and said: ”how did you manage to expose in one post all the misleading that has been going on for the last 20y, billions of pages of literature, including what they use ”Venus” for in badmouthing CO2” (every paragraph in the post is 10-20 pages in my book; the same thing, only in more detail, facts and proofs)

    Instead, you knew from the first sentence who done it, AND WHY…?! And you are saying that I don’t know what I’m talking about?! Hallelujah!!! By that you proved that: a] you are a feather-brain, or b] you prefer to shovel IPCC’s bull to feed the Dung Beetles from both camps… or, most probably, both.

    You had a choice: to join us / THE TRUTH and be proud of yourself, or to keep distributing IPCC’s bullshine to the bull addicts from both camps… Well… you have to look at yourself in the mirror AND, when you go to bed, know that the truth always wins in the end! Time is on our side! You can run from the truth, but you can’t hide! Physicist: the more you know => the more you are worth! When I said ”I know what you know (even Venus is in the post and book) but you don’t know what I know; be fair to yourself and read the real proofs”… instead, you turn deaf and mute on the subject, because you suffer from ”truth phobia”, like the other 150 Warmists and about 50 ”skeptics” that have already read that post! Luckily, at least a dozen secular believers & sceptics have read it too so far; those people are valuable. They are not preprogrammed to panic when they see the truth, like you, but rejoice in it and spread the truth to the public. Keep in touch!