Should we assess climate model predictions in light of severe tests?

by Judith Curry

This question is posed and addressed in a recent article by Joel Katzav in EOS.

For some background information on how climate models are evaluated, see these previous posts:

Joel Katzav is a Professor in the Department of Philosophy and Ethics at the Eindhoven University of Technology, the Netherlands. His primary areas of research are metaphysics, the philosophy of science, the philosophy of technology, the methodology of philosophy, and practical reasoning. His work is currently focused on existence, laws of nature, and the epistemology of climate models.

Katzav, J. (2011) ‘Should we assess climate model predictions in light of severe tests?’, Eos, Transactions, American Geophysical Union, 92(23), p. 195.

Some excerpts:

According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system’s failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested.

An implication of Popper’s above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.

For example, a severe testing assessment of a CMP generated by a member of the ensemble of global climate models that will be relied on in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC) might involve assessing how well the member has done at simulating data that are both relevant to determining its suitability for generating the CMP and unlikely in light of the ensemble of global climate models relied on in the IPCC fourth assessment report. Data capturing global mean surface temperature trends during the second half of the twentieth century are relatively well simulated by, and thus not unlikely in light of, the ensemble of global climate models on which the IPCC fourth assessment report relied. These data would, accordingly, not be expected to challenge global climate models developed since the fourth report and are thus unsuitable for severely testing models that will be relied on in the fifth IPCC report. Data capturing the positive global mean surface temperature trend during the late 1930s and early 1940s are not well simulated by the ensemble relied on in the fourth IPCC report. These data will thus better serve to test severely models used in the fifth IPCC report.

An important question is whether Popper’s injunction should be applied in assessing CMP quality. As we will see, performance at severe tests currently plays a limited role in such assessment. I argue that this should change.
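
To make the severity criterion above concrete, here is a toy numerical sketch. The logic (a datum severely tests a model only if it is unlikely in light of the background ensemble) follows the excerpt; the ensemble trends, the observed values, and the two-standard-deviation threshold are invented for illustration, not actual AR4 or CMIP numbers.

```python
# Toy illustration of Katzav's severity criterion: an observed trend severely
# tests a new model only if that trend is poorly simulated by (i.e., unlikely
# under) the background ensemble. All numbers are synthetic.
import statistics

def is_severe_test(observed, ensemble, threshold=2.0):
    """True if `observed` lies more than `threshold` ensemble standard
    deviations from the ensemble mean, i.e. is unlikely given background."""
    mu = statistics.mean(ensemble)
    sd = statistics.stdev(ensemble)
    return abs(observed - mu) > threshold * sd

# Synthetic "AR4-style" ensemble trends (deg C / decade) for two periods:
late20th_ensemble = [0.14, 0.16, 0.15, 0.17, 0.15]   # well simulated
obs_late20th = 0.16
early20th_ensemble = [0.02, 0.04, 0.03, 0.05, 0.03]  # 1930s-40s warming: poorly simulated
obs_early20th = 0.15

print(is_severe_test(obs_late20th, late20th_ensemble))    # False: not a severe test
print(is_severe_test(obs_early20th, early20th_ensemble))  # True: a severe test
```

On this caricature, the well-simulated late-20th-century trend cannot severely test a successor model, while the poorly simulated 1930s–40s trend can, mirroring the argument in the excerpt.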

Severe Testing Assessment of CMPs: Current Situation

The scientific community has placed little emphasis on providing assessments of CMP quality in light of performance at severe tests. Consider, by way of illustration, the influential approach adopted by Randall et al. [2007] in chapter 8 of their contribution to the fourth IPCC report. This chapter explains why there is confidence in climate models thus: “Confidence in models comes from their physical basis, and their skill in representing observed climate and past climate changes”.

The focus in this quote, and elsewhere in the chapter, is on what model agreement with physical theory as well as model simulation accuracy confirm. Supposedly, better grounding in physical theory or increased accuracy in simulation of observed and past climate means increased confirmation of CMPs.

CMP quality is thus supposed to depend on simulation accuracy. However, simulation accuracy is not a measure of test severity. If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.

Severe Testing Assessment of CMPs: Why Do It?

It appears, then, that a severe testing approach to assessing CMP quality would be novel. Should we, however, develop such an approach? Arguably, yes (see also comment 3 in the online supplement). First, as we have seen, a severe testing assessment of CMP quality does not count simulation successes that result from the accommodation of data in favor of CMPs. Thus, a severe testing assessment of CMP quality can help to address worries about relying on such successes, worries such as that these successes are not reliable guides to out-of-sample accuracy, and will provide important policy-relevant information as a result (see comment 4 in the online supplement).

Second, assessing CMP quality using a severe testing approach would assist in assessing the maturity of the science underlying CMPs. This is because the more mature a body of knowledge is, the easier it is to specify severe tests for its claims. Assume that we want to test a prediction severely. The prediction will have testable implications only when conjoined with a set of additional assumptions, including basic theory and quasi- empirical generalizations. So if we are severely to test the prediction, and not just the conjunction of the prediction and the additional assumptions, then the additional assumptions will have to be established independently of the prediction. Only then might the potential falsity of an implication of the conjunction of the prediction and the additional assumptions constitute a real potential challenge to assuming the truth of the prediction, as opposed merely to a challenge to the conjunction of the prediction and the additional assumptions. The more mature a science is, the more such independently established claims tend to be in place and the easier it is to specify severe tests (for an illustration, see comment 5 in the online supplement).

Although severe testing is not typically used in existing assessments of CMP quality, some severe testing of models and CMPs may already occur. Still, and this brings us to a third reason for using a severe testing approach to assessing CMP quality, applying such an approach would increase the extent to which severe testing is used, which, in turn, might help us to develop better CMPs. According to Popper, severe testing is the way in which science progresses and thus the way in which to uncover better predictions. Even if we don’t accept that a methodology based on severe testing is the only way in which we learn about the world, it is clearly one important way of doing so.

JC comment:  Joel just emailed me this paper, somehow I missed it when it was first published.   I really like this paper, and am excited to see a philosopher with Joel’s range of interests taking on the climate modeling problem.

382 responses to “Should we assess climate model predictions in light of severe tests?”

  1. I do not believe the thoughts of a philosopher are particularly relevant to the validation of an engineering model. I believe that climate models are engineering models. They should simulate what will happen here on planet earth under a set of conditions.

    Models are supposed to accurately predict things—it is that simple imo—although developing a good model is extremely complex for the climate. It is much more complex and unreliable over longer timeframes.

    Current climate model developers do a very poor job of stating what their models are designed to accurately predict over what timeframe. Why is that???

    When you go over all of Judith’s previous posts on GCMs there is a lot of philosophy and general information but little to nothing regarding the specifics of the validation criteria of specific models. Look at http://www.cgd.ucar.edu/staff/gent/ccsm4.pdf and tell me if you can find anything specific of ANY value that would make you believe the results of a climate model.

    What can your model accurately predict over what timeframe? Simple question—how about simple answers?

    • Getting the “right” answer for the wrong reason won’t help you with making future predictions. And the epistemology of complex system models (which are different from most engineering modeling applications) is a subject that is poorly understood, by both scientists and philosophers (and engineers don’t seem to be even aware this issue exists).

      Simple answer to your question: climate models can simulate the annual cycle and meridional variation of global climate variability. They can resolve climate variability on continental space scales. They are hypothesized to predict climate evolution if the external forcing is correct on timescales greater than decadal scale natural internal oscillations.

      • Dr. Curry,
        Does your view of the predictive capabilities of the models differ from that of Dr. Pielke, Sr.?
        http://pielkeclimatesci.wordpress.com/2011/07/08/confessions-of-members-of-the-climate-science-community/

      • Well, I don’t see the link you point to as saying much about the capabilities of climate models. The links that Pielke cites are good articles that I don’t have any substantive disagreement with.

      • Dr. Curry,
        Here is what I was referring to, which I may be misinterpreting:
        “This claim of decade surface temperature prediction skill, of course, is not supported by their lack of skill since 1998. The next section of their paper highlights the wide range of uncertainties for decadal prediction and the movement away from the global average surface temperature as the icon of climate change. For multi-decadal climate predictions, these uncertainties are necessarily even higher.”
        I am really not trying to snark or detour here, but trying to make sure I understand you clearly. FWIW, your link to your text book was very informative. I am going to order a copy.

      • Yes-getting the right answer for the wrong reason is the same as making a guess and randomly being correct. That was not the point.

        Epistemology? I think of that as a branch of philosophy that investigates the origin, nature, methods, and limits of human knowledge.

        Why do you relate a climate model that is designed to predict specific conditions in a complex physical system to philosophy?

        Personally, I currently see climate models in a very similar fashion as I see macroeconomic models. Both macroeconomics and climate are very complex systems with many variables – most of which are poorly understood. At least with economic models, those making the models will describe what their models are supposed to predict, and we then judge if the model was any good or not.

      • Au contraire, Professor Curry.

        Is not, for example, releasing an external store from an inherently unstable supersonic aircraft, with the objective of hitting a target that the pilot can’t even see, all the while in the midst of a complex maneuver in (x, y, z, t), a somewhat complex systems-engineering application?

        There are thousands of additional examples in engineering.

        How would you propose that we compare and contrast ‘complex system models’ and ‘most engineering modeling applications’?

      • G’day Dan,

        Judith is referring to dynamical complexity – chaos theory, non-linearity, sensitive dependence, abrupt change – and not to linear systems, notwithstanding that they are complex in the dictionary meaning of the term.

        Some background theory in dynamical complexity is required. Climate models are inherently unstable dynamically complex systems – they can shift into a very different solution space with minor changes in inputs.

        Cheers

      • The climate is extremely stable and the climate models are unstable.
        The climate models are not yet right.
        The climate has powerful negative feedback to temperature and the climate models do not.

      • Chief,

        All the elements of my example include non-linearity. We live in a non-linear universe; everything of interest includes non-linearity.

        Engineers have made inherently unstable aircraft, in the full non-linear sense, usable.

        The fundamental equations for my example are the same as those for the climate. Radiative energy transport, with an interacting medium, for example, is important in the engines.

        The systems that I outlined are all inherently complex. The problem is not made complex by being comprised of a group of simple problems.

        Is Judy’s objective to discuss how chaos alone makes the climate problem wicked?

      • Chief Hydrologist

        Hi Dan,

        The fundamental equations of the models are the partial differential equations of fluid motion, which are the equations Edward Lorenz used in his early convection model to discover chaos theory. So we are using non-linearity in the sense of a chaotic bifurcation in chaos theory.

        Climate of course has no equations – here we have competing negative and positive feedbacks with thresholds of forcings that lead to abrupt change. Tremendous energies cascading through powerful systems. The criterion here is abrupt climate change – defined as a sudden change in climate out of proportion to an initial forcing.

        ‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.

        Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.

        The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.’ http://www.nap.edu/openbook.php?record_id=10136&page=1

        Au contraire – the problem is made dynamically chaotic – in the sense of chaos theory – by the conflation of mechanisms in a dynamically complex Earth system. The ‘group of simple problems’ is inherently chaotic.

        Now – I realise we may not agree on this – but you would be wrong. Which is why I emphasised the need for an appreciation of chaos theory.

        Cheers
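
Sensitive dependence of the kind Chief Hydrologist invokes can be shown in a few lines. The Lorenz (1963) system and its standard parameters are the real thing; the crude fixed-step Euler integration, step size, and step counts below are choices made for brevity, not how production models integrate.

```python
# Sketch of sensitive dependence on initial conditions using the Lorenz (1963)
# convection model with the standard parameters sigma=10, rho=28, beta=8/3.
# Crude fixed-step Euler integration; for illustration only, not climate code.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_separation(s1, s2, steps):
    """Largest component-wise gap between two trajectories over the run."""
    biggest = 0.0
    for _ in range(steps):
        s1, s2 = lorenz_step(s1), lorenz_step(s2)
        biggest = max(biggest, max(abs(a - b) for a, b in zip(s1, s2)))
    return biggest

# Perturb one coordinate by one part in a million and integrate 30 time units:
print(max_separation((1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-6), 30000))
# the tiny perturbation grows to the size of the attractor itself (order 10)
```

This is the sense in which "minor changes in inputs" can shift a dynamically complex system into a very different region of its solution space.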

      • None of the examples are as complex as the climate system, with a large number of linked subsystems. A difficult engineering problem requiring a high degree of precision in the simulation is different from an extremely complex and chaotic large-scale system with multiple subsystems.

      • Judy, we could continue to discuss this aspect from positions of authority and using hand and arm waving.

        Or we could devise some technical evaluation criteria and associated metrics that relate to inherent complexity of physical phenomena and models for these.

        Coupled subsystems are present in many engineering problems. Coupled Multiple-physics, multiple-scale phenomena and processes also.

        None of this impacts the focus of this thread, however. I suggest we just drop it and forget that the compare and contrast issue came up.

        In closing, however, I’ll note that almost every engineering problem, including the inherently complex, demands that solutions of the discrete approximations to the continuous equations be demonstrated to be solutions of the continuous equations. That means, among many other things, that the numbers have not been polluted by purely numerical artifacts of the numerical solution methods.

        No GCM has yet attained this degree of sophistication.

      • Yes, my understanding is that climate models use huge amounts of computing power, if that could be the measure. However, a system that involves human components, like a traffic system or a network, can be quite chaotic as well and have modeling limitations. System architects and engineers use the hierarchy, modularity, decoupling, and redundancy of systems engineering, along with figuring out how people – potentially chaotic people – will use the structures.

        Think of the global Internet: it cannot be fully modeled, but it doesn’t need to be. Operators just take care of their own area, cooperation is split into manageable tasks and common protocols and standards, and then it just lets people connect or go to the moon.

      • Oops. Scratch the moon ending

      • Well, when we could not model the system behavior we didn’t fly there.
        Think of how we went from putting limiters on aircraft to keep them from stalling to designing them to have post-stall maneuverability.

        The models we had could not be used in certain circumstances. We didn’t throw them out. We built shit that hopefully (except the F-16, lawn dart) stayed within its limits.

        I think of a climate model in the same way. It won’t get certain things right. Can’t, in fact. But it can give us an understanding of sorts.

      • Sorry, I do not believe that an “understanding of sorts” is sufficient to use as the basis from which to formulate scientific theories, much less to underlie the formation of major public policies and the expenditure of trillions of dollars in an impossible undertaking to control and manipulate climate.

      • I think of a climate model in the same way. It won’t get certain things right. Can’t, in fact. But it can give us an understanding of sorts.

        Sure. Would you have bet your life or even your life savings on the behaviour predicted by those early aerospace models in situations where the model was unproven to be correct? Or perhaps more to the point, would you force others to do the same? Whole countries? Whole continents? The entire world? Would you call those who were reluctant to do so “deniers”? When several planes crashed killing thousands of people, would you still insist that there were no problems and we should continue to trust the experts and the models?

      • Judith–If someone is paying to develop a climate model that will:
        “simulate the annual cycle and meridional variation of global climate variability. They can resolve climate variability on continental space scales. They are hypothesized to predict climate evolution if the external forcing is correct on timescales greater than decadal scale natural internal oscillations.”

        Then they are wasting funds, because what you wrote is a bunch of unmeasurable junk.

      • Not unmeasurable at all. These things are measured and compared to climate models, which is how we know climate models can simulate climate on these space and time scales.

      • Judith

        Can you list a sample of the specific criteria by which a climate model might be measured for validation? I apologize, but I am confused by your response.

      • Here are the links that I provided in the previous posts that were linked in the main post for background information; I suggest reading these posts as a starter.

      • woops, no link

        The links are in the main post:

        The culture of building confidence in climate models
        Climate model verification and validation

      • Judith

        You have completely dodged the question and you know it.

        Please look back at what you previously posted and define the specific criteria. Specific criteria (as you well know) are independently verifiable. Perhaps I missed it in your link, but I would like to investigate.

      • And I believe I already read it, and the criteria were not specifically measurable.

      • we know climate models can simulate climate
        Thanks for the guffaw.
        Oh, wait… you used the correct term, simulate. We know they fail at predicting seasonal, annual, decadal, regional, and continental climate. However, they do simulate something that could be, someday, somehow, and somewhere.

      • Yes; compared to AGW modelers, the Wizard of Oz was a paragon of professionalism and probity. He at least had the grace to become contrite when the curtain was pulled back.

      • Rob, the confusion here is between science and engineering, and it is fundamental to the climate debate. Note that this is an epistemic point, not a scientific or engineering point, because neither science nor engineering studies the difference between themselves. That is philosophy’s job.

        The climate models are not engineering models. They are not being built by engineers, nor published in engineering journals. But they are being used for engineering purposes by policy advocates, who advance their results as confirmed predictions. As Dr. Curry, and I, have repeatedly said, the quarrel is not with the scientists but with the advocates, including the advocacy scientists, who insist on using these models as though they were engineering models.

      • David

        What you have written may be a very key point that I certainly have not understood regarding the development and goals of a climate model. It certainly has not been communicated that way. If what you write is true, climate models should be used for discussions, but certainly should not be used for policy making.

        What I really do not understand is the difference between climate models and macroeconomic models. Both are very highly complex, but with an economic model the same approach as Judith describes is not utilized. They are just complex engineering models.

      • There is little difference between climate models and macro economic models — both are based upon historical data that cannot be repeated. Engineering models are based upon data that can be tested and confirmed over and over again, by independent testing labs. The historical climate record is a singular sequence, not to be repeated.

        Popper would reject climate models as non-scientific for the reasons he outlined in his essays on Historicism.

      • Macro economic models are used to predict specific future behaviors.

        These models are not perfectly accurate and the developers and users of the models acknowledge that fact. They are certainly better predictors of future circumstances than random guesses.

        The point of the comparison is that in the case of equally complex macro economic models, those who develop the models have VERY SPECIFIC MEASURABLE CRITERIA by which people can validate their model. All (or most) acknowledge that it is a process.

      • Just a brief quote from Popper. “…a statement asserting the existence of a trend at a certain time and place would be a singular historical statement, not a universal law. The practical significance of this logical situation is considerable: while we may base scientific predictions on laws, we cannot (as every cautious statistician knows) base them merely on the existence of trends…..There is little doubt that the habit of confusing trends with laws, together with the intuitive observation of trends (such as technological progress), inspired the central doctrines of evolutionism and historicism.” – end of quote.

        The observation of the climate is the observation of trends. The data (the historical temperature record) upon which climate models are based are not “data” in the sense that they can be replicated under strict guidelines. The data are a one-time historical occurrence, never to be repeated in the same sense that the performance of a certain grade of steel “repeats” itself under the same test by independent laboratories. This is the fundamental difference between science and non-science. Science is based upon data that can be replicated, as well as “predictions” that can be replicated.

      • “Not perfectly accurate” = barely and rarely better than chance in their predictions?
        Your vocabulary is in need of some tightening up.

      • Rob (to David):

        What you have written may be a very key point that I certainly have not understood regarding the development and goals of a climate model. It certainly has not been communicated that way. If what you write is true-climate models should be used for discussions, but certainly should not be used for policy making.

        I don’t think there would even BE any skeptical sites if that weren’t what is being done – and alarming a lot of people about going off half-cocked, based on un-solid (i.e., as it is being discussed in this part of the thread, non-engineering-level) models. I myself had assumed early on that such a level must have been reached, and when I recognized that the science had not reached the “applied science” level, I had a HUGE red flag go off in my head. I thought, “WTF are they DOING?”

        So, long story short: I agree.

      • “The climate models are not engineering models. They are not being built by engineers, nor published in engineering journals. But they are being used for engineering purposes by policy advocates, who advance their results as confirmed predictions.”

        And hence the pseudoscience. And the religious aspects. All the annoying cargo-cult aspect of AGW.
        The enormous waste of public funds. The enormous waste of public’s time.

      • Indeed. The dual nature of their usage (both as investigative tools and for IPCC projections) is something I’ve always considered a problem. Easterbrook defends it on the grounds of scare resources, but that seems a bit weak.

      • A propitious Freudian typo: “scare resources”. Thx!
        :)

      • Dear Dr. Curry:
        How come climate models failed to predict the cooling trends between 2008 and early 2011? The models predicted more than 0.15 degrees C/decade of warming, and observations show 0 to 0.08 degrees C/decade at most. How do you explain the discrepancies?

      • Dear Dr. Curry:
        This familiar equation, N = F − λΔT, is used by climate models to describe climate energy balance. It is wrong and cannot describe climate energy balance. Without fixing climate physics, testing or retrofitting climate models would be a waste of time and money.
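
For readers unfamiliar with the notation, here is a minimal sketch of how the linearized balance N = F − λΔT is conventionally used to diagnose equilibrium warming. The forcing and feedback values below are illustrative assumptions commonly quoted in textbooks, not numbers taken from the paper under discussion.

```python
# Minimal sketch of the linearized energy-balance relation N = F - lambda*dT,
# as used to diagnose feedback/sensitivity from model or observed output.
# The numeric values are illustrative assumptions only.

def radiative_imbalance(F, lam, dT):
    """Top-of-atmosphere imbalance N (W/m^2) for forcing F (W/m^2),
    feedback parameter lam (W/m^2/K), and surface warming dT (K)."""
    return F - lam * dT

def equilibrium_warming(F, lam):
    """At equilibrium N = 0, so dT = F / lambda."""
    return F / lam

F_2xCO2 = 3.7   # assumed forcing for doubled CO2, W/m^2
lam = 1.23      # assumed net feedback parameter, W/m^2/K

print(equilibrium_warming(F_2xCO2, lam))  # roughly 3.0 K
```

Whether this linearization is adequate is exactly what the commenter is disputing; the sketch only shows how the equation is used, not that it is right.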

      • Is that equation really used by climate models? I thought it was an equation that describes the behavior of climate models.

      • Yes Sir, unfortunately, this is the main equation! Take a look at this major paper on climate science: “Efficacy of climate forcings,” Journal of Geophysical Research, Vol. 110, D18104, doi:10.1029/2005JD005776, 2005.

        You will find that the IPCC projection is a copy of this paper.

      • Dear lolwot:
        The authors of the paper “Efficacy of climate forcings” are Hansen et al. It is available on the GISS website under publications of 2005. Sorry for that.

      • No, that is an equation for looking at the output, just like it is used to get sensitivity from observations. It has the same meaning whether observations or models are used.

      • Dear Jim,
        I suggest that you read the paper and compare it with the IPCC report.

      • In my previous message I stated that climate models do not produce the observed phasing of the natural internal variability associated with ENSO, AMO, etc. While they do produce such fluctuations, they don’t produce them at the year they are observed, owing to the chaotic nature of the climate system and the models. So you need to look at longer time scales to compare climate model simulations with observations.

      • That’s very interesting. James Hansen famously included an optional scenario of a major volcanic eruption sometime in the 1990s, which confirmed the model’s ability to accurately model a large, sudden addition of aerosols. But that was a single event, nearly instantaneous from the point of view of a multi-year or multi-decadal climate model. I imagine that adds nothing more to the model than a decaying exponential function with a large initial magnitude. Are the major oscillations like ENSO, AMO etc. too complex to do something similar?

        An implication of Popper’s above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.

        Suppose one-year increments for the various possible starts/ends of El Niño/La Niña. Now, if the teleconnections from that (the strongest of the oscillations, afaik) to all (or most) of the other major oscillations are strong and well understood, then we have a potentially large, but not necessarily unmanageable, number of permutations of scenarios to run for each model. We might even be able to do increments of months with today’s computers.

        But if ENSO, AMO and all the others are “random” (or apparently random because they’re related in no way that we understand), then with so many oscillations, for any useful number of years, the factorial function probably makes simulating anything approaching every possible scenario absolutely impossible. We could only wait until after the fact, then run the models with the observed actual parameters of ENSO, AMO, etc. to apply “severe testing” of the type we’d all prefer, where scientists say in advance exactly what will happen, and then it either does or does not happen, clean and simple. If that’s not possible, then we just have to make policy by using the mean of randomly fluctuating stuff.

        But as long as the model code is published in advance, and it is, it is still “severe testing” as Popper defined it either way.
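
The combinatorial worry above can be made concrete. The counting model here is an assumption for illustration only: each oscillation is treated as an independent index that can take a fixed number of phase states per yearly increment, which is cruder than any real teleconnection structure.

```python
# Back-of-envelope count of scenario permutations for ensemble runs.
# Assumed counting model: `oscillations` independent indices, each with
# `states` possible phases per yearly increment, over `years` increments.

def scenario_count(oscillations, states, years):
    return states ** (oscillations * years)

# e.g. 5 oscillations (ENSO, AMO, PDO, ...), 3 phases each, 10 one-year increments:
print(scenario_count(5, 3, 10))  # 3**50, about 7.2e23 runs: clearly infeasible
```

Even this toy count makes the commenter's point: enumerating every phasing scenario in advance is hopeless, so after-the-fact runs with observed oscillation phases are the practical alternative.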

      • PS
        “… which confirmed the model’s ability to accurately model a large, sudden addition of aerosols” when Pinatubo obligingly blew up for him.

      • Dear Settledscience:
        The computer model modeled Pinatubo eruption that occurred in the past. Will it model future climates?

      • Steve Garcia (who posted above as SteveGinIL, sorry)

        And was the code for that 20-20 hindsight eruption ‘prediction’ ever released for review? As Katzav says:

        If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.

        This makes the real question:
        How much “accommodation of the data” [and accommodation of the formulae and constants used] occurred in Hansen’s paper?

        This is exactly the kind of thing that the skeptics have been trying to determine.

        As I see the principle Katzav is getting at, the severe testing is essentially a “put up or shut up” comparison of the model outputs with empirical data. Am I misunderstanding this? And as for the output for comparison (addressing someone from further up the thread), why would it not be just the average global temperature? Isn’t the average what all the hoohah has been about, from before Hansen’s time?

      • Judith
        Can you possibly write something in a more obtuse manner?

        Do you believe that you are writing in a manner such that a reasonably educated member of the general public will fully understand? LOL, yes, I do understand what you are trying to explain, but do you think you explained your position clearly?

        Longer time scales are an excuse to justify the averaging of a bad model. Over the longer term, such averaging would make any model no better than a guess.

        Just a fact

      • What JC wrote is a key distinction between decadal prediction and climate projection. Elsewhere on this thread I see someone confusing the two. Here the discussion is on climate projection where ensembles are used to average out random fluctuations of internal variability. You want to know the mean climate change from mean current climate in the decades around 2100, not when El Ninos, PDOs are occurring for example.
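The “averaging out” rationale can be sketched with a toy ensemble (a minimal illustration with made-up trend and noise values, not output from any actual GCM): averaging many runs with independent random “internal variability” recovers the forced trend even though no single run does.

```python
import random

random.seed(0)  # deterministic toy example

def realization(trend_per_step, n_steps, noise_amp):
    """One 'model run': a forced linear trend plus random internal
    variability (a stand-in for ENSO-like fluctuations)."""
    return [trend_per_step * t + random.gauss(0.0, noise_amp)
            for t in range(n_steps)]

def ensemble_mean(runs):
    # Average across runs at each time step
    return [sum(vals) / len(runs) for vals in zip(*runs)]

runs = [realization(0.02, 100, 0.3) for _ in range(50)]
mean = ensemble_mean(runs)

# Individual runs scatter widely around the forced value (0.02 * 99 = 1.98),
# but the 50-member ensemble mean lands much closer to it.
assert abs(mean[-1] - 1.98) < 0.2
```

The standard error of the ensemble mean shrinks as the square root of the ensemble size, which is why multi-member ensembles are used for projection but are of no help in predicting the timing of any particular El Niño.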

      • Jim

        I want very specific, measurable criteria by which a model can be measured. A weather model meets these criteria. A climate model generally FAILS.

      • Why is being able to simulate the last century by using the correct forcing a failure?

      • Jim

        Hindcasting is a part of model development and not the end result. It is a tool to develop a model, not the goal. The goal is to accurately predict specific criteria in the future.

        The hindcast is there to give confidence in the models. Since the models are developed independently, if they concur on a projection there would also be confidence in that, which is why the IPCC pays more attention to areas where these independent models agree, and basically ignores things that only one model projects.
        Obviously, as the future plays out we will be able to verify them in real time with any parameter of interest. The longer this goes on, the more confidence we can have in the rest of the projection, and I am not talking about anything less than decadal averages for verification, being careful to avoid shorter fluctuations.

        Jim D, that sounds good, except verifying or falsifying out there 90 years from now (“around 2100”) is certainly not an option in anybody’s real world.

        Also, what decade doesn’t get influenced by the occurrence of the PDO and ENSO?

        Talking about the models being in agreement (you, just below here), I would use Katzav’s term “accommodation” to suggest that the models that agree might all be using 90+% of the same assumptions, and that might make them agree; but for the most part this agreement doesn’t test them against anything but those common assumptions. The single outlier may have it right (how far out was it?), but their “agree or be rejected” policy only forces conformity of thinking.

        Looking out into some near-perfect future when there is a perfect, proven, engineering-level climate model, THAT one will have all its formula terms and constants and data acquisition down pat. Getting there from here – can that be done in a climate (no pun intended) of “conformity or else”? May the gods help anyone who thinks that. We’d still be working with epicycles, the ether, and a flat Earth, if that is the standard to work with.

      • As I have often said in response to this kind of comment: How can you prove that a model will be correct in the future? If you know what will satisfy you, you need to say it, but actually I think it is an impossibility. Does this mean climate projections should not be tried, because such proof is impossible?
        You only prove it by waiting. I think only one or two more decades will be enough for most to agree that these models are actually correct, and the scientific community is already convinced these models have the necessities to do a good job with their prediction, so your alternative is to listen to them.

      • Dear Jim D:
        “You only prove it by waiting. I think only one or two more decades will be enough for most to agree that these models are actually correct, and the scientific community is already convinced these models have the necessities to do a good job with their prediction, so your alternative is to listen to them.”

        This way of thinking is unscientific and dangerous. The output of these computer models has already bankrupted countries in Europe and Japan, and they have pulled the plug on “green” projects because they can no longer afford them. And we cannot afford to go by the flawed physics of these computer models, for they will devour our resources.

        There is a lot of data relative to the earth, and if a computer model can duplicate the past and present based on transparent science that is accepted by the science community, there is no reason why the model will not be good for projecting the future.

      • I guess the basic disagreement with the GCM makers is about whether something wrong in details can be right in gross. The detail is driven by the same forces which determine the long term patterns, and if they’re wrongly identified or quantified for the short term, it is hard to see how they could also be correctly identified and quantified for the long.

      • “Getting the “right” answer for the wrong reason won’t help you with making future predictions.”
        ________

        Probably not, but if you consistently get the “right” answer for the wrong reasons, you might be on to something.

      • The thing someone is onto who gets the right answer for the wrong reason often involves cheating.

      • It’s hard to cheat on a forecast.

      • Not if it’s 100 year forecast.

      • No, eventually the 100 years will pass.

      • Prove it.

      • M. carey –
        Silly boy – none of us are likely to be here to see it so it’s not a “forecast” but rather a prediction. No better than Ouija boards or chimps throwing darts.

        In 1910, how many people predicted today’s automobile/airplane/cell phone/computer society? Why do you think any prediction made in 2011 will be worth the paper it’s written on in 100 years? I keep telling you, babe – ignorance of history is also ignorance of the future.

        Brian isn’t sure 100 years will pass, and wants proof it will happen. I can’t help him. Indeed, as I write I can’t even prove the next 24 hours will pass, but I am nevertheless making plans for tomorrow.

        Jim Owen’s egotism makes him believe honest forecasters don’t forecast beyond his life expectancy. Jim might consider interpolation.

        Not if you change your predictions or write them so vaguely they can be reinterpreted as needed.

        Judith: You say: “Climate models can simulate the annual cycle and meridional variation of global climate variability.” How well should climate models be able to simulate the annual cycle and other types of variability before we can trust their climate change projections? On most of the planet, the annual cycle is driven by massive changes in incoming solar radiation. If the amplitude of the annual cycle were 4 degC (over ocean) or 8 degC (over land), how accurately should a useful model be able to predict that amplitude? With tens to hundreds of annual cycles to average, starting state and chaotic behavior should not be critical problems.

        If you think this question is worthy of an answer, you can decide whether you are satisfied with model performance by consulting Figure S8.2b from AR4 WG1 at: http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter8-supp-material.pdf Unfortunately, the authors expressed their results in terms of the standard deviation of monthly means. The standard deviation of the monthly means is dominated by the annual cycle where the amplitude of the annual cycle is large. (Outside the tropics, I estimated that the amplitude of the annual cycle is about twice the standard deviation of the monthly means. Inside the tropics, other types of variability (ENSO, monsoon seasons) less directly linked to solar forcing may be more important.)
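The rough factor-of-two estimate can be checked against an idealized case. For a pure sinusoidal annual cycle, the peak-to-trough amplitude of the monthly means is exactly 2√2 ≈ 2.8 times their standard deviation (or √2 ≈ 1.4 if “amplitude” means the half-range), so the estimate depends on which definition of amplitude is used. A quick sketch (the 8 degC figure is just the hypothetical over-land value mentioned above):

```python
import math

# Monthly means of a pure sinusoidal annual cycle with half-range A.
A = 8.0  # degC, the hypothetical over-land value from the comment
months = [A * math.sin(2 * math.pi * m / 12) for m in range(12)]

mean = sum(months) / len(months)  # 0 over a full cycle
std = math.sqrt(sum((x - mean) ** 2 for x in months) / len(months))  # A / sqrt(2)

peak_to_trough = max(months) - min(months)  # 2A
print(round(peak_to_trough / std, 3))  # 2*sqrt(2) ~ 2.828 for a pure sinusoid
```

Real annual cycles are not pure sinusoids, so an empirical ratio near 2 is plausible, but the comparison shows why expressing results as the standard deviation of monthly means understates the cycle’s amplitude.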

    • I rather think the issue of whether GCMs are regarded as engineering models or scientific models is the point in question (or perhaps when it is appropriate to regard them as each).

      The former produce results over a constrained set of circumstances that have been well validated empirically and can therefore be used to estimate other behaviour within those constraints.

      The latter are developed from much wider principles that have broad, or even aspire to universal, applicability, and they produce inferences that carry that wide applicability with them. They purport to offer insights into phenomena outside a narrow tested range of assumptions.

      In dealing with the problem of what global temperatures will be in 50 years time, I suspect GCMs fall between both stools.

      • well put, a culture clash between scientists and engineers over a very complex modeling problem

      • Dr. Pielke has come down firmly in favor of the engineering model side of this issue.
        Do you see this as one way or the other, or is it possibly a mixture of both?

      • That is utter BS.

        A model is either scientifically developed and validated, or it is a “religious-like” position based upon philosophy. A philosophy may be correct, but from a science perspective it is little more than a random shot.

        This is basically the very purpose and philosophy behind the founding of The Royal Society 350 years ago – validation, validation, validation. It was established that way specifically to separate Platonic/Aristotelian-type discussion from hard science. And they came down solidly on the side of hard science.

        Engineering, as I learned it, is “applied science,” as opposed to theoretical science. Engineering is for AFTER all the specifics have been identified, codified, and charted. If policies are going to depend on climate science reaching that level of repeatability, policies are going to have a LONG wait.

        Judy’s discussion here from several months ago about uncertainty was to address that very point – that it is not at any engineering level. I am not certain myself that it cannot some day get to that certainty-engineering level. I actually have always thought it would. Complex and chaotic though they may be, I can’t see it as being unattainable.

        But that time is absolutely not now. But it will not get there, not if conforming to narrow output ranges, based on comparisons with other MODELS is the measuring stick of how accurate (in the real world) any “member” model is.

        When Hooke and Newton went toe to toe with each other, none of science was at an engineering level, yet many fields have attained that since then. They couldn’t see it then, but it happened. And we can’t see it today – but why should it not happen? Do we think no one will solve chaotic systems, ever, just because we can’t? Heck, we are only 70 years into the computer age. Let’s give the future geniuses some credit.

      • Steve –
        Do we think no one will solve chaotic systems, ever, just because we can’t?

        Those who think the science is settled see no need to wait for that certainty – or for the resolution of chaotic systems. They have no faith in the future – or in those future geniuses. Rather, they believe they have all the answers that are needed – now.

        I’ll let you guess what I think of that attitude.

      • Jim, I have to agree. The thing that got me first bothered about climate projections was the certainty coupled with the amount that we knew we did not know. There is a strange philosophical dichotomy in believing on the one hand that small inputs can have large effects yet on the other believing that possibly small, possibly large unknown inputs will have little to no effects.

      • HAS

        produce results over a constrained set of circumstances that have been well validated empirically and can therefore be used to estimate other behaviour within those constraints.

        It appears that the GCM’s fail to predict the major decline in global warming over this last decade. They also systematically overestimate the global warming since 1980. Consequently, as an engineer, I would judge that GCM’s are NOT “engineering models” and have NOT been “validated empirically”. In light of the increasing contrary evidence, I am not sure they qualify as falsifiable “scientific models” either.

      • I think there are baby and bathwater issues here.

        If one says that weather forecasting models are a special case of GCMs, then I think in this domain these models have the characteristics of acceptable engineering models.

        Similarly GCMs can be used to successfully model in a scientific sense subsystems of the atmosphere.

        So GCMs aren’t bad per se. The issue is whether they are the best tool to model what global temperatures will be in 50 years’ time.

        As I said I suspect GCMs fall between both stools.

      • Dear HAS:

        “The former produce results over a constrained set of circumstances that have been well validated empirically and can therefore be used to estimate other behavior within those constraints.”

        Engineering models are scalable based on empirical data and well suited for complex systems or processes. A scientific model can go astray easily.

      • I think we both said much the same thing, unless you were claiming that engineering models are robust when scaled to untested dimensions.

        Would you fly on an airplane that was based only on engineering models at 1:1000 scale?

        Moving this conversation on a bit it is most unlikely that we will develop a scientific model (GCM or otherwise) that tells us what the global temperature will be in 50 years time.

        It will need to be more in the nature of an engineering model.

        The questions are what does that engineering-type model look like and what science should we be doing to increase the robustness of that engineering model?

      • Dear HAS:
        There is a lot of past and present data relative to the earth, which makes the engineering model very attractive. I prepared an engineering model and scaled it down to the hemispheric level. Also, the present warming trend is scalable from the last warming cycle. Future climate can also be projected. If you wish to see it, please look at Appendix A-3 of my book, Global Warming Calculation and Projection; Article-2, Global Warming Calculation Methods; and items 2.2, 2.3, and 2.4 of Article-12, Earth’s Magic. All are posted on my website: http://www.global-heat.net. Please let me know what you think.

      • Dear HAS:
        When you are done reading the engineering model, you will find that 2008, 2009, and 2010 were projected satisfactorily. Please see Tables 1 and 5, pages 58 and 60 of the book, and Table 5A, page 44 of Article-12, Earth’s Magic. From 2011 data, I gather that this year will be projected reasonably well too. The model has been tested and appears to have passed a difficult test satisfactorily. I say a difficult test because all the other models projected considerable warming, whereas the engineering model projected the temperature and sea level rise correctly.

      • Ta will look in slower time

      • And if you apply a known force to a ball of unknown mass, gravitational theory would “fail to predict” the trajectory of the ball. Do you hold against gravitational theory its inability to make predictions without even known input values? And if not, why the double standard with respect to climate models?

        In either scenario, I would stop the project until all data are made available. But in the meantime, if desired, and the mass of the ball is assumed to be m1, m2, m3, m4…mn, there will be “n” accurate trajectories. What about present climate models – can they do the same?

      • I commented below on your comment along similar lines, thinking that you had misunderstood the severity test notion, but I suspect seeing this you are making a different point.

        Your point seems to be that the failure to predict the current state of the climate was because the GCMs were given the wrong independent parameters. Is this what you are saying?

        Alternatively you could be saying that there are independent parameters that are intrinsically unknowable, but I’m not sure this is your point because the wind’s behaviour is intrinsically unknowable in a system consisting of a rock and gravitational force.

        I wouldn’t claim to know they’re “intrinsically unknowable,” but ENSO, AMO and the like are certainly unpredictable so far, and yes, a result of that is that the models were recently given several “wrong independent parameters,” all at once, all wrong in the direction of cooling. And yet we have still not had cooling; we have had the warmest decade in human history, just not warmer than all the others by enough to shut up climate science deniers.

      • If ENSO etc are key drivers of global climate and their occurrence (and evolution?) “unpredictable so far” where does that leave us?

      • That leaves us with large error bars (which I think everybody knows) due to the random independent parameters more than the quality of the model, which I think is less well understood or at least less well appreciated. And since ENSO, etc. average out over time, that leaves us able to predict climate in the long term very well. In the short term we can use commodities markets and flood insurance to minimize the risks of the short term fluctuations we can’t quite predict, but in the long term we can’t get around the spectral lines of CO2, H2O etc.

        So in fact we know quite a lot about ENSO etc. I’m not sure if you are saying these are in fact among the “random independent parameters” you refer to – are they?

        But in any event we know their impact averages out over time – so clearly we know about their statistical parameters. Out of curiosity what is the period that these effects average out over, and what else do we know about their distribution and impact?

      • Chief Hydrologist

        We know that ENSO is ‘non-stationary and non-Gaussian’

        ENSO varies over decades to millennia – this Tsonis study reproduces an 11,000-year reconstruction at Figure 5 – from memory

        http://www.clim-past.net/6/525/2010/cp-6-525-2010.pdf

        “Non-stationary and non-Gaussian,” “varies over decades to millennia,” AND averages out over time, AND its effect can be partialed out from other effects given the short initial conditions available to GCMs.

        We surely do know a lot about ENSO et al. Under these constraints I’d say totally predictable, for all intents and purposes.

      • Please correct me if I’m wrong, but it seems to me that if ENSO is unpredictable over timescales that are greater than the period of ‘recent warming’ then there is little meaningful ‘averaging out’ that can be assumed in the models.

      • RobB

        I guess the answer is “Yes”, so at least in that way it’s settledscience :)

      • What you are saying here is based on the fact that , though climate science would like to believe otherwise, it is still in the early stages of just getting their feet under them about data collection – about what data to collect – and about determining what factors/parameters even exist. The near total lack of understanding of water vapor and clouds screams out at us on this very point.

        It is simply too early in the development of this discipline for them to claim the exactitude they claim. They should be solidifying the fundamental evidence, not putting out future predictions. They are making what could be a science look like astrology. And they don’t understand, any better than the astrologers do, what underlies the effects they are observing. (I hate to bring in the pseudo-science card, but…) There simply isn’t adequate information yet, even though they think so. An adolescent thinks he or she is ready for the adult world, but almost always is not. Climate science now is as little developed as zoology and botany were 250 years ago.

      • No, that was not what I was saying at all.

        What you are saying here is based on the fact that , though climate science would like to believe otherwise, it is still in the early stages of just getting their feet under them about data collection – about what data to collect – and about determining what factors/parameters even exist. The near total lack of understanding of water vapor and clouds screams out at us on this very point.

        What I am saying now is “BS.” We know for a fact that the water vapor feedback is strong and positive, and that the cloud feedback, whether positive or negative (probably positive), is in either case very small compared to water vapor.
        The major climate oscillations remain unpredictable, but the fact that water vapor is a greenhouse gas is just as settled as it is for carbon dioxide.

        What you don’t know is not the same as what is not known.

      • Do we know for a fact that water vapour is strong and positive taking into account atmospheric dynamics and the hydrological consequences
        of a moister atmosphere, or do we know for a fact that GCMs produce that result?

      • Steve Garcia

        Replying here to SettledScience because there is no “reply” link in SettledScience’s comment just below…

        Yes, I mis-worded that, about water vapor. I meant to say they do not know enough about water vapor – specifically its GHG effects – to represent it in the models. Do I know this as absolute fact? No. I don’t have their code in front of me. If you, SettledScience, know otherwise, please fill me in.

        We seem to be at the end of our “reply” links; they only seem to go so deep, and then they don’t appear anymore. So I don’t know where we can take this discussion. It is also a bit OT here.

      • HAS & Steve Garcia, you’re both just being silly. Yes, it is known “for a fact that water vapour is strong and positive taking into account atmospheric dynamics and the hydrological consequences of a moister atmosphere.”

        This is one of those results that is verifiable, and repeatedly verified, in laboratories, starting with Tyndall’s — but not “settled” merely by Tyndall. It’s settled by the repeated verification of his finding. Water vapor is a greenhouse gas, which absorbs at different wavelengths from carbon dioxide, thus both add to the total planetary greenhouse effect.

        Clouds trap some heat below them, but also cool by increasing the albedo. Which of those two effects dominates depends on the altitude of the cloud. The total effect across the planet is known to be small, within uncertainties that constrain it to a much smaller effect than water vapor. It may be a small positive feedback or a small negative feedback overall, but in either case, no serious scientist believes it is similar in magnitude to the strong, positive water vapor feedback. The error bars just aren’t that large on how many clouds are at altitudes that cause them to have a net cooling effect, and which have a net warming effect. The challenge with clouds is reducing those error bars with better observations, not learning whether water vapor or condensed water vapor in clouds has the greater effect. That is known. The water vapor feedback is strong and positive and much greater than the effect of clouds on the planetary energy balance.

        Agreed that is “verifiable,” but it’s a big leap to say that what happens in a lab is consistent with what happens “taking into account atmospheric dynamics and the hydrological consequences of a moister atmosphere”.

      • No Sir,
        As discussed above with lolwot, the energy equation used in present climate science (N = F − λΔT) is incorrect, and therefore climate models are failing to project the future. I have solved this equation and applied it so many times; it is simply the wrong equation for climate and defies the laws of nature and observations. This equation assumes a warming planet earth, which is not warming. The surface is warming, but the upper atmosphere is cooling equally, such that planet earth as a whole never warms up. Thermodynamics and observations are in agreement. For more, please see Earth’s Magic on my website http://www.global-heat.net.

      • As an engineer, I would judge that GCM’s are NOT “engineering models” and have NOT been “validated empirically”.

        This kind of comment has always intrigued me – as often as I’ve seen similar comments at this and other climate science oriented blogs.

        Don’t get me wrong – I respect engineering as a field and as a profession (my brother is a professor of EE with practical experience and many years of applying theoretical EE toward the goal of producing medical equipment). But I would imagine that anyone who reads this blog has purchased products that were designed by engineers who put their stamp of approval on designs that contained obvious flaws. No doubt, sometimes the fact that a finished product contains flaws cannot be directly attributed to the failure of an engineer or an “engineering model,” by dint of financial constraints or other failures in the production or marketing stages. And of course, there are always factors like “designed obsolescence.” But I find it hard to believe that anyone reading this blog, even the most engineerish of the engineers reading it, has any illusions that “engineering models” are always sufficiently “validated empirically.” I find it hard to believe that anyone reading this blog doesn’t regularly use some engineered product and say to themselves, “How could an engineer possibly have felt that design was of reasonable quality?”

        Empirical validation is an inherently limited concept – limited by the infinite variability of how modeled objects or concepts interact with the real world.

        Of course, that doesn’t diminish in the least that there are many products that I use every day that are remarkably durable and that perform extremely well.

        Still, I could be wrong – but my sense is that there is, at this blog and others, an often unrealistic vision of the relative depth of “engineering quality” compared with that reflected in the models developed by scientists – and that the genesis of that unrealism lies in the fact that many of this blog’s participants are, in fact, engineers.

        So we should all go down the fantasyland express, Joshua, and treat the many bold predictions regarding AGW that are in fact supported only by fitted and abstract climate models as the alternative?

        Billions have been sacrificed – essentially, people die – to support the politically motivated agenda of AGW mitigation, while the sacred cow of climate models and fuzzy measurements, data and conclusions must never be “put to the test”?

        The problem in our society as a whole is that the abstract ivory tower that to some degree always exists has now shifted to become a major political center of influence. Massive defunding of soft and politically funded agendas is called for. Let private industry carry more of the research load and we should investigate more deeply the political linkage of agenda science. Climate models are a good place to start.

        So we should all go down the fantasyland express, Joshua, and treat the many bold predictions regarding AGW that are in fact supported only by fitted and abstract climate models as the alternative?

        Strawman, cwon. If you want to engage me in dialogue – you’ll have to deal with what I actually say, or at least present implications that flow more logically from what I say. Bottom line, nothing in the comment that you responded to was intended, in the least, to support the conclusion that you seem to have drawn.

        So let’s try again. I’m not even remotely suggesting that the theories of climate scientists not be subjected to rigorous scrutiny. To repeat – what I’m suggesting is that some of the engineers that participate on this blog, I suspect, have an unrealistic viewpoint of the relative merits, generally speaking, of the real-world viability of some abstracted “engineering model” vs. computer modeling done by climate scientists. And I’ll add that because of having witnessed broad-scale over-confidence in the computer modeling done in the early years of AI, I am inherently skeptical of computer modeling. Still, when I can now speak into my cell phone and have it talk to me and tell me how to get from my house in Philadelphia to the CoHo Cottages in Walnut Creek, CA – I am fully cognizant of the achievements of both computer modeling and of engineering models.

      • Joshua
        See: To Engineer is Human: The Role of Failure in Successful Design
        Obviously we as humans make mistakes. NASA’s space program had its share of failures. Consequently it set up its Independent Verification and Validation Program. See Climate Etc. on Climate Model Verification & Validation for further discussion.

        Even with increasing data available, meteorological prediction accuracy declines exponentially with lead time – e.g., it is able to successfully predict 10–12 days out, possibly 15 – more so in Europe, where it has US data to rely on.
        When you have multivariable non-linear chaotic systems, the ability to predict is limited – and to control is not possible. See chemical engineer Pierre LaTour who is a world specialist on the issue of controlling nonlinear multivariable systems.
        See: Pierre LaTour “Engineering Earth’s Thermostat with CO2?” and Letters between Latour & Temple
        Committing to “control” climate when we do not even know all the physics – particularly on cloud feedbacks – cannot accurately measure the temperature signal – have large delays between forcing and feedback – have numerous uncontrolled variables – and have nonlinear relationships with chaotic interactions – is incredible hubris and ignorance. When a premier world expert raises warnings using control terms most are not familiar with, it is worth taking notice!

        (Note – open cycle “control” by replacing fossil fuel with cheaper solar thermochemical fuels would “mitigate” anthropogenic global warming – if it proves to be an issue.)

        Many scientists and computer specialists are working hard on GCM’s to make them more accurate. But that is not “validated and verified”. Yet we have “policy” advisors claiming 90% accuracy and persuading politicians to commit trillions of public funds – on what?

        See my post below on the failure of IPCC GCMs to predict the long term trend since 1980 and especially for the last decade. To confidently state that current GCM’s can predict temperatures to a fraction of a degree out 100 years is expecting too much even for “scientific” models.

        We need these “severe tests” to check basic functionality, the accuracy and length of prediction. Currently most appear to have been found wanting.

      • John Carpenter

        Joshua,

        I don’t think lumping all engineers into one mold is any more valid than lumping all climate scientists into one mold. You make it sound like all engineers are alike. Engineers that design and build for aerospace vs engineers that design and build, say, kitchen appliances work to much different requirements. I am not saying one group of engineers is better than the other per se, but those that design for aerospace generally have to go through much more rigorous validation of the design compared to kitchen appliance design, for obvious reasons. Since I work regularly with engineers in the aerospace community, I guess I have a bias to think about that ‘type’ of engineer. What I mean by ‘type’ is one that uses multiple levels of testing and validation of the design prior to use. I would guess that any engineer engaged in the design of hardware where safety is a primary concern, in general, uses more rigorous validation methods. So when I see commenters use ‘engineering quality’ as a standard that means ‘well tested’… I am thinking of the aerospace type vs the kitchen appliance type.

      • A fair point, John. My comment was too broad. There are certainly different levels of comprehensiveness involved in the validation processes of different types of engineering.

        But still, as was mentioned by David, we can find more than a few instances where engineering models have failed in the aerospace engineering community.

        So even with the caveat you address – I still question the level of confidence in “engineering models,” relative to models developed by climate scientists, as often expressed by engineers engaged in the climate debate.

      • John Carpenter

        As David also pointed out, engineers are prone to making errors as well… only the ones that aerospace engineers might make tend to be a bit more dramatic and thus more likely to ‘make the news’. One engineer I worked with used to make decisions on hardware based on whether it would pass ‘the newspaper test’…. i.e., he was very conservative in the changes he would consider for the ‘flight safety’ piece of hardware he owned. Air travel is still statistically one of the safest ways to travel.

        What would be really interesting would be to compare how a group of engineers would approach the problem of modeling climate vs the way climate scientists currently do. I do not mean by using some alternate version of radiative heat transfer or the GHE, just the methodology differences between the two at approaching and solving the problem.

      • I suppose one can look at my analyses.

      • John Carpenter

        WHT, ok… link or reference?

      • For example, I use engineering analysis of impulse/response in the link on my handle above. Also, engineering involves uncertainty analysis and characterization of real disorder via probabilities, which I consider significant in this domain.
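
      For readers unfamiliar with the approach, here is a minimal impulse/response sketch in Python – my own toy under stated assumptions (a first-order lag with an assumed relaxation time tau and an arbitrary forcing level), not WHT's actual analysis:

```python
import numpy as np

# Toy impulse/response sketch: a response series modeled as a forcing
# series convolved with an exponential (first-order lag) impulse response.
# tau and the forcing level are illustrative assumptions, not fitted values.
dt, tau = 1.0, 10.0                          # time step and relaxation time, years
t = np.arange(0, 100, dt)
impulse_response = np.exp(-t / tau) / tau    # unit-gain first-order lag kernel
forcing = np.full(t.size, 0.04)              # constant step forcing, arbitrary units

response = dt * np.convolve(forcing, impulse_response)[:t.size]
# After ~10 relaxation times the response settles near forcing times the
# (discrete) kernel gain, i.e. a little above 0.04 here.
print(round(response[-1], 3))  # ≈ 0.042
```

The same machinery extends to fitted kernels and uncertainty characterization via residual statistics, which is the engineering habit being described.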

      • Dear John,
        Yes, you can approach things differently and reach the same results. The advantage of an engineering model is that it will definitely make one think out of the box and look around at the real world. That is not necessarily the case for a scientific model. Look at the present climate models that we have: they created a virtual Earth that has nothing to do with the one that we live in.

      • Joshua –
        I still question the level of confidence in “engineering models,” relative to models developed by climate scientists, as often expressed by engineers engaged in the climate debate.

        There is no comparison. Or at least there was none while I was still working. Engineering models are/were required to be verified and validated. I was part of that process.

        Climate models apparently have no such requirements even though the consequences of their output have been used to justify otherwise unjustifiable actions and proposals.

        The engineering “model” failures you might think of (like Challenger) are not generally failures of the models, but rather human failure. The Challenger incident is a case in point. As were many of the other stories I could tell you.

      • Even engineers can have their hands forced by managers, politicians, and funding bodies. In Climate Science, care has been taken to try and make sure no one in the loop has enough leeway even to make a quick scribbled cautionary footnote.

      • We should get far more detailed studies of what the IPCC consensus players, climate “scientists”, are “like”. I think many of us already know the similar nature of the Sierra Club, Elite Academic faculty in largely government debt driven universities, government funded science enclaves, Greenpeace, The U.N. etc.

        Again, they aren’t all uniform but why are they self-obfuscated with a comparable media that plays along?

      • Steve Garcia

        John –
        I have done both “kitchen appliance” type engineering and “precision R&D well-tested” type, too, as a mechanical design engineer.

        It all depends on how close the application is to the frontiers of what is possible.

        SOOO much work has been done in earlier decades to codify things like strength of materials and other standards (of structural shapes, available alloys, screw forms, etc.) so that the information is at the fingertips of engineers – specifically so the engineer does not have to re-invent the wheel every time out. All engineers lean on those who have gone before. Within those established standards, there is room for flexibility in design. It is the engineer’s job to stay within those standards, and to also produce a cost-effective product.

        Aerospace is out on the edge, and it is a small niche, and there are relatively few established standards. So, much of what you do MUST be tested first – the testing that kitchen appliance engineers don’t have to reinvent.

        I would liken those two somewhat to weather models and climate models. The former have established pretty reliable and repeatable forecasts, based on empirical experience. Climate models are based on the same principles, but are still out on the edge of what is known — and they can’t go down to the shop floor to take a measurement of worldwide climate in 1743 BCE or even 1803 CE. So their limitations make their models – even the bases of their models – less certain. To claim then a certainty they don’t yet have is not exactly warranted.

        And thus SOME sort of testing must be done, at some point. This Katzav suggestion is as good a place to start as any, at least for discussion purposes. SOME of what he says needs to be incorporated into whatever testing is done. To deny that ANY testing should be done, as some here seem to be saying – what kind of science is that?

      • Steve Garcia

        Being a non-degreed engineer who worked his way up, but who is also very much interested in academic topics, IMHO engineering begins when serious repeatability is needed, in order to convince customers that what is being sold will work reliably for them, now and on into the future – time after time after time. Prior to reaching that level of consistency and predictability, it is still unreliable and not salable.

        If science as we call it is held to answer for its predicting capacity, then let’s say that at the near 100% predictability level, it can be then passed on to engineers, to apply it to whatever the marketplace can dream of applying it to. In some (most?) industries, that means it really really really needs to be 100% reliable. Perhaps not all. Maybe in electrical there is some room for unreliability. I am mechanical, and ours is all known, short of super extreme applications – in which case the customer is informed of risks and asked if they are prepared to deal with that. So, basically, when a concept or phenomenon becomes a product to sell, engineering begins at that point.

        There is an overlap, and that is where industrial R&D comes in. That does not include government R&D, I don’t think, though I know Los Alamos does both.

      • No. The AVERAGE of all the models was high. Individual runs of individual models have some examples that show the cooling that occurred.

      • Now, tell us in advance which models will be high and which will be low.

      • The AVERAGE of the models was cited as the “best” and/or “most likely” result, was it not? Were not the MAJORITY of models (model runs) that showed cooling rejected as “unrealistic”? If the answers are both “Yes”, that would seem to indicate that:
        1) models (model runs) encapsulate such a large range of scenarios as to be unusable as predictive tools, at least in the short term (no long term data is available to make a judgement at this point in time);
        2) the (subjective) criteria used to select “realistic” scenarios are likely biased by expert preconceptions.
        Under such conditions should we base high impact policy decisions on these models? The answer must be a resounding “No!” at this time. And yet we do…

      • You don’t have to be a enjineer to distrust a model with a crappy record. Better than room temp IQ should suffice.

      • HAS
        See Interpretation of Global Mean Data as a pendulum by Girma Orssengo. He shows the temperature Min/Max and mean fit a 60 year cycle on a rising trend from 1860 to present.

        Christopher Lord Monckton shows statistical fallacies of “accelerated warming”. See: Open Letter to Chairman Pachauri Dec. 18, 2009 SPPI.
        Monckton shows that the opposite accelerated “cooling” conclusion can be obtained by a similar statistically erroneous selection of end points.

        Monckton shows three periods with similar warming trends: 1860-1880, 1910-1940, 1975-1998. This trend analysis shows a ~60 year cycle.
        As Chief Hydrologist points out, Tsonis, Easterbrook, Scafetta and others are showing similar analyses of a cyclic variation on a long term trend. Such a 60 year cycle on a warming trend since the Little Ice Age forms the “null hypothesis” of natural variation. The IPCC’s scientific task is to show statistical evidence that anthropogenic warming can be detected and how it varies from this null hypothesis.

        To date, I judge the evidence “Not Proven”, for starters because they fail to acknowledge this long term natural cycle-plus-trend.
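
        As an illustration of the kind of null-hypothesis fit being described, here is a hedged Python sketch that fits a linear trend plus a fixed-period 60-year oscillation by ordinary least squares. The data below are synthetic stand-ins; the cited analyses (Orssengo, Loehle & Scafetta) of course use the actual global mean record:

```python
import numpy as np

# Fit "linear trend + 60-year cycle" to a synthetic temperature series.
# The generating trend (0.005 C/yr) and cycle amplitude are assumptions
# chosen only to resemble the shape of the discussed record.
years = np.arange(1860, 2011)
rng = np.random.default_rng(0)
temps = (0.005 * (years - 1860)
         + 0.15 * np.sin(2 * np.pi * (years - 1880) / 60)
         + 0.05 * rng.standard_normal(years.size))

# Design matrix: intercept, linear trend, and a fixed-period sine/cosine pair
# (the sin/cos pair absorbs the unknown phase of the 60-year cycle).
X = np.column_stack([np.ones(years.size),
                     years - 1860,
                     np.sin(2 * np.pi * years / 60),
                     np.cos(2 * np.pi * years / 60)])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
trend_per_decade = coef[1] * 10
print(round(trend_per_decade, 2))  # recovers ≈ 0.05 C/decade on this synthetic data
```

The point of such a fit is that the residual left after removing cycle plus trend is what an anthropogenic detection claim would have to explain.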

      • David

        My observation related to whether GCMs were the best way to model climate in 2050, and I was suggesting not. I would go for a higher level “engineering” type model and constrain the domain being explained, rather than soldier away with a first principles more “scientific” model that attempted to explain everything at the level of a grid cell.

        On the question of cycles in the climate I’d just observe that if cycles are an intrinsic feature of our climate, climate models should reproduce them, if climate models are accurate (although that runs dangerously close to the old tautology: if we had ham we could have ham and eggs if we had eggs).

    • The significant difference between the two types of models is that engineering (scientific) models are based upon data that can be replicated, while climate models are based upon historical trends that cannot be replicated. This is at the heart of Popper’s argument. It’s not just the “prediction” that needs to be reproducible, but also the data.

    • Norm Kalmanovitch

      An engineering model uses validated parameters to predict a quantifiable output that must be free of errors; the climate models use a fabricated input for the effect of increased atmospheric CO2 that is designed to create catastrophic warming out of nothing. Since the input is a fabrication (in violation of the quantum physics that controls the interaction between the thermal radiation of the Earth and CO2), the output is also a fabrication, so Popper’s arguments for scientific validity do not hold, because this is just make-believe and not science!

  2. The concept of severe testing as described in the excerpt appears appropriate for testing a specific theory or hypothesis. For that purpose it’s essential that the data used have power to discriminate between that hypothesis and alternative hypotheses. But I cannot see the issue in climate model development to be of that nature.

    The alternative models are not developed on the basis that one would be correct and others wrong. It’s rather expected that all models have something right and something wrong. There may occasionally be questions, which are likely to have the binary alternatives, right and wrong, but that’s exceptional, if present at all.

    By that I don’t want to say that testing the models would not be essential, but I cannot see much value in Katzav’s approach to it.

    Something related is certainly true: tests should be defined to test the features essential for the planned application of the model. The case of the Hadley climate models illustrates what I have in mind. They are based on the meteorological weather models used in short- and medium-term forecasting. Their skill is regularly tested in that use, and much information is available on their capability to describe atmospheric phenomena on short time scales. When models based on these are used in climate modeling, they are run over much longer periods, and all kinds of tricks must be used to keep the models from going astray as the period gets long. Specifically, corrections must be made repeatedly to satisfy conservation laws. Testing the models as climate models must therefore put great emphasis on assuring that they satisfy conservation laws, but without being forced by that to break some other physical requirement. There is, e.g., the risk that the models are forced to produce less variability than their basic equations would produce without the artificial corrections added to the numerical methods to satisfy conservation laws. (The original equations satisfy conservation laws exactly, but that cannot be maintained in the discretization for all conservation laws, which leads to the need for extra corrections. That creates various problems of cumulative error over long periods.)

    I don’t want to imply anything about the present level of understanding or testing in that area, because I don’t know anything about that. I just try to give an example of issues that should get particular attention in testing. This is not the same approach that Katzav is proposing as far, as I can see, although both emphasize the importance of right tests.
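
    A crude Python illustration of the kind of correction being described – entirely my own construction, vastly simpler than anything in a real GCM: a conserved tracer is stepped with a discretized scheme, round-off (mimicked by tiny noise) leaks the conserved total, and an ad hoc rescaling restores it, at the cost of also touching the solution in other ways:

```python
import numpy as np

# Advect a tracer around a periodic 1-D ring with first-order upwind steps.
# A tiny noise term stands in for accumulated round-off/discretization leakage
# of the conserved total; the rescaling is the ad hoc "conservation fix".
n, courant, steps = 100, 0.9, 500
rng = np.random.default_rng(0)
q = np.exp(-0.5 * ((np.arange(n) - n / 2) / 5.0) ** 2)  # initial tracer blob
total0 = q.sum()

for _ in range(steps):
    q = q - courant * (q - np.roll(q, 1))   # upwind advection step
    q = q + 1e-6 * rng.standard_normal(n)   # stand-in for cumulative round-off
    q = q * (total0 / q.sum())              # correction: force exact conservation

# The corrected run conserves the total essentially exactly -- but the
# correction also perturbs the field, which is the worry raised above.
drift = abs(q.sum() - total0)
print(drift < 1e-9)  # True
```

Over many thousands of steps, such corrections can accumulate into a determining influence on the solution, which is exactly why their long-term effect is hard to test.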

    • Dear Pekka:
      I agree that climate models are a retrofit of weather models, and climate is not the same as the weather. But let us not forget that climate science is flawed and the physics of climate models is wrong. These need to be addressed as well.

    • Steve Garcia

      (If I can say this briefly…)

      Physicist Freeman Dyson has long been interested in the origins of life. Long ago he wrote that he thought that the beginning of life could not have started with proteins/amino acids or the ability to replicate, because neither could tolerate errors at all. He thought there had to have been a way which would allow more errors than either of those were seen to allow at that time. (This was so long ago that I don’t know if his argument still has any truth to it.) He thought that it needed more flexibility, more or less.

      The accumulation of errors you talk about in climate models may be like that. The main difference is that in the beginnings of life, the result was no life at all, if it failed, but in climate models (early ones, anyway) it is that it goes to infinity, which is kind of the opposite of being dead. The modelers first had to tame that infinity problem, and now you are saying that they also have to not constrain it too much with extra corrections.

      Whereas the beginnings of life began with one cell, though, climate science deals with vast quantities of molecules and heat flows. I would think that the flexibility is built into those huge populations. If a few molecules don’t do something exactly right, they will drop out of what is going on or get carried along anyway. Perhaps the % of molecules (real, not modeled) affected varies enough so that any model misses this variability/error. Maybe the corrections are meant to represent this missing %. Maybe chaos theory or even quantum theory can work that out. Perhaps Heisenberg’s Uncertainty Principle is at work here.

      • Steve,
        I was not very satisfied with my own message, because I didn’t really make clear what I wanted to say, but decided to post it anyway.

        In very general terms my intended message is, for the first part, that it’s essential to find and use tests which have power to verify or disprove those features of the model that are most crucial for its success. For the second part, my intended message is that the excerpt from Katzav’s paper that Judith posted doesn’t really lead in that direction, as his approach is right for a different problem. His approach is correct when specific theories or hypotheses are being compared, not when the task is to develop step-by-step better and better models through improvements both in the description of a multitude of phenomena and in computational methods.

        The example that I used referred to the latter part, i.e., finding better and better validated computational methods for a hyperbolic-type set of problems, which can never be linearized and discretized completely consistently, and which leads with certainty to false paths if used without certain corrections to maintain the conservation laws that the discretization violates. As such corrections typically force the model to behave more smoothly than it would without them, they don’t only save the conservation laws but also influence the outcome in other ways. When this is done a very large number of times, the cumulative effect may grow to dominate the results. This is a well-known problem that the modelers certainly worry about, but it’s also a very difficult problem, and one whose severity is difficult to determine by testing. Comparing different discretizations, and perhaps also studying simplified cases whose correct answers are known, gives some understanding, but not very much.

        The problem is closely related to the issue of chaoticity of the Earth system and of the models, because it may specifically modify the type of chaos that the solutions have. It’s also in a very essential way related to the occurrence of quasiperiodic oscillations of long periods in the solutions. Long term projections may be influenced in many different ways by these problems, while the dominating uncertainties are very different in the weather forecasting.

        The question is: How much do the climate modelers really know about the long term relevance of their models, and how they can validate their views?

      • Dear Steve,
        Climate science has to be correct. The reason is that we are conducting calculations relative to a massive body and over long periods of time: multiplications are on the order of 10 to the power of 17, 18, and 20, and the results are in millimeters and 0.008 degrees centigrade annually. In order to be able to get these small numbers with accuracy, the science has to be correct; there is no way around it.

    • Mathematically, errors do not “accumulate” with time, they multiply. The odds of the error bars expanding to infinity (practically speaking) within the forecast period are much higher than CS (or you) want to acknowledge.
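
    A two-line Python toy (my own numbers, purely illustrative) shows the difference between the naive additive picture and step-over-step multiplication of a relative error:

```python
# Compare additive vs multiplicative error growth for a per-step relative
# error eps over many steps. eps and steps are arbitrary illustrative values.
eps, steps = 0.01, 100
additive = eps * steps                    # naive "errors accumulate" picture
multiplicative = (1 + eps) ** steps - 1   # errors compounding multiplicatively
print(round(additive, 2), round(multiplicative, 2))  # 1.0 vs 1.7
```

Even a 1% per-step error compounds to ~170% over 100 steps rather than 100%, and the gap widens rapidly with longer forecast horizons.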

  3. The primary issue with atmospheric and oceanic simulation (AOS) models arises from non-linear complexity. This leads to ‘irreducible imprecision’ in output arising from the range of plausible initial conditions (sensitive dependence) and boundary conditions (structural instability). Very different solutions arise from inputs within the range of plausible values.

    ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ http://www.pnas.org/content/104/21/8709.full

    Models are judged as to formulation – how physically realistic they are – and on the basis of a subjective judgement on solution behaviour.

    ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. Plausibility criteria are qualitative and loosely quantitative, because there are many relevant measures of plausibility that cannot all be specified or fit precisely. Results that are clearly discrepant with measurements or between different models provide a valid basis for model rejection or modification, but moderate levels of mismatch or misfit usually cannot disqualify a model. Often, a particular misfit can be tuned away by adjusting some model parameter, but this should not be viewed as certification of model correctness.’ McWilliams op. cit.

    The question of the plausibility of model formulation is open – ‘seamless forecasting’ requiring orders of magnitude more computing power than is available.

    ‘James Hurrell and colleagues in an article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ http://www.gfdl.noaa.gov/bibliography/related_files/Hurrell_2009BAMS2752.pdf

    Prediction as such from AOS is not ‘plausible’.

    • When can we say that the IPCC models systemically overestimate anthropogenic warming?

      Well, you might wait until the target year of the forecast.
      Then if the models overstate the observed warming, you might ask overstated compared to what?

      • Chief Hydrologist

        Models have an ‘irreducible imprecision’ that derives from chaos, dynamical complexity – whatever you want to call it. We don’t know what that imprecision is because it hasn’t been explored in systematically designed model families.

        So there is a range of results possible from a plausibly formulated model – and the ‘right’ result is a subjective choice of the modeller – a posteriori solution behaviour.

        The range of ‘estimation’ is between 1 and 6 degrees C – all subjectively selected from an unknown range of solutions.

    • Steve Garcia

      Four thoughts:

      1. McWilliams’ passage sounds eerily like explaining why they can’t ever really know how many angels can fit on the head of a pin.

      2. With the gross level of precision of input data (temps measured in full °C, for example) – combined with the lack of historical data points over great swathes of time – attempting to get precision down to 0.1°C seems like walking on a log while two log rollers on it are having a head-to-head contest. But if that is an excuse for why the model outputs/predictions aren’t “plausible,” then this implausibility needs to be communicated to the policymakers. Of course, that might lose funding, so it is probably not a popular choice among climate scientists/modelers.

      3. I dispute that such chaotic systems are not solvable. Just because we can’t do it now doesn’t mean it will not ever be done.

      It is amazing how several magnitudes more computing power is needed, yet overall the climate stays within its narrow range of a degree or so, and it does so apparently as easily as we breathe – with or without humans’ CO2 and land use changes. Does the atmosphere have access to more computing power than we do? I see 0.7C rise and am amazed how small that is, for 111 years.

      • Yes the climate is rather obviously stable in a macro sense despite all the chaos that lies beneath.

        This is actually not a particularly unusual situation in science (think “top down” vs “bottom up” materials science), and is bread and butter to the engineers, as can be seen from the comments above.

        I think our hostess would like to draw some distinction between the techniques of the engineers and the scientists in this regard, with purity on the side of the latter. However when push comes to shove the odd scientist isn’t averse to redefining the element of analysis up a level or two to look for order.

        Paul Dirac and others did it to reconcile Schrödinger’s and Heisenberg’s work into quantum theory (stop worrying about position and velocity, just look at the PDF). And I see some in the climate science world are talking about using stochastic methods to create order in disorderly subsystems.

        I should add though before people jump down my throat that GCMs aren’t scientific theories and the analogy is just that.

      • Chief Hydrologist

        McWilliams is explaining why an imprecision in models of between 1 and 6 degrees is obtained.

        Climate is not stable.

        ‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.’
        http://www.nap.edu/openbook.php?record_id=10136&page=1

      • “Climate is not stable”

        However climate is safely contained between the planet’s surface and the top of the atmosphere. In the light of all this chaos perhaps the boundary conditions are the place to start.

  4. Are IPCC’s global warming models falsifiable? If so, on what basis?

    Lucia Liljegren at The Blackboard is thoroughly statistically testing IPCC models predictions against subsequent surface temperature evidence.
    Since 1980, the long-term temperature rise is 0.14 C/decade versus projections of 0.208 C/decade.

    For this last decade, see: May T Anomalies: Cooler than April.
    Since IPCC’s pre-2000 prediction, the actual ‘red’-corrected temperature trend of 0.023 C/decade has been running far below the predicted 0.208 C/decade with HadCRUT3 (i.e., 89% below). e.g., See: Hadcrut3 NH&SH Land and Sea Surface temperature anomaly trend from Jan 2000 through month 05, 2011

    With HadCRUT:
    If we pick Jan 2000 as the start date for comparing models: the multi-model mean trend is inconsistent with HadCRUT; the 137-month mean anomaly is also inconsistent with the observed temperature. The absolute values of both d* values are 2.81 and 2.31, computed assuming residuals are ‘red’ or ‘arima’ respectively; both larger than 2. So, even starting the comparison in Jan 2000, these observed trends are inconsistent with the multi-model mean.
    If we pick Jan 2000 as the start date: the 137-month multi-model mean anomaly is inconsistent with the observed 137-month mean using either ‘red’ noise or the wider uncertainty intervals computed using ‘arima’. The d* values are -3.06 and -2.53 respectively.
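
    For readers who want to see what such a consistency statistic involves, here is a hedged Python sketch of a Lucia-style d* test on synthetic stand-in data – the trend values are taken from the numbers quoted above, but everything else (the noise level, the AR(1) adjustment) is my own illustrative choice, not her actual HadCRUT computation:

```python
import numpy as np

# Compare an OLS trend on synthetic "observations" against a multi-model mean
# trend, widening the trend's standard error for lag-1 ("red") residual
# autocorrelation via an effective sample size. Illustrative only.
rng = np.random.default_rng(1)
months = np.arange(137.0)                          # Jan 2000 .. May 2011
obs = (0.023 / 120) * months + 0.1 * rng.standard_normal(137)  # ~0.023 C/decade

x = months - months.mean()
b = (x @ (obs - obs.mean())) / (x @ x)             # OLS slope, C per month
resid = (obs - obs.mean()) - b * x
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
neff = len(obs) * (1 - rho) / (1 + rho)            # effective sample size
se = np.sqrt((resid @ resid) / (neff - 2) / (x @ x))

model_trend = 0.208 / 120                          # 0.208 C/decade in C/month
d_star = (b - model_trend) / se
print(d_star < -2)  # |d*| > 2 flags inconsistency at roughly the 95% level
```

With an observed trend an order of magnitude below the model-mean trend, d* lands far below -2, which is the sense in which the quoted results call the comparison “inconsistent”.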

    At what point do we say that IPCC’s models at least do NOT model major PDO/AMO decadal oscillations?
    When can we say that the IPCC models systemically overestimate anthropogenic warming? Do ALL the global temperature measures need to confirm this trend? How well must we trust surface measurements in light of the Urban Heat Island effect?
    Re: Katzav test:

    Data capturing the positive global mean surface temperature trend during the late 1930s and early 1940s are not well simulated by the ensemble relied on in the fourth IPCC report. These data will thus better serve to test severely models used in the fifth IPCC report.

    Does this data since 2000 constitute a “severe” test of the IPCC’s ex-ante predictions? i.e., where “Data . . . are not well simulated”?
    Or must we require 20 or 30 years of similar global “non-warming” before we say that maybe, just possibly, the models might not be quite accurate?

    How do we handle observational evidence that short term climate sensitivity is much lower than IPCC’s models? e.g. See New paper from Lindzen and Choi implies that the models are exaggerating climate sensitivity.
    Lindzen & Choi, On the Observational Determination of Climate Sensitivity and Its Implications, Asia-Pacific J. Atmos. Sci., 47(4), 377-390, 2011 DOI:10.1007/s13143-011-0023-x

    As a result, the climate sensitivity for a doubling of CO2 is estimated to be 0.7 K (with the confidence interval 0.5 K – 1.3 K at 99% levels). This observational result shows that model sensitivities indicated by the IPCC AR4 are likely greater than the possibilities estimated from the observations.

    http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf

    Especially in light of the strong politicization of climate science and consequent bias in government research funding?

    • The Lindzen-Choi study has been discussed here previously.

      For why their study wasn’t published where they wanted it published, see

      http://www.masterresource.org/wp-content/uploads/2011/06/Attach3.pdf

      • M. carey
        Lindzen-Choi addressed the previous criticisms and extended their analysis in their new paper. Please address the substance rather than throwing in a political red herring.

    • PROPOSED SEVERE TEST
      I propose the global temperature and anthropogenic CO2 trends from 2005 to 2030 as a “severe test” of accelerated anthropogenic warming per IPCC’s global warming models versus a null cyclic warming hypothesis of a ~60 year oscillation on the warming trend from the Little Ice Age.

      1) Surface Temperatures
      The IPCC projects +0.5 C warming (0.2 Deg C/decade x 2.5 decades)
      The Null Cyclic Warming predicts ~ -0.35 C decline (from 0.48 C to 0.13 C.)

      This is a difference of about 0.85 C over 25 years between IPCC’s anthropogenic warming predictions and a Null Oscillating Trend.

      e.g. see Girma Orssengo Interpretation of the Global Mean Temperature Data as a Pendulum and his temperature projections
      See similar projections by Easterbrook, Tsonis, Loehle, Scafetta etc.

      Hybrid models with part anthropogenic part nature predict intermediate results. e.g. Loehle & Scafetta (2011) predict an average warming of 0.066 C/decade. With cyclic variation, they predict about no change from 2005 to 2030. See Climate Change Attribution Using Empirical Decomposition of Climatic Data, The Open Atmospheric Science Journal, 2011, 5, 74-86 Figure 5 p 82.

      Period: Propose a 25 year or preferably 30 year period from about the top to the bottom of the ~ 60 year natural temperature cycle as a “severe test” while being less than 1 generation long. I propose 2005 for the start date as providing relaxation from the major 1998 El Nino fluctuation while being near the 60 year natural peak. I then propose 2030 or 2035 as 25 to 30 years later. This period also includes Solar Cycles 24 & 25 which are now projected to be unusually low.

      Temperatures: Propose satellite surface temperatures or rural or class 1 surface sites as having less UHI controversy than urban ground measurements.
      Data: A common reference data set of CO2 measurements, surface & tropospheric temperatures, El Niño/La Niña, PDO/AMO, solar cycle parameters (TSI, solar UV), neutron count & LOD would be helpful against which to compare all models.

      2) Tropical Tropospheric Temperatures
      The tropical tropospheric temperature will likely provide a stronger “severe test”, with IPCC’s models predicting even greater warming while natural null models predict level or cooling temperatures. These could be averaged over ~200 hPa to 350 hPa, from 30 N to 30 S.

      Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modeled trend is 100 to 300% higher than observed, and, above 8 km, modeled and observed trends have opposite signs.

      Ross McKitrick bases his T3 Tax on this Tropical Tropospheric Temperature.

      I recommend a technical post compiling all projections generally grouped as Natural models, Hybrid models, and Anthropogenic models (with corrections but no comments.)

      Then provide a post compiling reports and blogs with statistical comparisons against these predictions, e.g., Lucia’s “Data Comparisons” at the Blackboard, with front-page links to these posts so readers can add to them.

      • David, I would suggest only considering decadal average temperatures to avoid undue effects of ENSO in a single year. So your test should compare 2000-2009 with 2025-2034, or if you want a result by 2030, use 2020-2029. This is a 20 year window and requires a modification of your verification parameters.

      • Jim D
        Interesting observation on ENSO averaging.

        The difficulty is that statistician William Briggs admonishes:
        Do not smooth times series, you hockey puck!

        Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses!

        ENSO’s period varies between 2 and 7 yr, with the average being quite robust around 4 yr.

        Factors Affecting ENSO’s Period MacMynowski, 2008 DOI: 10.1175/2007JAS2520.1

        So 4 years would provide about one period, 8 years about two periods.
        That suggests we might be able to use five year averages, e.g., 2005-2010 vs 2025-2030 with a 25 year span.

        Noise: Lucia includes red noise as well as other tests.

        For statisticians: would averaging over one or two ENSO periods be beneficial? See Briggs on Global warming fallacies
        Homogenization of time series I-V etc.

      • ENSOs are irregular enough to be unpredictable. I would not assume anything about ENSO frequencies in the future. I chose 10 years to be objective, and we can see that ten-year running means do remove ENSOs very effectively. We could choose solar cycles, but those seem to be quite variable these days too; I would not use 5 years because of the solar cycles.
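The effect of period-length averaging can be checked on synthetic data. This toy example (numbers invented: a 0.02 C/yr trend plus a perfectly regular 4-year “ENSO” oscillation) shows that averaging over whole oscillation periods recovers the trend almost exactly; real ENSO is irregular, which is the argument for the longer 10-year window.

```python
import math

def running_mean(series, window):
    """Centered running mean; result is shorter than the input."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Synthetic annual anomalies: linear trend + idealized 4-year oscillation.
series = [0.02 * t + 0.15 * math.sin(math.pi * t / 2) for t in range(40)]

# Average over 8 years = two full 4-year periods: the oscillation cancels.
smooth = running_mean(series, 8)
trend_only = [0.02 * (i + 3.5) for i in range(len(smooth))]
max_err = max(abs(a - b) for a, b in zip(smooth, trend_only))
print(max_err)  # essentially zero: only the underlying trend remains
```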

      • Jim D.
        An alternative testing approach is to use hindcasting using part of the data for tuning and the other part for testing.
        In On the credibility of climate predictions, the authors compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 year) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas the common argument that models perform better at larger spatial scales is unsupported.
        While this is not as “severe”, the models still do not perform well.
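A split-sample (calibrate-then-verify) test can be illustrated with a toy example. The “observations” here are fabricated: 30 years of steady warming followed by a flat regime. A trend fitted to the calibration period alone looks perfect in-sample but fails badly out-of-sample, which is the kind of failure hindcast verification is meant to expose.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Fabricated record: 30 years warming at 0.01 C/yr, then a flat regime.
obs = [0.01 * t if t < 30 else 0.3 for t in range(60)]

a, b = fit_line(list(range(30)), obs[:30])   # calibrate on first half only
pred = [a + b * t for t in range(60)]

in_rmse = rmse(pred[:30], obs[:30])    # ~0: calibration looks perfect
out_rmse = rmse(pred[30:], obs[30:])   # large: the fitted trend keeps rising
print(in_rmse, out_rmse)
```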

      • David—why do you reject the type of tests that all other models have to pass before their results are believed? Why is there the rush to believe the results of unvalidated models?

      • Rob Starkey
        I don’t follow why you think I am rushing to “believe the results of unvalidated models”. I am trying to propose some “severe tests” of those models – on a climatic scale of about 30 years.
        I am not rejecting other tests. Just proposing a “severe test” of the greatest natural variation vs anthropogenic models.
        If you have a stronger test, please propose it.

      • That would only be a relevant test if the global models were going to be used at station sites to provide guidance. National scales would be slightly more relevant, but even there, if someone uses a global projection to guide their national policy they need to base their trust on hindcasts, and I don’t know how those have done on that scale. The mean global surface temperature, and its large-scale variation, is the most important output to validate.

      • Jim D Says:
        The mean global surface temperature, and its large-scale variation, is the most important output to validate.

        Large scale variation? When did that happen? The global average temperature seems to have varied by 2K or so over the last 10000 years. That’s about 0.75%

      • I am referring to large spatial scales, northern versus southern hemisphere, arctic versus tropics, land versus ocean, etc. These can be obtained from projection ensembles with some confidence level.

      • Propose: Extreme Precipitation Tests
        Models should reproduce historic extreme precipitation and runoff frequency distributions.

        e.g. See: Statistical comparison of observed temperature and rainfall extremes with climate model outputs, Tsaknias et al. April 2011

        The current suite of GCMs is not developed to provide the level of accuracy required for hydrological applications, and this is quite apparent from our results in daily rainfall extremes. . . . Climate model outputs should not be used extensively and injudiciously for hydrological and water management applications.
        • GCMs have been found to perform poorly on monthly to climatic scales (Anagnostopoulos et al., 2010, Nyego-Origamoi et al., 2010), . . .

        Test models against the ~21 year Hale cycle pattern.
        WJR Alexander et al. show strong variations in precipitation and runoff linked to the 21 year Hale cycle. To be credible, climate models need to be able to reproduce these observed precipitation/runoff distribution statistics.

        Validate Drought Models by Hindcasting
        Hindcast test climate models against historic drought frequency with part of the data. e.g. see:
        Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report David R.B. Stockwell August 5, 2008 Niche Modeling (http://landshape.org/enm)
        The CSIRO’s model got Australia’s droughts backwards.

      • Regional Climate Trends & Statistics
        Climate models should reproduce actual regional historical temperature and precipitation trends and statistics. e.g. see:
        Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis, A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55 (7), 1094–1110, 2010. http://dx.doi.org/10.1080/02626667.2010.513518

        Examining the local performance of the models at 55 points, we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation
        at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale. . . . we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms. . . . stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. . . . stochastics is an indispensable, advanced and powerful part of physics

        Credibility of climate predictions revisited, G.G. Anagnostopoulos et al., EGU 2009 presentation

        None of the examined models reproduces the over year fluctuations of the areal temperature of USA (gradual increase before 1940, falling trend until the early 1970’s, slight upward trend thereafter); most overestimate the annual mean (by up to 4°C) and predict a rise more intense than reality during the later 20th century.
        • On the climatic scale, the model whose results for temperature are closest to reality (PCM 20C3M) has an efficiency of 0.05, virtually equivalent to an elementary prediction based on the historical mean; its predictive capacity against other indicators (e.g. maximum and minimum monthly temperature) is worse.
        • The predictive capacity of GCMs against the areal precipitation is even poorer (overestimation by about 100 to 300 mm). All efficiency values at all time scales are strongly negative, while correlations vary from negative to slightly positive.
        • Contrary to the common practice of climate modellers and IPCC, here comparisons are made in terms of actual values and not departures from means (“anomalies”). The enormous differences from reality (up to 6°C in minimum temperature and 300 mm in annual precipitation) would have been concealed if departures from mean had been taken.
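The “efficiency” in these excerpts is the Nash-Sutcliffe efficiency used in hydrology. A minimal implementation (with made-up numbers) also shows why comparing actual values rather than anomalies matters: a simulation with a constant +1°C bias has a strongly negative efficiency even though its anomalies would track the observations perfectly.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect; 0 = no better than simply
    predicting the historical mean; negative = worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [14.1, 14.3, 13.9, 14.6, 14.8, 14.2]           # made-up temperatures
print(nse(obs, obs))                                 # 1.0: perfect
print(nse(obs, [sum(obs) / len(obs)] * len(obs)))    # 0.0: mean benchmark
print(nse(obs, [o + 1.0 for o in obs]))              # strongly negative: bias
```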

  5. Alexander Harvey

    I believe that there have been for some time at least three well known hurdles that have severely tested the models.

    Failure to meet them was and for some may still be cause for concern. The best of the models do now pass these tests:

    Stability without flux adjustment,
    ENSO type variability,
    Blocking.

    Perhaps you don’t hear much about the last of these, but I did hear that some of the models that went into CMIP3 could not simulate blocking and that this did raise some eyebrows.

    Alex

  6. John Vetterling

    From what I’ve read, the $10,000 (or maybe $10 trillion) question is, what would a severe test look like?

    The problem is that we have an extremely limited data set to build and calibrate our model around. Once it is calibrated – what data is still available to test the model against?

  7. There’s a severe test coming up within the next 5-10 years, isn’t there?

    Skeptics are largely on the global cooling boat.

    Under AGW we should see quite a bit of warming within the next few years. Really it is a wait for HadCRUT now; all the other records, including UAH, are already suggesting a jump upward in global temperature is underway.

  8. Hmm? I guess my first computer programming instructor was a bit of a philosopher.

  9. If we address the question of whether GCMs can be subjected to a severe test of their skills in estimating long term global temperature trends, I see that as a challenge. We know the models are inaccurate, and we have evidence that they have nevertheless performed fairly well under some circumstances, where “fairly well” is a judgment that will differ from one person to another. To date, the most obvious tests have not been severe – models have simulated trends in hindcasts, and in the case of the Hansen models, have exhibited predictive skill on a multidecadal basis that exceeds their skill at shorter intervals (on a percentage deviation basis) and which has led to much debate about how good a simulation must be to deserve the adjective “skilled”. That debate has been repeated too many times to do it again here.

    One reason why even this modest ability to simulate the trends is not a severe test is that the gradual trends exhibit no extraordinary features that would test skill – in fact, many of the same trends could be predicted simply on the basis of the principle that the future is likely to resemble the past. However, a more severe test has been possible by adding an additional challenge – a competition between the models run with known climate input and those run with inputs known to contradict observed data. The ability of simulations to reproduce observations when forced with both anthropogenic and natural inputs, but not with natural inputs alone, has been seen as an additional measure of skill.

    I don’t believe this challenge has been applied as thoroughly as it might be. Ideally, one might start with an ensemble of simulations in which the models have been parametrized within known real world constraints and appropriately tuned to a starting climate, and then run with the prescribed forcings (without further tuning regardless of results). Models that perform well would then need to prove that they cannot be made to simulate observed climate trends without the forcings, regardless of what unforced inputs are applied until the latter clearly exceed any realistic limits. It would presumably require some ingenuity to manipulate all these test inputs into many different patterns. To the extent that they all fail to match forced inputs, I think we might conclude that some skill has been exhibited.
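The forced-versus-unforced comparison described here can be caricatured in a few lines. Everything below is invented: a zero-dimensional “model” that relaxes toward a sensitivity-scaled forcing, a steadily growing “anthropogenic” forcing, and bounded random “natural” forcing. The point is only the shape of the test: the run without the growing forcing cannot reproduce the trend.

```python
import random

def simulate(natural, anthro, sensitivity=0.6):
    """Toy zero-dimensional response: relax toward sensitivity * forcing."""
    temp, out = 0.0, []
    for n, a in zip(natural, anthro):
        temp += 0.1 * (sensitivity * (n + a) - temp)
        out.append(temp)
    return out

random.seed(1)
years = 100
anthro = [0.03 * t for t in range(years)]              # growing forcing
natural = [0.4 * random.uniform(-1, 1) for _ in range(years)]

both = simulate(natural, anthro)
natural_only = simulate(natural, [0.0] * years)

trend = lambda s: (s[-1] - s[0]) / len(s)
print(trend(both))          # clear positive trend
print(trend(natural_only))  # bounded wiggles, no comparable trend
```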

    • Fred, for results like this you must be using FED PR-O 6.66…
      “We know the models are inaccurate, and we have evidence that they have nevertheless performed fairly well under some circumstances, where “fairly well” is a judgment that will differ from one person to another. “

    • John Vetterling

      While Katzav might not consider it a severe test, a Monte Carlo test might be appropriate. A model is fed random input data, and we test whether the results from the random inputs diverge from those given the true inputs. This is essentially what McIntyre did to disprove the hockey stick.

      However, there doesn’t seem to be much motivation to do these tests that I can see.

      • Monte Carlo simulations are routinely done, varying the initial conditions. For each forecast, the ECMWF weather prediction model conducts 51 simulations with slightly different initial conditions.

      • John Vetterling

        Maybe it’s semantics, but I would call those sensitivity tests – how does the model respond to changes in the inputs? A very important part of the process, no doubt. But that does not assure that the results are not merely artifacts of the model.

        A Monte Carlo simulation feeds random data and looks to see if the results are also random. It’s a form of validation by exception. If the results are not random then they may be an artifact of the model.
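As a minimal sketch of what such an artifact check could look like (the “model” and its bug are entirely invented): feed a toy model zero-mean random forcing and see whether its output trends anyway. If it does, the trend is an artifact of the model, not a response to the input.

```python
import random

def toy_model(forcings, drift=0.02):
    """Hypothetical toy model with a hidden bug: a drift term that pushes
    the output upward regardless of the forcing supplied."""
    temp, out = 0.0, []
    for f in forcings:
        temp += 0.5 * f + drift   # the drift term is the hidden artifact
        out.append(temp)
    return out

random.seed(0)
random_forcing = [random.uniform(-0.1, 0.1) for _ in range(200)]
out = toy_model(random_forcing)

trend = (out[-1] - out[0]) / len(out)
print(trend)  # clearly positive although the forcing is zero-mean,
              # so the rising output is an artifact of the model itself
```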

      • In the case of coupled climate models, the main external input is the forcing (anthropogenic CO2, aerosol, volcanoes, solar). Tests are done by varying these (e.g. removing the CO2 forcing) and do show different results (non-rising temperatures). Random forcing would also produce temperatures consistent with that forcing, and if a model had a rising temperature anyway it would fail such a test. I am fairly sure steady forcing is a routine test that is done to look for such biases or drifts before moving on to realistic forcing.

      • Steve Garcia

        Your 2nd and 3rd sentences don’t prove anything at all. They would only show that positive CO2 values (or aerosols, etc.) are programmed to GIVE a positive (or negative) output. Duh. If it didn’t give the expected output, and they assume the model isn’t set up right, they might “correct” the code – or in Katzav’s words, they would “accommodate” it, which is essentially fudging the code. Either way, it still doesn’t show that the model models reality.

        I hate the concept of someone just saying something like, “Well, let’s change the co2adj factor a bit and see what happens.” YES, I know it is complex and BIG. But that is no excuse for not comparing it to reality – taking it out for a road test.

      • The problem with all hindcasting tests is that every modeler knows quite a lot about the past. Obvious deviations are removed during the development phase before the tests are performed. Choosing repeatedly the better performing alternatives may lead to models that have a fundamental flaw hidden by a number of patches. Such models may be made to work for the past, but fail in the future. Only those details of the past that the modelers cannot take into account, directly or indirectly, provide real tests. Usually these are not important tests for other reasons, being details too small for the model to really predict at all, or irrelevant to the most important features of the model.

        Again it’s clear that the modelers know the issue and try to counteract it. They often choose the alternative that is based on more correct equations of physics rather than the one that performs better in a set of validity tests, trusting that the result will ultimately be a more correct and better model. Comparison of the results of different models helps when the errors are in some models, but not in all. Most troublesome are those generic issues of computational methods or physical knowledge that affect all models built using presently known methods in the same way.

  10. This paper was posted at http://climatesci.org on May 5, 2009.
    Falling Ocean Heat Falsifies Global Warming Hypothesis
    By William DiPuccio

    The Global Warming Hypothesis: Albert Einstein once said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Einstein’s words express a foundational principle of science intoned by the logician Karl Popper: falsifiability. In order to verify a hypothesis there must be a test by which it can be proved false. A thousand observations may appear to verify a hypothesis, but one critical failure could result in its demise. The history of science is littered with such examples…

    • Albert Einstein once said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

      1. Not if that experiment turns out to be not repeatable.

      This is easily fixed by inserting “repeatable” after “single” in Einstein’s statement.

      2. Not if that experiment overlooks relevant phenomena.

      It is impossible to fix this. Newton struggled with whether light was a wave or a particle and eventually came down on the side of the latter. Young’s double-slit experiment in 1801 convinced physics for a century that Newton was wrong. Einstein’s photoelectric effect experiment in 1905, for which he won the Nobel prize, proved that Newton was right, though this was not accepted as such until Schroedinger in 1926 and von Neumann later that decade reconciled the two viewpoints satisfactorily, showing counterintuitively that all three of Huygens, Newton, and Young were right.

      Science is the development of perspective on nature and the language to express that perspective. Science cannot be viewed as the deciding of propositions, the premise of Einstein’s statement, because the language for expressing the relevant proposition is often not available until the relevant perspective has been developed to the point where the relevant language can be formulated coherently to express that perspective.

      How do you decide a proposition that you can’t even state, let alone comprehend, let alone imagine?

  11. Isn’t the problem with model verification one of time?

    It is easy to have a model predict the weather five days hence and then wait five days, see if the prediction was correct, rinse and repeat.

    It is not so easy (or nobody is willing to wait) to wait 10 or 100 years, see if the model is correct, rinse and repeat.

    We know how to statistically verify models – the problem is really that 1) nobody wants to wait long enough to do it (because we cannot wait to be sure; we have to take action now) and 2) they will probably fail, as the models are failing over the extremely short term and are being changed almost monthly to take into account entirely new concepts that impact the climate and that we are still learning about.

    Hasn’t a hurricane prediction model been statistically verified?

    It would be interesting to review the paper and see what they did to verify their model.

    Of course, that can be tested annually – which is still a huge advantage when compared to a 10, 30 or 100 year prediction time frame.

  12. In terms of Karl Popper’s “Tentative Theory,” formulating the hypothesis is the second step in the scientific method. However, we know, according to Dr. Roy Spencer, “The fact is that the ‘null hypothesis’ of global warming has never been rejected: That natural climate variability can explain everything we see in the climate system.”

    Accordingly, of the three possible outcomes in Popper’s ‘Error Elimination’ process, we see early on that AGW as a tentative theory cannot be proven. And that is why belief in AGW has become a phenomenon that in the last few years has drawn an incredible amount of interest and the attention of those in the fields of philosophy, psychology, sociology, religion, economics and ethics.

    We are just now beginning to understand this malignant neurosis. Scientists are beginning to better understand the psycho-cybernetics underlying this as a mass mania.

    We may never fully understand all of the reasons why and how Western civilization gave birth to the self-defeating neurosis of global warming alarmism. But we know that for a long time now, it has never been about the science or about the truth: AGW is now a political movement.

    The correlation between belief in AGW and a host of self-defeating personality disorders has become all too obvious. Examples are the fear mongering, the Hot World Syndrome, the fears about over-population, the skewering and outright hatred of conservative politicians, the blindness of secular-socialists to the evils of communism and the falling down of dead and dying Old Europe—these psycho/socio-economic issues are all related to a decline of the culture and society and to the fall of Western civilization.

    • “But we know that for a long time now, it has never been about the science or about the truth: AGW is now a political movement.”

      It’s more obvious today, but really it was a leftist academic enclave from day one. The IPCC took years of lobby efforts, and I’m sure those that went along for the ride never thought it would take off to the excesses of Al Gore. On the other hand it’s a Trojan Horse for the left as well; the eco-left was completely invested and now it can’t get away from the commitment.

      Sadly, there isn’t much social and political punishment for being completely wrong. The Cold War taught that lesson: decades of advocating exactly the wrong appeasement policy, and objectively the country began its suicide-cult tilt to the left around 1990. A long, slow and choppy tilt, but AGW was a perfect indicator. The recent left’s peak was about 2006 with the 2008 follow-through. It should regress from here as people face reality.

      AGW will become more of a backwater than currently; traditional “rationing think” wrapped under “bio-diversity” and “sustainability” nonsense will become the center stage. If they roll the dice again on junk science claims it will be “ocean acidification” as another spinnable carbon claim, having struck out on general climate.

      • However, fearmongering about ‘acidification’ would be even more ludicrous than demonizing American SUV-drivers for causing global climate to change when you consider that the ocean is infinitely buffered.

      • And America is the whole world.

      • and a long way from the chevin!

      • Of course, Pew is a left-of-center polling center, and they focused on the AAAS, which is overweighted both in government-funded science and people in “education”, which is another enclave for similar reasons. Of course it is sad that much of what might be called “science” has a deep partisan subculture, but do we have to look further than the eco-left movements to figure this out?

        The pompous, arrogant demeanor of the clip reflects poorly on the premise, which is the ancient “progressive” self-association that there is an intellectual advantage in all things to the left of arguments and people. Why liberals are attracted to the unionized or otherwise similarly protected jobs and culture found in research and education is worth more of your time to research. Then again, it was spin, part of a poll from a usual spin suspect: Pew.

        The attitude is reflected in you as well SS, ad homs, shouting arguments down, claiming superior understanding and knowledge; “I’m the smartest guy in the room”. Kinda like Big Al himself.

        It’s a weak argument; you completely miss the point of the poll results, which are designed to exclude engineers and those in the private science areas, but keep telling yourself what you like. What next, a list of how smart Democratic Presidents are compared to Republicans? You are a rube if you buy into such nonsense.

        You have self-identified yourself with the lowest end of the minion left. There is a liberal elite whose job it is to peddle such nonsense and distribute it in the way Pew and the media arm link has intended. It’s worth noting that the more the public identifies the very partisan nature of the science subcultures, the more trust in science overall declines; read the whole poll, SS. Of course Pew can’t connect that dot given its agenda. Then again, Dr. Curry is careful to never connect similar dots or topics, which is more significant than a garden variety troll such as yourself.

      • All data you don’t like are due to leftist conspiracies.

        Or, THE leftist conspiracy?

      • I’m sure there are conservative enclaves as well, SS; the law of roughly even numbers over the past 30 years demands it. The Pew poll was a joke since it targeted one obviously left-of-center enclave but pretended to be even-handed. Tell us something about the media that most people don’t already know.

        Do you think it’s an accident that engineers have cultural contempt for the abstract unaccountable areas of academics that flood political areas with nonsense that is sadly common in AGW hustling?

        How enclaves grow and contract is an interesting topic. It’s worth noting that Pew would never dare offend the orthodox by making a similar poll of, say, the IPCC and the lockstep “consensus”. How come?

        Because they make believe it’s about “science”, not political cultures, from the start. Sure, some might have figured out you only get paid the better money to support the consensus, which is politically driven, but there is a good deal of group-think politics and culture identification in the AGW support channel. Dr. Curry only admits it indirectly by identifying cultures common to many skeptics. How is that right or objective?

        The left loves AGW because it fits a statist solution and authority that much of the consensus shares. Rather than “no labels” why doesn’t Dr. Curry just state this obvious truth?

        As for you, another delusional quip, since you have been refuted with your idiotic “6% of scientist…..tripe”. I’m sure most of the leftist AGW supporters would like you to leave the thread as well, since you blow the cover of “it’s about science” in a big way.

      • Settled:

        What am I supposed to learn from this video?

  13. Harold H Doiron

    To help answer the lead question of this thread, “Should we assess climate model predictions in light of severe tests?”, I recommend the following article regarding complex economic models: http://progcontra.blogspot.com/2011/08/i-love-my-model.html

    As a modeler of complex systems for 48 years in the aerospace industry, where the models are used for manned launch vehicle and spacecraft design and operational decisions involving life or death consequences, I agree wholeheartedly with the article at the above link. If models are going to be useful at all, then they must be validated with rigorous testing. Also, if they aren’t reasonably accurate in predicting outcomes of future events with high confidence, then they should not be used for decisions of any importance or economic impact. If the models can’t meet this standard, then it is appropriate to continue work to improve the models, but professional ethics should require that the model owner not let the model results be used for critical decision-making by those with less knowledge of the science and model prediction uncertainty. It is in this area that I believe that some climate modelers have failed most grievously.

    • Have you ever tried to design an aircraft that could stay aloft with thrust that varied randomly over a range comparable to ENSO?

      • ENSO has a thrust?

      • Steve Garcia

        Who knows? It might. I would love it if anyone here could tell me what causes El Niño – where is the extra energy coming from, and why isn’t it there all the time? What makes the heat plume move westward, then back? With movement there must be something akin to thrust.

      • Chief Hydrologist

        ENSO originates in more or less upwelling of cold and nutrient-rich water in the region of the Humboldt Current. I have a review here – http://www.earthandocean.robertellison.com.au/

        The thermal evolution of the Humboldt Current is best understood in terms of ENSO. ENSO is an oscillation between El Niño and La Niña states over a 2 to 7 year cycle. An El Niño is defined as sustained SST anomalies greater than 0.5° C (in the Nino 3 region) over the central Pacific. Conversely, a La Niña is defined as sustained SST anomalies less than -0.5° C. The oscillations (more correctly chaotic bifurcation – but we will come to that) are driven by complex interactions of cloud, wind, sea level pressure, sea surface temperature, planetary rotation and surface and subsurface currents. The short explanation is that the Pacific trade winds set up conditions for a La Niña. Trade winds, south-easterly in the Southern Hemisphere and north-easterly in the Northern Hemisphere, pile up warm surface water against Australia and Indonesia. Water vapour rises in the western Pacific creating low pressure cells that strengthen the trade winds, piling yet more warm water up in the western Pacific. Cool, subsurface water rises in the eastern Pacific and spreads westward. At some point the trade winds falter and warm water spreads out eastward across the Pacific.

        In the region of the Humboldt Current – there is a balance between upwelling where deep ocean currents emerge and suppression of those currents by a warm surface layer.

        The reasons for more or less upwelling may hinge on the state of the Southern Annular Mode. Here is some up to date info on SAM – http://www.antarctica.ac.uk/met/gjma/sam.html

        Here we are very much at the edge of the known. I went looking for some connection between UV and SAM – and found myself quoted at http://forum.weatherzone.com.au/ubbthreads.php/topics/28657/3/The_Southern_Annular_Mode_SAM. There is a good discussion there.

        ‘Here is an SST anomaly thermally enhanced satellite image from October last year.

        http://www.osdpd.noaa.gov/data/sst/anomaly/2010/anomnight.10.4.2010.gif

        You can see the PDO in the north Pacific and La Nina in full swing in the central Pacific. You can also see the potential for cold water being pushed up from the Southern Ocean and onto the western coast of South America in the area of the Humboldt Current. The region of the Humboldt Current is the most biologically productive area on Earth because the cold southern water is joined there by upwelling frigid water. The upwelling (or not) in turn determines the thermal evolution of ENSO. ENSO is many things but starts in upwelling – or not – in the eastern Pacific.

        There is a direct physical link between UV and the SAM in the ozone layer of the middle atmosphere – and thus in storm tracks spinning of the Southern Ocean, pooling cold water off the western coast of South America and diluting the warm surface layer that suppresses upwelling in the eastern Pacific.’

    • Here is a simple financial market model, which essentially demonstrates the effects of implied correlation. When stocks move in unison, market swings become larger. When stocks differentiate themselves, some up some down, the swings are smaller because the individual movements can cancel. We are entering a phase where the implied correlation is high and the stock market no longer shows useful differentiation. 400 to 600 point swings will become the norm.

      How would you test this hypothesis?
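      One concrete starting point: the in-unison claim follows directly from portfolio variance and can be checked numerically. A minimal sketch (the stock count, per-stock volatility, and correlation values are illustrative assumptions, not market data):

```python
import math

# Std dev of an equally weighted index of n stocks, each with volatility
# sigma and a common pairwise correlation rho:
#   Var(index) = (sigma^2 / n) * (1 + (n - 1) * rho)
def index_volatility(n_stocks, sigma, rho):
    var = (sigma ** 2 / n_stocks) * (1 + (n_stocks - 1) * rho)
    return math.sqrt(var)

low = index_volatility(100, 0.02, 0.1)   # differentiated market
high = index_volatility(100, 0.02, 0.9)  # stocks moving in unison
print(low, high)  # swings roughly triple as correlation rises
```

      Comparing the predicted index swing sizes against realized swings during high- and low-correlation regimes would be one way to test the hypothesis severely.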

  14. Newton’s model of gravity is verified by the fact that we can use
    F = ma
    g = 9.8 m/s^2
    plus wind resistance to predict the trajectory of a ball thrown or struck or catapulted by a known force, but only by knowing the force and the mass of the object, and wind speed if any. With climate, we only know relevant variables after the fact. Gravity would not be falsified if we could not predict the trajectory of an object of unknown mass, and climate models are not falsified by not predicting solar fluctuations, ENSO, AMO, etc.

    In fact, it is dishonest to say that the models should “work” without known values of each of those inputs. No scientific model can do that.
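    The trajectory argument can be made concrete: once the force, mass, and drag are specified, the prediction is mechanical. A minimal sketch with linear air drag (the drag coefficient here is a hypothetical illustrative value):

```python
import math

G = 9.8  # m/s^2

def trajectory_range(v0, angle_deg, mass, k=0.0, dt=1e-4):
    """Range of a projectile with linear air drag k (kg/s),
    integrated with a simple semi-implicit Euler scheme."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        vx += -(k / mass) * vx * dt
        vy += (-G - (k / mass) * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

no_drag = trajectory_range(30.0, 45.0, mass=0.5)          # ~ v0^2/g
with_drag = trajectory_range(30.0, 45.0, mass=0.5, k=0.05)
print(no_drag, with_drag)  # drag shortens the range
```

    With every input known, the drag-free result matches the textbook formula v0² sin(2θ)/g; withhold the mass or the wind and the "prediction" degrades, exactly as the comment argues.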


    • I think you misunderstand. A scientific theory is about being able to predict a set of dependent variables given different values of a set of independent variables.

      The kind of testing used on GCMs tends to be on sets of independent variables that are within limited ranges. This may be all one can do in practice, but it doesn’t mean testing shouldn’t be done at the extremes of the available range – and this is where the most information about the robustness of the theory is likely to lie.

      Newton’s law could be, and presumably was, tested under increasingly extreme conditions: first at points where gravity was likely to change (up mountains, etc.), and ultimately at increasingly microscopic and macroscopic scales.

      • What do you think I misunderstand, exactly?

        Of course, success at the extremes is the strongest support of the theory. But there are too many possible scenarios to test them all, so the main publicized projections must be based on averages of the random variables, with margins of error.

        My point is that when extreme “independent variables” present themselves in Nature, correctly hindcasting passes “severe testing,” but climate deniers cry that not predicting ENSO, or not predicting that both insolation and La Nina would be abnormally cold after 1998, disprove something. Horse feathers. The slowdown in warming, or whatever one chooses to call it, is hindcast accurately once the correct independent variables are input. That’s a pass, not a fail for climate science. Katzav seems to fail to understand that, and clearly imagines that climate scientists do not currently use severe testing. They already do. It just isn’t as easy as growing bacteria in two different petri dishes with different food or lighting, and thus not obvious enough for a philosopher to understand, apparently.

      • HAS

        None of what you have written indicates that climate models should not have to meet specific measurable criteria in order to be considered validated.

        Why can’t we see these criteria and evaluate the effectiveness of the models?

        I am not asking to evaluate the code, as that is proprietary.

      • As Mrs Beeton prefaced her rabbit recipes: “First catch your rabbit.”

        You can’t talk about validation until you’ve decided what you wanted your model to do. Much of the discussion above seems to stem from commentators not getting that clear first.

        Perhaps that’s what you too are obliquely suggesting.

    • Stress testing a model should be like idiot-proofing a program. While the model may not be able to accurately predict certain fluctuations, the programmer should be aware that they may have a larger impact on the results than expected, so the models should try to emulate that uncertainty in test runs. The modelers tweak the elements whose uncertainty they are confident about rather than the elements that may be more uncertain than they expect, like natural variability. They are not anticipating the idiots.

      In a past post there was a discussion of a model that did well until 2005, I think it was, and then ran higher than the observed values. With all the discussion of natural variability during the period when the model was tweaked, the impact of natural variability on the results should have been more seriously considered as a larger element of uncertainty.

      • Ideally, sure, but unfortunately, in numerical models of complex systems it isn’t quite as simple as just not giving inventory clerks permissions to, for example
        SELECT * FROM PaymentInfo.Customers;
        In particular, the models have to be recursive, otherwise they violate causality. But that also means numerical values can “blow up” unless they’re constrained, by methods that are a lot more challenging than run-of-the-mill “idiot-proofing” of consumer software.
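        The blow-up point can be illustrated with a toy recursion; the gain and clamp values below are invented for illustration, not drawn from any real model:

```python
# A feedback gain above 1 makes a recursive state grow without bound;
# clamping the state is the crudest form of the constraints mentioned above.
def step_model(state, gain, clamp=None, n_steps=50):
    for _ in range(n_steps):
        state = gain * state
        if clamp is not None:
            state = max(-clamp, min(clamp, state))
    return state

unconstrained = step_model(1.0, gain=1.5)            # ~6e8 after 50 steps
constrained = step_model(1.0, gain=1.5, clamp=10.0)  # pinned at the clamp
print(unconstrained, constrained)
```

        Real models use far subtler constraints (conservation laws, flux limiters), but the failure mode being guarded against is the same.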

      • That’s why they make the big bucks. The parallel with idiot proofing is testing as completely as possible. There are a lot of issues, but I am sure there are improvements that can be made with stress testing as a goal.

  15. Joachim Seifert

    Dear Dr. Curry, I found an interesting remark in the work of Mr. Mojib Latif, well known in the media in Germany: “CMPs get more difficult with increasing forecast time. It is easier to forecast for the coming 10-20 years than for decades 80 or 100 years ahead”…
    I propose: CMPs have to be correct at least for a short time (1-2 decades); if not, they have to be rigorously weeded out. For longer time periods, one should allow a tolerance bonus, because CMPs for decades further away are more difficult to make…
    Yours, JSei.

  16. A severe test for the various models would be simply to use each other’s input data sets. The article noted “If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.”

    Jeffrey T. Kiehl (2007), Twentieth century climate model response and climate sensitivity, Geophysical Research Letters, 34, L22710, doi:10.1029/2007GL031383
    http://www.atmos.washington.edu/twiki/pub/Main/ClimateModelingClass/kiehl_2007GL031383.pdf
    showed that there is a wide variation in the level of forcings fed into the models as inputs, and that generally low sensitivity models assume a high net forcing and vice versa.
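    The compensation Kiehl describes can be sketched with a toy linear response; the sensitivity and forcing numbers below are illustrative, not taken from the paper:

```python
# Equilibrium warming (K) for a linear toy model: dT = lambda * F,
# with lambda in K per W/m^2 and F the net forcing in W/m^2.
def warming(sensitivity, net_forcing):
    return sensitivity * net_forcing

model_a = warming(sensitivity=0.5, net_forcing=1.6)  # low lambda, high F
model_b = warming(sensitivity=1.0, net_forcing=0.8)  # high lambda, low F
print(model_a, model_b)  # both 0.8 K: matching the record does not discriminate
```

    Two such models reproduce the same 20th-century warming for different reasons, which is exactly why the agreement with the data is not a severe test.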

  17. Dr. Curry:

    Another new paper:

    Improving conveyance of uncertainties in the findings of the IPCC

    http://www.springerlink.com/content/a1835xx1285t26g7/

    has sparked an ongoing debate between Pielke Jr. and James Annan on Pielke Jr.’s blog, in two different threads, about “probability” and projections from GCMs.

    http://rogerpielkejr.blogspot.com/2011/08/how-many-findings-of-ipcc-ar4-wg-i-are.html

    http://rogerpielkejr.blogspot.com/2011/08/academic-exercises-and-real-world.html

    Reading through the back and forth, Annan takes the position that the GCMs can never be wrong no matter what the real world does.

  18. Willis Eschenbach

    Naw, we’re only betting multi-billions of dollars and people’s lives on the models, so let’s use only the easier tests.

    What the heck, let’s just let all models in and average them all without any tests at all. That’s worked so well in the past.

    Besides, asking the models to pass severe tests might result in low modeler self-esteem, and here in California that’s against the law.

    w.

    • Most important, keep the input assumptions and model codes secret. Really, what would a Mayan High Priest do if crop results fell short after the proscribed met their fate?

    • w
      “severe tests might result in low modeler self-esteem . . against the law”
      That lack of accountability has almost destroyed US education. See “International Test Scores: Poor U.S. Test Results Tied to Weak Curriculum”.

      US 12th-grade math students only managed to get ahead of Cyprus and South Africa. Too many such students then became "climate scientists" with a comparable lack of skills.

      It's time to get back to basics. Quit coddling and teach. Give F's for failure. Send students back to the previous grade to relearn. Stop fooling students and destroying their futures by saying they are "ok" and "graduating" illiterates who can't balance a checkbook.
      See Do Hard Things – Rebel against low expectations. at TheRebelution.com
      If anyone objects – fire them. Time to renew not destroy.

      • David – American students who attend schools with poverty rates of less than 20% do well in international comparisons. Having students repeat grades more often than not only increases the likelihood of failure for the students who repeat. Students who excel are not being negatively affected by concerns about the self-esteem of students who are perennially told that they don’t match the model of a “successful” student, because students who excel are not being “fooled” by being told that they’re “ok.” So then, if failing students who do poorly doesn’t improve their performance on average, and students who do well are not being moved along according to invalid criteria, how does failing students who do poorly bring any advantages?

        Cross-national educational systems should not be compared in a facile manner. There are myriad relevant variables. One important factor is that many cultures value education, and teachers, more highly than we do in this culture – and pointing the finger at schools is not likely to help in that regard. Keep in mind that school systems in many other countries are not tasked with educating the same demographic cross-section of students to the extent that schools are in the U.S.

        I’m afraid that while your suggested solutions should certainly be examined (along with many other ideas about needed reform), in themselves they are a far cry from the massive effort that is needed if we’re going to significantly improve the overall % of students who can balance a checkbook. I would suggest a more comprehensive approach to solving the problems than what you described in your post.

    • Your climate forecasts back in the 1980’s are on record where?

  19. Willis Eschenbach

    Charlie A | August 18, 2011 at 7:14 pm | Reply

    A severe test for the various models would be simply to use each other’s input data sets. The article noted “If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.”

    Jeffrey T. Kiehl (2007), Twentieth century climate model response and climate sensitivity, Geophysical Research Letters, 34, L22710, doi:10.1029/2007GL031383

    showed that there is a wide variation in the level of forcings fed into the models as inputs, and that generally low sensitivity models assume a high net forcing and vice versa.

    Ooooh, I like that idea, Charlie, I like it a whole lot. We’d get to look at the models two ways.

    First would be the comparison of the various model results for each individual input dataset.

    Second would be the comparison of each single model’s varying results when fed each of the varying input datasets.

    Anyone have any leverage with the CMIP folks? I’d love to see those results from some of the models.

    w.
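    Willis’s two-way comparison amounts to filling in a model × dataset matrix. A sketch with hypothetical stand-in models (simple linear responses, not actual GCMs, with invented sensitivity and forcing numbers):

```python
from itertools import product

# Hypothetical linear "models" (sensitivity, K per W/m^2) run against
# hypothetical net-forcing datasets (W/m^2).
models = {"low_sens": 0.5, "mid_sens": 0.75, "high_sens": 1.0}
forcing_sets = {"high_F": 1.6, "mid_F": 1.2, "low_F": 0.8}

matrix = {(m, f): s * F
          for (m, s), (f, F) in product(models.items(), forcing_sets.items())}

# Comparison 1: all models on one shared input dataset (a matrix column).
column = {m: matrix[(m, "mid_F")] for m in models}
# Comparison 2: one model across all input datasets (a matrix row).
row = {f: matrix[("mid_sens", f)] for f in forcing_sets}
print(column, row)
```

    In this toy version a low-sensitivity/high-forcing model and a high-sensitivity/low-forcing model land on the same answer for their own inputs, but their rows and columns diverge once inputs are swapped, which is what the proposed test would expose.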

  20. How should we assess climate predictions? By prefixing them with “Here are the best guesses of the finest minds money can buy”. And then go on to explain whose money bought them.

    • The climate “guesses” you made 20-30 years ago are on record and have turned out pretty good?

  21. I rarely comment here on philosophical or policy issues, feeling more comfortable with the technical elements of climate behavior, but I thought it might be worthwhile exploring a sentiment that frequently emerges in these discussions. Harold Doiron raised it explicitly upthread. It holds that because the political, social, health-related, and economic consequences are so enormous, decision making on the basis of current climate models can’t be justified in light of their known inadequacies. An analogy is drawn between the use of reliable engineering models utilized in the design of aerospace vehicles that must meet narrow tolerances to ensure safety, and the lack of anything resembling that level of reliability in climate models.

    There is no question that reliability is an issue that must be addressed to make the climate models more useful for policy decisions. However, the analogy, in my view, overlooks an important difference between the situations in which the two types of models are employed. In the case of an aircraft, we are not forced to fly it if we are unsure of its behavior. In the case of climate change, however, we face a forced choice, because engaging in major actions or refraining from those actions both have potentially enormous consequences. To focus on the most relevant element in this situation, CO2, the salient feature is the exceedingly long lifetime of any atmospheric excess we generate from anthropogenic emissions – there is no single decay curve, but the trajectory of decline toward equilibrium concentrations can be expressed as a rough average in the range of about 100 years, with a long tail lasting hundreds of millennia. In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries. The analogy, in fact, may be closer to a scenario in a hospital emergency room trying to manage a patient whose condition is deteriorating without all the diagnostic information that we would consider ideal. In that circumstance, the choice between waiting for more information and acting before it’s too late can be a daunting one.

    I don’t want to belabor the medical analogy except to say that in many medical emergency scenarios, an extreme position on this issue is dangerous. We start to act on the basis of the limited knowledge available, and at the same time continue to gather information so that we can modify our actions as the situation evolves. In the case of climate change, it turns out, fortuitously I believe, that there is no practical way we can accomplish everything envisioned in a complete mitigation/adaptation scenario over the course of only a few years, and perhaps not within a few decades. The true choice is whether or not to start, and if so, start with what steps. To me, this is where legitimate debate should be focused, including the possibility of “no regrets” actions of the type Dr. Curry has mentioned in the past. The utility of current models should probably be evaluated with that perspective in mind.

    • Fred
      Added CO2 “will warm us for centuries.”
      With a CO2 half life of 5 years in the atmosphere?
      When current solar cycle 24 trends suggest we may be heading into a Dalton solar minimum?
      When we don’t know if we will see a Little Ice Age again within this next century?
      That is extrapolating too far.

      • David – Both of those items are misconceptions that have been addressed in detail in earlier threads. There is no single CO2 half life, but rather a series of decay curves for chemical equilibration in the upper ocean, mixing into the deeper ocean, buffering from carbonate sediments, and restoration of carbonates from weathering of terrestrial silicate and carbonate rocks. The fastest of these involves decades, the subsequent steps centuries, and the weathering process hundreds of thousands of years. The “5-year half life” is not a concentration decay half life but simply the exchange rate of CO2 molecules between the atmosphere and terrestrial and oceanic sinks. That misconception has been refuted many times here and elsewhere, but it seems to come up again and again, so I suppose we will have to deal with it in the future. It should not be thought of as reflecting the rate at which CO2 concentrations decline; that rate is very slow.

        The LIA threat has also been shown to be false, based on the minor forcing change that would result from a “Dalton Minimum”. Whether such a minimum is imminent is unclear, but it probably is not.
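        The multi-timescale decline described above corresponds to impulse-response fits that express the airborne fraction of a CO2 pulse as a sum of exponentials. The coefficients below loosely follow published Bern-type fits but should be treated as illustrative, not canonical:

```python
import math

def airborne_fraction(t_years):
    """Fraction of a CO2 pulse still airborne after t years (toy fit)."""
    terms = [(0.217, None),     # effectively permanent on 1000-year scales
             (0.259, 172.9),    # deep-ocean mixing
             (0.338, 18.51),    # upper-ocean equilibration
             (0.186, 1.186)]    # fast land/surface uptake
    total = 0.0
    for amplitude, tau in terms:
        total += amplitude if tau is None else amplitude * math.exp(-t_years / tau)
    return total

print(airborne_fraction(5))    # ~0.73: far more than a "5-year half life" implies
print(airborne_fraction(100))  # ~0.36: roughly a third still airborne
```

        The ~5-year figure describes how quickly individual molecules are exchanged with the ocean and biosphere, not how quickly the excess concentration declines.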

      • Fred,
        That is a circular, or at best incomplete, explanation.

    • “In other words, the CO2 we emit tomorrow, or refrain from emitting, is not something we can take back if we later decide we shouldn’t have put it up there. It will warm us for centuries”

      This is eco-green presumption. Neither the amount that the sink absorbs nor even the relative impact of CO2 itself is well understood.

      The amount we restrict growth over phantom science fears kills real people in real time. We as a country and globe are promoting a Pol Pot Utopianism that has a predictable negative outcome. Carbon fuels improve people’s lives regardless of the negative outputs that are considered. AGW was manufactured only to rationalize regulatory expansion as technology improved and limited carbon faults.

      With no hard proof, the consensus wings it for the “common good,” and many who share your cultural/political views see no problem with the fascist inclination of selling it. “No regrets,” like “no labels,” is pandering to a false logic based on subset political wiring that should be completely refuted.

      • Judith – I haven’t done this in a while, and I think that I know the answer but it can’t hurt to check…

        So when you read the comment above from cwon – who is a frequent contributor to your blog – or the comment below on education posted by David – who is also a frequent Climate Etc. participant – does it even give you pause as to your assertion that there is some legitimate distinction (a vast asymmetry, no less) between the degree of political influence on the opposing sides of the climate debate?

      • Joshua –
        does it even give you pause as to your assertion that there is some legitimate distinction (a vast asymmetry no less) between the degree of political influence on the opposing sides of the climate debate?

        Why should it?

        Why would you believe that cwon or David or anyone else here has political influence that can match that of the IPCC – or NCDC – or the EPA – or the rest of the alarmist organizations?

        You’ve been banging this drum for a long time, Josh. And frankly, it’s a foolish question. But keep doing your thing.

      • It isn’t that Dr. Curry doesn’t admit the “influence”; it’s the duplicity in the way she does it that I point out time and again. Go to her media: it’s ok to describe skeptics on her blog as “white, conservative or libertarian”, but she falls back to “no labels” when she considers her own root culture. A total cop-out, and politically useful to whom?

        Somewhere on this thread is the idiotic Pew poll, which was designed in part to feed a traditional liberal narrative. Most “scientists” are not Republican, and the poll sampled a highly partisan association (the AAAS), which is filled with government-funded and teaching types. Quite frankly, many of them are not really “scientists” if you drill down. Regardless, it was a total hack job, brought up for this effect by a hack on this board: “liberals are smarter, better informed.” This is an ancient pejorative talking point that can be tracked to newspapers since before Woodrow Wilson, and the Pew poll is just another feeder tool in such an effort to advance a common tribal cause. It’s directly connected to the “settled science” mantra shouting down any dissent: “we really are the authority, it’s been decided; agw is real and our policy is correct”.

        Of course there is no such poll of the IPCC, and especially the summary committees. Of course it’s a global organization and Dem vs. GOP would have to be adjusted, but why is the topic only discussed by skeptics? A total wall of silence exists when the political inclinations of the “consensus” are requested from the “consensus.” We might get whines about “McCarthyism” while ignoring that, regardless of excess, plenty of communists were called before the committee and obfuscated their agendas and cultures. Scientists are offended that their objectivity is questioned by their politics, yet many condescend toward skeptics as “Fox News viewers,” to pick a common coded liberal lexicon.

        Whatever Dr. Curry has endured for just talking with skeptics will pale in comparison if she described the “consensus” correctly as “mostly academic/government-based cultures, left-wing and supportive of increased taxes/regulations regardless of CO2 claims”. Of course it’s true; I’m sure it hurts, since my impression of Dr. Curry is that she is by and large an academic liberal. She gave money to Obama’s ’08 campaign, which was her right to do, but of course it tells us which political tribes she identifies with. Worse, we will continue the roundabout that skeptics are “white, conservatives etc.” while scientists (referring to climate scientists) should avoid labels and politics as a cover story, when we know they are a highly charged group closely linked to eco-agenda-setting, statist regulatory promoters. The consensus will just choose not to poll itself or disclose. More make-believe.

        I’m sure that many people, regardless of their tribes, try to be objective about their work. That really isn’t the point. There is total passion involved for many (the consensus players often undisclosed), and it should be openly discussed. You don’t have to look at any site other than Real Climate to quickly figure out that any finding or study that might have skeptical attributes is met with hostile ad hom attacks and vindictiveness. The same warlike pattern can be found at skeptic sites. To me it’s a reflection of just how poor a field it really is that conclusions fly out in a matter of minutes. Will I really need to see what RC is going to say about the next paper just passed by Dr. Lindzen, for example?

        All I’ve asked Dr. Curry is to own up, the “consensus” such as it exists was a process of cultural self-selection with many people who share a common outlook on eco-regulation, general politics and tribal culture. Just as members of the AAAS, Greenpeace or Sierra Club might self-identify. There should be an acknowledgment but while many skeptics are encouraged by the communication with Dr. Curry of the consensus I take a more reserved outlook while the contradictions remain obvious.

    • Harold H Doiron

      Fred, I understand your medical triage analogy: in some urgent situations we must make a decision with less than complete information. Other climate researchers active at Climate, Etc. have indicated to me on previous threads that they agonize about the uncertainty in their models vs. the potentially catastrophic consequences of non-action. I can feel their pain and fretting over what to do about the situation. Perhaps they need to confer with others in our nation experienced and successful in dealing with these kinds of difficult decisions. (Not me, but I could recommend some experienced individuals who could be consulted.) I also agree with your point regarding “The true choice is whether or not to start, and if so, what steps?”

      Based on what I have been able to understand about AGW and uncertainty in the models that predict CAGW, I would disagree that our planet, and specifically the USA, was in a triage situation when the US House of Representatives passed high economic impact Cap and Trade Legislation and the President was ready to sign it, if only a few more Senators could have been convinced to vote for it. When doubling CO2 in the atmosphere is predicted to cause less than 1 deg. C global warming in 50-100 years (humans have adapted to larger temperature swings in the past), and when what the USA alone could do about it couldn’t make much difference, we clearly were not in a triage situation that justified such potentially harmful changes in public policy that had such slim chances of success. I just don’t think Climate Scientists who alerted us to the “crisis” gave US public policy decision makers the best overall advice on this decision. Somehow this issue got hijacked by profit seeking special interest groups without adequate vetting of the issue. Where was the formal, objective, and broad scientific and economic review on the urgent need for the US to make such a drastic policy decision, when the high economic growth rate areas of the planet clearly signaled they had no intention of limiting their CO2 emissions, ….and the economic analysis clearly indicated that higher energy costs in the USA would drive more economic growth to countries that had no intention of reducing their CO2 emissions?

      Given what we know today, with a couple of years of hindsight regarding climate science, our economy, high national unemployment, and the national debt issues, could the climate scientist community agree that Cap and Trade Legislation was a critical life saving public policy decision that had to be made, and that today we are worse off for not having made it? I suspect not when reading the healthy debate at Climate, Etc.

      When adverse consequences are on the line, and models are used for decision making, model validation status becomes more critical. If the model is not validated and the decision needs to be made without its predictions, what would the decision be? What would the urgency of the decision be?

      Some reasonable “first steps” to me that you have called for would be:
      1. Convene a national high-level scientific commission review of the CAGW issue (somewhat like the commissions convened to review the Space Shuttle Challenger and Columbia accidents, with some participation from objective scientists, engineers, economists, etc . outside of the climate change research community) to perform a Situation Appraisal (using a Situation Appraisal process from Kepner and Tregoe’s book, The New Rational Manager)
      2. Have the commission develop a long range plan to deal with the Situation, much like was done for the Apollo Program when our nation identified a crisis of being behind in the space race (some old timers tell me we really weren’t behind, but it sure seemed like it to me at the time). The plan might break down the Situation into required separate action plans for Root Cause Analysis, Decision Analysis and Potential Problem Analysis processes along the lines of example first steps such as:
      2.1 Establish a global average temperature change that would be “catastrophic” so that we would know how much global temperature change we can stand before the catastrophe occurs. What would be worse, a 2 deg C temperature increase or decrease? How sure are we that our actions in an attempt to control temperature would have the intended consequence?
      2.2 Confidently establish (if possible) Root Cause(s) of global climate temperature changes over the last 150 years and their relative importance
      2.3 Identify action that must be taken immediately, if any, to avert a potential impending catastrophe….this is the triage decision that must be made
      2.4 Identify what critical climate change data need to be monitored and, if we don’t have them, establish how, and at what cost, to get them.
      2.5 Identify actions that should be taken at certain decision gates in the future should current CAGW climate model predictions prove to be accurate over a 10 year period into the future (seems like they missed trends of the last 10 years that casts further doubt on the triage situation)
      2.6 Identify the research funding effort and waiting time needed to improve and validate climate models for higher confidence predictions
      2.7 Identify climate change adaptation and mitigation strategies that might be employed and weigh their costs and probability of success against CO2 emission control strategies
      2.8 Identify what we might release into the atmosphere to reduce CO2 levels, its effectiveness and cost to do it, if we ever concluded CO2 levels were getting too high (they have been much higher in the past history of the planet and somehow, through natural causes they were reduced to current levels). There are strong natural sources and sinks for CO2 in our atmosphere that seem to swamp the anthropogenic signal. With a better understanding of the natural feedback mechanisms, how might we attempt to control the GHG effect with control mechanisms other than CO2 that might be more effective?
      2.9 Make those decisions now that the well-thought-out plan shows must be made now, and establish decision gates for the future, with appropriate data collected to support future decisions that must be made.

      I’m sure there are many other possible “first steps” that could be considered. Why is everyone so focused on CO2, which isn’t the biggest driver of the GHG effect? Has CO2 emission control been proved to be the most efficient way to attempt to control the GHG effect? If we have a CAGW crisis (I still think this is a question for disciplined Situation Appraisal processes to establish), what is the role of climate scientists as well as other science, engineering, and economics disciplines in saving our planet and nation? Does the climate change research community have all the answers, or do they need some outside help? Who should be selected to head up the US effort in this crisis management problem? Are we going to defer to the UN for leadership on this issue? Why would that be in our national interest?

      As an engineer, it seems to me that way too much emphasis and research funding has been placed on the minor GHG, CO2, before Root Cause of recent climate trends has been firmly established, and that there is not enough emphasis and research on other important factors in climate change. I think we need to understand regional climate change much more critically than “global average” temperature change which may be an ill-posed metric. I want to see a better process for our country’s reaction to the potential problem of climate change by giving the problem consideration by a broader range of technical talent and crisis management experience at our nation’s disposal.

    • I addressed this above. Segalstad has been responsible for promoting the myth of a 5-year half life. Why he misunderstands it is unclear, but he has probably confused data based on C14 that describes exchange rates with the rate at which excess CO2 declines. The latter is very slow as I indicated above.

      • I’ve since checked the site you linked to and it confirms the nature of their confusion between exchange rates and concentration decline rates.

      • Fred Moulton

        I further came across SOURCES AND SINKS OF CARBON DIOXIDE by Tom Quirk
        His arguments seem to me persuasive that natural causes are a better explanation for the hemispheric and global variations than fossil fuels.

        Then see Fred Haynie’s detailed analysis of CO2 driven by natural causes, not anthropogenic. His analysis of the different curve shapes between the Arctic, the tropics, and the Antarctic as pointers to the primary CO2 drivers is thought provoking. http://www.kidswincom.net/climate.pdf

        See Roy Spencer on how Oceans are Driving CO2

        Richard Courtney found similar results earlier.

        Murry Salby noted major correlations of CO2 with soil moisture and temperature. I look forward to his papers/presentations.

        Bruggemann et al. review: Carbon allocation and carbon isotope fluxes in the plant-soil-atmosphere continuum: a review
        Biogeosciences Discuss., 8, 3619–3695, 2011
        http://www.biogeosciences-discuss.net/8/3619/2011/
        doi:10.5194/bgd-8-3619-2011
        “The flux of carbon dioxide between the atmosphere and the terrestrial biosphere and back is 15–20 times larger than the anthropogenic release of CO2 (IPCC, 2007). This large bidirectional biogenic CO2 flux has a significant imprint on the carbon isotope signature of atmospheric CO2 (Randerson et al., 2002), . . . there is still a lack of understanding of the fate of newly assimilated C allocated within plants and to the soil, stored within ecosystems and lost to the atmosphere.”

        In reviewing such developments, I am not persuaded that CO2 fluxes are sufficiently well known, or that anthropogenic emissions dominate CO2 fluxes. It appears the science is not “settled”, but that we see further ongoing developments to better understand and model the variations and their causes.

        Any competent reviews to the contrary?

      • It comes down to whether you believe there is a net CO2 flux into the atmosphere, and a net flux into the ocean (acidification), and if yes to both these, where is it coming from?

      • David – The excerpt you quoted reveals the same confusion between exchange rates and concentration decline rates I discussed above. The Salby arguments have been examined extensively in a previous thread, and I believe their inadequacy has been well documented, but you can review that thread for details. Thus, from the only evidence you actually cite, there seems little reason to doubt the basic principles underlying the anthropogenic CO2 concentrations and their rate of decline as a function of concentration.

        Since you have only given links to other articles, but neither data nor the logic of their thinking, I don’t yet see any value in pursuing each of them until their evidence is shown here, but I think that the very long atmospheric lifetime of excess CO2 is now well established. I doubt that this is an area where our understanding is going to be overturned by some completely unexpected new evidence.

      • I subsequently looked at the Haynie site, which is not a published paper but a series of slides. I didn’t see anything about concentration decline rates, the topic of these recent comments. His conclusions about CO2 sources are immediately refuted by observational data on ocean accumulation of DIC and the pH reduction, as discussed in the Salby thread. His analysis is therefore wrong. I probably won’t review the other sites at this point, because nothing yet appears to challenge the large quantity of data on anthropogenic CO2.

      • Fred
        The issue is about the residual difference of the major sinks/sources.
        Since fossil emissions are about double the average CO2 increase, there must be natural sinks at least as large as the difference. With natural sources and sinks 20–30 times larger than fossil combustion, we would need to know the magnitude of each of those natural sources and sinks to within a small fraction of the average CO2 increase in order to quantify the average CO2 change, as well as the annual and semi-annual variations; that requires accuracies of roughly 1% in the major sources and sinks. Until you can quantify those rates, you cannot determine whether the anthropogenic emissions are a major or minor cause.
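        The accuracy argument in this comment can be put in round numbers. Below is a minimal error-propagation sketch; every flux value (9 GtC/yr anthropogenic, 4.5 GtC/yr atmospheric increase, 200 GtC/yr gross natural exchange) is an assumed, illustrative figure, not a measurement.

```python
# Rough error-propagation sketch of the accuracy argument above.
# All figures are illustrative round numbers (GtC/yr), not measured values.

fossil_emissions = 9.0    # assumed anthropogenic flux
atmos_increase = 4.5      # assumed observed atmospheric CO2 rise
natural_gross = 200.0     # assumed gross natural exchange each way

# Net natural sink implied by a simple budget (emissions minus what stays
# in the atmosphere must be taken up somewhere):
natural_net_sink = fossil_emissions - atmos_increase   # 4.5 GtC/yr

# If each gross natural flux is only known to within `frac`, the
# uncertainty in any inferred net natural flux is roughly frac * gross:
for frac in (0.01, 0.05, 0.10):
    uncertainty = frac * natural_gross
    print(f"{frac:>4.0%} accuracy -> +/- {uncertainty:.0f} GtC/yr "
          f"(net sink to resolve: {natural_net_sink:.1f} GtC/yr)")
```

        Only at about 1% accuracy does the uncertainty in the gross fluxes shrink to the same order as the net flux being inferred, which is the commenter's point.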

      • David

        The fundamental argument of those fearing CO2 on this issue is that we have not identified another natural emission source that has changed the amount of CO2 emissions during the period that humans have started emitting CO2—therefore, the rise is due to humanity.

        It is a pretty simple argument, and probably correct

      • Actually, we have several other suspect sources of increased CO2 emission.

        1. Land use. Agricultural practices are both increasing the emission of GHGs and reducing the sequestration of carbon in soil and biomass: a tendency to plant less deep-rooted species, turning former composting material into biofuel, raising more methane-belching ruminants (the methane, a GHG itself, giving rise to CO2 when it breaks down), and changes in the microbial composition of soils tending to favor CO2 emission over absorption.

        2. Temperature rise leading to reduced CO2 absorption and increased CO2 emission from common CO2 sinks; warm ocean waters hold less CO2, for example.

        3. Desertification and urbanization, paving roads and disrupting habitats recklessly in ways that lead to die-off of carbon-sequestering plants and microbes (wasting as distinct from actually using the land, per se).

      • Bart R,
        Check out limnology, the study of fresh water bodies.
        There is credible talk, which I am investigating, that the impact of fresh water bodies on the carbon cycle has been vastly understated. It appears that huge amounts of carbon are sequestered by freshwater eco-systems.

      • Bart

        I have previously posted several articles noting studies showing natural variations in CO2 emissions levels. We do not have decent longer term data on natural emissions levels or variances over time.

        About the only thing we know is that humans are emitting CO2. We really do not know how much, or if we are the cause of the overall rise. We suspect we are a major contributor

      • Rob seems to luv the “argument” from ignorance, to the point of insisting on magnifying it.

  22. It would be foolish to expect a climate model to predict future changes in global temperature with certainty. So what degree of forecast accuracy should we expect from climate models? How accurate should a forecast be to be useful?

    • Like playing horse shoes or being a little bit pregnant? You can’t be serious. Science is accurate and repeatable or it is just a wrong guess about what the data means. What next – do overs?

      • John Carpenter

        dp,

        “Science is accurate and repeatable or it is just a wrong guess about what the data means.”

        Knowledge is accurate and repeatable…. science is the way to get there.

      • Knowledge is a changing gradient. Results are repeatable if the science is done right. Results are not knowledge. Your knowledge of knowledge just changed in this thread, indicating it is not guaranteed to be accurate, or that degree of accuracy is even a characteristic of knowledge. Knowledge is simply acquired information. I am aware that there are people who believe the moon landings were faked. My knowledge is correct, but is it accurate? I’ve no way to know, because I don’t personally know anyone who believes the moon landings were faked. The accuracy is suspect.

        It does not matter to the puzzle above if the landings were faked or not.

      • I’ve never been more serious. If you believe guesses are just as good as climate model forecasts, cite the verifiable guesses made at the time the first IPCC forecasts were made and we will compare them.

  23. How’s this for settled science?;

    http://www.guardian.co.uk/science/2011/aug/18/aliens-destroy-humanity-protect-civilisations

    NASA and Penn State, together again, speculate about “green aliens” (of course they would be left-wing and assumed much higher intelligence!) would destroy humans to save the planet.

    Our tax dollars working hard.

    • A fascinating study! It explores all possible advantages and disadvantages of contact with intelligent aliens. Guardian has a link to the paper.

      • exactly it is a fascinating study, despite attempts by the “burn the books” movement to pretend it’s all about them.

    • cwon14,
      “The Day the Earth Stood Still”- both the original and the remake- dealt with this better and with no tax payer money.
      Notice that in the 1951 version, the aliens were annoyed at our violent bad behavior.
      http://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still
      Fresh off of WWII and at the start of the Cold War and the ramp up into nuclear weapons including thermonuclear H bombs, the big fears of self destruction were centered around war.

      In the 2008 remake, the aliens were here to deal with our wickedness regarding the environmental damage we wicked humans are causing Earth.
      http://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still_(2008_film)
      Both movies depend on a savior to intervene and bring the plain truth of how wicked we humans are and to demand that those in charge reform or face extinction. Of course, enlightened humans recognize the savior’s gospel and seek to protect the savior from the evil violence of the unenlightened, with lots of plot twists and special effects.
      Now we have scientists who have figured out how to write a grant proposal to pay them to put a veneer of science over this B-movie story plot.
      How cynical and derivative are these ‘scientists’?
      How lazy and ignorant are those who actually funded this faux research?

  24. Dr. Curry,
    I asked a question up thread, and re-stated it based on your feedback with a specific quote from the link I posted.
    I was hoping for a clear answer.
    Here is another try at a question, in an attempt at salon-style communications:
    Why would anyone in climate science not want severe tests of a theory that many of its proponents believe represents the most profound challenge to humanity?

    • Let me give you the cynical answer you (and I) believe is actually applicable: they know they can’t possibly pass such tests, and don’t want to suffer the loss of status and resources acknowledging that would cause.

      • Brian H,
        It is disappointing that when a reasonable question is asked in a reasonable fashion it is often ignored.
        My understanding of the Salon culture is to increase the tone of civil discussion. That would imply engaging on questions that are asked civilly.

  25. Reality check: Maoists can send all Americans to the farm for reeducation and academia can add Hunting and Gathering to its Basket Weaving curriculum and what is that going to change?

    • GOOD GOD ! Someone should put a lock on the medicine cabinet.

      • It is using global warming, politics and fearmongering to take over the industrial base. What is Liberal Fascism?

  26. John Carpenter

    Judith,

    Here is a severe test for you to take… can you explain what this sentence is supposed to mean?

    “Only then might the potential falsity of an implication of the conjunction of the prediction and the additional assumptions constitute a real potential challenge to assuming the truth of the prediction, as opposed merely to a challenge to the conjunction of the prediction and the additional assumptions.”

    Yikes!

    ps I’m not really expecting an answer

    • Ya, I thought that was cute. Here’s my stab: if you want to verify your hypothesis about particular variables, you first have to be sure that all the wider theoretical assumptions are valid so that you aren’t dealing with an indecipherable mash-moosh. ;)

  27. I liked the simplicity with which he outlines the justification for “severe testing” of climate models, and indeed of most assertions about the way the world works. My guess is that if climate science had not become so politicized, modelers would probably be quite comfortable stating conditions under which the models had clearly “failed”. I sure hope so, because as Katzav notes, this type of testing is a cornerstone of good science.

  28. It’s easy to get a denier’s goat when he starts criticizing long-term global temperature forecast from climate models. All you have to do is ask one simple question. “You had something better?”

    • And you know somehow there is not something better to have? Please share. For those who wonder, M. Carey asserts he has found the upper limit of what can be known and beyond which there is nay aught. I do hope he explains it cogently.

      • If you had something better or someone else had something better, show it. Otherwise, I got your goat.

      • Chief Hydrologist

        Having consulted Tim Palmer’s Lorenzian Meteorological Office

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker-Planck equations, are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza (this volume).’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)
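        The “ensemble techniques” in the Palmer quote can be illustrated with a toy sketch: instead of one forecast, evolve many slightly perturbed initial states and read the spread off as an estimate of the probability density ρ(X,t). The logistic map stands in for a chaotic model here; the map, parameters, and perturbation size are all illustrative assumptions, not anything from Palmer’s chapter.

```python
# Ensemble-as-probability-density sketch: evolve many perturbed initial
# states through a toy chaotic map and histogram the result. Illustrative.
import random

def logistic(x, r=3.99):
    # Chaotic logistic map, a stand-in for a climate model time step.
    return r * x * (1.0 - x)

random.seed(0)
n_members, n_steps = 1000, 50

# Tiny perturbations around a nominal initial condition:
ensemble = [0.2 + 1e-6 * random.gauss(0, 1) for _ in range(n_members)]
for _ in range(n_steps):
    ensemble = [logistic(x) for x in ensemble]

# Histogram approximation of rho(X, t) after n_steps, in ten bins:
bins = [0] * 10
for x in ensemble:
    bins[min(int(x * 10), 9)] += 1
probs = [b / n_members for b in bins]
print([round(p, 2) for p in probs])  # estimated probability per decile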

      • That’s why they use ensembles of climate runs. Point being…?

      • I don’t think ensembles of climate runs count as ensemble techniques in Buizza’s terms.

      • Climate is chaotic and prediction as such is impossible – other than as a probability density function. You mistake the opportunistic ensemble of the IPCC for systematically designed model families.

        Which bit of uncertain didn’t you understand, Jim? Perhaps it was the phase space? The latter is the language of dynamical complexity – which you don’t understand despite the many quotes and references I have provided for you.

        The point being that you continue to make claims about things you don’t understand.

      • So McCarey – got nothin’ to say about this quote from the head of the European Centre for Mid Range Forecasting? Thought not – you got nothin’ at all.

        This compares to the IPCC guesses…

      • No, you compare numbers to numbers. The IPCC forecasts are numbers. Your quote has lots of words, but no numbers, and thus can be compared with the IPCC forecasts only if you like making meaningless comparisons. If you do, I can suggest all kinds of meaningless comparisons for your consideration. For example, what’s prettier, a chicken or a duck?

      • Carey,

        First with the goats and now we’ve got you checkin’ out the chickens and the ducks, too? Maybe what we have here is all just a big misunderstanding, ol’ buddy. I’m sure that’s it. Carey, ol’ sport, this is a Climate Science blog–not Ol’ MacDonald’s farm? See the difference? Got it? At’s a-boy!

      • Chief Hydrologist

        A duck of course – in an orange sauce. Why the f… should I give you a number when it has just been explained to you by the head of the European Centre for Medium-Range Weather Forecasts (ECMWF) that it is utterly impossible to do more than provide probabilities? Do you want me to make it up as the IPCC has done?

        You lack any fundamental understanding of AOS – and insist on the meaningfulness of a range of numbers selected subjectively by modellers, with plausibility determined as ‘a posteriori solution behaviour’. There are radically different answers possible within the range of plausible formulations – well known to the modellers, if you’d care to do more than swallow the green PR holus-bolus. To continue to insist that the IPCC numbers are something meaningful is to show that you are either a liar or a dupe.

      • Chief, the IPCC’s forecasted numbers are far more meaningful than your wishy-washy predictions that climate probably will change. Useless are predictions that do little more than say global temperature probably will go up or down, unless it remains the same. Moreover, the ….. wait a minute … what am I doing trying to explain this to someone who thinks ducks taste pretty.

    • “You had something better?” Well yeah, Mr./Ms. M. Carey, I have had and continue to have something better. I have a life! That’s why I’m not a pitiful greenshirt, blogospheric pest.

      And, oh by the way, I don’t get my Ho-Ho’s off on Dorito’s Ding-Dongs, either. Can you say the same, Mr./Ms. M. Carey? Can you?!

      Twinkie-binger!!!!

      • mike, I’m glad to hear you are pleased with your self, but the subject is climate projections.

        Maybe I should return dp’s goat.

      • Wow, Carey, you’re really into goats, aren’t you? Little too weird for me, but I don’t want to be judgemental. I understand it’s your thing. And since you’re into goats you might want to check out Eliphas Levi’s illustration of the Goat of Mendes. Tell me what you think–a remarkable resemblance to a certain smut-novelist party-animal who currently guides the destiny of the leading greenshirt boondoggle–no?

      • Hardly a Patch on the real thing. Or my first name isn’t Rajendra. Which it isn’t.

    • So your assumption is that a bad guess is better than no guess at all?

  29. Assess climate model predictions by tests as severe as you will, provided first that the tests themselves have passed somewhat more severe tests.

    Pretty pointless measuring the length of a plank with an elastic tape.

    Any test itself is only another model; calibrating one poorly understood (by Rob Starkey, for example until 4:12 pm, thanks to David Wojick’s excellent communication skills, and then apparently lost track of again by 8:00 pm, sorry David, valiant effort) against a worse understood one, there’s hubris in that.

    There do, or may, exist specific, measurable, apt, relevant, timely criteria by which models of chaotic — even spatiotemporally chaotic — systems could be tested.

    Predictive power is emphatically not one such criterion; indeed, were such a test to succeed too regularly on scales over which the system is chaotic, that would rather suggest that the understanding of the system is poor.

    Indeed, for some economic models, too good a record of predictive success is a certain indication of something seriously amiss (Madoff).

    One valid test of a climate model might be like a Turing test: a large compilation of model runs is collected and the real data tossed into the mix; if no statistical method could determine, better than random chance, which set of figures is the real dataset based only on the data provided — regardless of the degree of interrogation by statistics — the model passes.
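    This Turing-test idea can be sketched minimally: hide a “real” series among model runs and check whether a chosen statistic singles it out. In the sketch below both the real and model series are synthetic random walks drawn from the same process, so the real one should not be distinguishable; the series generator, the statistic, and all parameters are illustrative assumptions.

```python
# Indistinguishability sketch: rank a hidden "real" series among model
# runs by some statistic; an extreme rank would flag a detectable gap.
import random

random.seed(42)

def random_walk(n=100, sigma=0.1):
    # Synthetic stand-in for a model run or observed record.
    x, path = 0.0, []
    for _ in range(n):
        x += random.gauss(0, sigma)
        path.append(x)
    return path

model_runs = [random_walk() for _ in range(99)]
real_series = random_walk()      # drawn from the same process here

# Statistic: final value of the series (any statistic could be tried).
stat = lambda s: s[-1]
pool = [stat(s) for s in model_runs]
rank = sum(1 for v in pool if v < stat(real_series))

# If the real series is indistinguishable from the runs, its rank is
# uniform on 0..99 across repeated trials; ranks pinned at the extremes
# would mean the statistic can tell the real data from the models.
print(f"rank of real series: {rank} / {len(pool)}")
```

    This is essentially a one-statistic permutation test; the full idea in the comment amounts to trying every statistic an adversary can think of.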

    Contrary to what some who deprecate computer models claim, it’s far likelier someone studying such models is performing science than anyone who trots out a projection based on a sum of sine curves and offers to place wagers on what the actual outcomes will be.

    Prediction of future weather is not a severe test of a climate model. To borrow a term from a wiser man than me, it’s not even a test.

    Applying weather prediction to climate models is akin to testing how well an aircraft flies through solid ground or backwards through time. Sure, it’d be really impressive, but it’s not (except in science fiction) what aircraft do.

    That said, I doubt the climate models of this generation would pass a severe test, and it’s unlikely they’ll get to that point for some generations yet to come.

    When they do gain such skill, then all sorts of climate predictions can be made and examined by observation and experiment on that ensemble of climates indistinguishable from the real thing.

    Or does anyone still claim that they have Solomon’s wit to divine a valid distinction between a computer model and a benchtop model, such as some use in laboratories to experiment by analogy?

    Many such bench experiments, too, fail severe tests.

    Einstein, Mendeleev and Newton have been proven wrong far more times by new first year laboratory students than any computer simulation has delivered wrong output.

    • That’s really ripe stuff. Whatever.
      Valid models are thus, you opine, even less useful for forming policy (which necessarily involves anticipating the future) than invalid ones.
      Good one.

      • Brian H

        It appears Scott Adams read your comment today, and has this to say about your objection: http://www.dilbert.com/2011-08-19

        Not to sound defensive.

        Come up with a valid test, and I’m only too glad to see a validation.

        Invalidate the test, and the validation fails to have any value.

        Policy ought — I opine — be formed taking into account the quality of the validation.

        Rob Starkey demonstrates why policy ought to give some (in)validation the most sceptical possible appraisal, as clearly even people purporting expertise will often propose ludicrous test measures.

    • The testing of models is done all the time-it is the normal process.

      I am not suggesting that climate models be subjected to some special types of tests that are especially difficult to pass – I am simply asking that climate models follow the process that all other models follow during development and validation: state the criteria you are trying to forecast accurately and over what timeframes, and then measure how your model did against those criteria.

      I had assumed that GCMs followed this same process, but were simply not very accurate as yet. It seems I was wrong and, per Judith, the models have been developed based upon philosophy and not engineering principles. You can’t really validate philosophy very well, and it seems pretty silly to decide to change the course of an economy based on some group’s particular philosophy.

      • Rob Starkey

        I think you should, if you accept the testing of models to be the normal process, suggest that climate models be subjected to tests that are difficult to pass.

        Difficult to pass implies a scale of measure correlating success with some objective meaning. It means the validation test stands for something.

        One criterion for models of chaotic systems over their scale of chaotic behaviour — and we know climate systems are spatio-temporally chaotic as regards weather, by manifold characteristics over multiple scales — is that they are no more forecastable than the systems they model.

        If one could forecast the outputs of a model of a chaotic system by any other means than re-running the system from identical initial conditions, they would have proven one of two things:

        1.) The ‘chaotic’ system isn’t actually chaotic, and the forecasting technique should work as well on the actual system as on the model;
        2.) The model isn’t a very good representation of the chaos in the actual system, and while it may be good for something, it will fail for many interesting questions.

        Chaos Theory is extremely simple mathematics. Unpredictability is one of its most straightforward elements. Please find some resource or teacher that can explain why, “not very accurate yet” is meaningless in this context.
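        The unpredictability being described can be shown in a few lines: two trajectories of a chaotic map that start almost identically diverge until they are uncorrelated, which is why point forecasts beyond the divergence time carry no skill. The logistic map and its parameters here are illustrative stand-ins, not a climate model.

```python
# Sensitivity to initial conditions in a chaotic map: two trajectories
# separated by 1e-6 at the start end up completely decorrelated.

def logistic(x, r=3.99):
    # Chaotic logistic map; r=3.99 puts it well inside the chaotic regime.
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by 1e-6
max_gap = 0.0
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
    if step % 15 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.6f}")
print(f"largest separation over 60 steps: {max_gap:.3f}")
```

        The tiny initial gap grows roughly exponentially until it saturates at the size of the attractor itself; past that point only the statistics of the trajectory, not its path, are predictable.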

        Popper’s philosophy is obviously flawed as some seek to employ it, as it cannot withstand application to situations where prediction is not a characteristic of a system.

        Test the reliability of weather predictions within climate models over the same span of time as weather predictions are reliable in the real world — perhaps up to two weeks, at some percentage. If the reliability on such time scales of the model weather predictions of the model climate is indistinguishable from the reliability of the same prediction methods in the real world, you’ve gone a long way toward validating the climate model.

        And it still won’t predict real weather.

        It seems awfully silly to deny policymakers the valuable information that can be delivered by science simply because you lack the imagination to come up with applicable tests, or are so blinded by the trope of inappropriately extrapolating weather prediction over longer and longer spans of time that you can’t see it’s not a valid method.

        Unless you prefer bad policy.

      • Bart
        I suggest fair tests that evaluate the integrity of the model for measuring different criteria over different timescales. The principles are very well established.

        In regards to model validation, the passing of tests is not hard or easy, as that implies different amounts of effort expended. It is just math — did the model succeed in predicting certain criteria, repeatedly, and over what timescales?

        To believe the results of anything called a model before ensuring the model has passed such tests is having faith similar to one’s belief in religion.

      • So we’re agreed: climate models ought to be validated by severe tests, such tests themselves validated by severe tests too, but such tests would not include weather forecasts?

  30. Lindzen and Choi Part II
    No red herring. I misinterpreted the last sentence of the following quote from the June 10 thread:

    “The new paper (dubbed here “Part II”) addresses the published criticisms.

    Citation: On the Observational Determination of Climate Sensitivity and Its Implications. Asian Pacific Journal of the Atmospheric Sciences, in press. [link to complete manuscript]

    The manuscript addresses many of the concerns raised by Part I, some of these concerns were addressed more satisfactorily than others. And a whole host of new issues are raised by the paper. The PNAS reviews of the paper can be found here.”

    The “here” was what I linked.

  31. Deniers go silent or silly when asked to cite long-term forecasts of global temperature change made by deniers 20 or 30 years ago.

    • Chief Hydrologist

      “Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.” Richard Feynman

      http://www.americanthinker.com/2007/11/enso_variation_and_global_warm.html

      ‘Well when events change, I change my mind. What do you do?’ Paul Samuelson.

      • ?? I was pretty sure that quote came from Keynes. And that he was lying through his Van Dyke.

      • This is a direct quote from Samuelson. Keynes later said something similar.

      • Larry Goldberg

        Samuelson made that remark on TV on December 20, 1970 – http://quoteinvestigator.com/2011/07/22/keynes-change-mind
        Keynes died in 1946. Later, in 1978, Samuelson attributed a slightly different version of the remark to Keynes: “When my information changes,” he remembered Keynes saying, “I change my mind. What do you do?”
        However, there is no definitive way to track down where or when Keynes may have used that expression – or any one of the variations that have been quoted through the years.

      • Yes – I should have said that it was later credited to Keynes.

        ‘Legend says that while conferring with Roosevelt at Quebec, Churchill sent Keynes a cable reading, “Am coming around to your point of view.” His Lordship replied, “Sorry to hear it. Have started to change my mind.”

        I think I will go back to crediting it to Keynes.

      • Steve Garcia

        I love that quote from Feynman, who I admire a lot. Not heard it before. It was easy for him to say, though – he had a helluvan education and ended up on the forefront at Los Alamos, getting to play with ideas to his heart’s content. But to do the “undisciplined” thing right, he had to have a LOT of discipline under his belt as a foundation to work from.

        Many get a fire lit under them years after leaving school (read: disciplined learning), and a great many of those never learned any thinking discipline while in school. So they come to a subject with varying levels of sloppiness to their thinking, and they are fairly “easy marks” for half-baked ideas from people like von Daniken.

        Feynman was in the right discipline at the right time, and fell into a bed of roses, from which the world was his oyster. HE was able to play with ideas and make a living at it. (Yes, it helped that his IQ was probably up around 200.)

        I’ve seen SO many “sloppy thinkers” out there, and I’d hate to see that quote make it even more sloppy out there.

        At the same time, climate science skepticism is NOT really one of those sloppy thinking venues/side shows. Yes, there are many skeptics who are to some degree sloppy thinkers. I know I am not 100% non-sloppy, though I try to be as un-sloppy as possible. But from what I’ve seen of the non-academic, non-credentialed supporters of CAGW, I think there is quite a bit more discipline on the skeptical side of the aisle. VERY few “amateur scientist” skeptics ask dumb questions. And skeptics would even be BETTER informed if the journals didn’t have pay walls up that block our access to papers.

        Skeptics are seriously, seriously TRYING to learn as much as they can – in as disciplined, yet irreverent and original ways as possible. You will find a LOT of science being discussed at WUWT and ClimateAudit, and a lot of learning going on. To be irreverent is to be skeptical. Does anyone think that Feynman ever accepted anything he was taught, without running it past his skeptical filter? Even though he was working with the best minds of his time, he doubted everything they said – though he did allow that they were probably right – but he accepted none of it until he verified it himself.

        I think every skeptic in every field does the same, and each science has enough holes in it to feed their irreverence. I would NOT include in “skeptics” such touted professional “skeptics” as Shermer, whose sole purpose in life seems to be to defend the scientific establishment against people who have the temerity to be “undisciplined, irreverent and original.”

      • Feynman made a distinction between learning by rote, as most people do, and learning by understanding.

      • Steve Garcia

        Rote isn’t learning. It is just pumping data in.

        Science needs more guys like him and Kary Mullis.

    • M.Carey,
      That eco-comedian tried to play the same game.
      He failed as well.
      The correlation between believers losing an argument and their increased use of ‘denier’ is amazingly strong.

    • –>”Deniers go silent or silly when asked to cite long-term forecasts of global temperature change made by deniers 20 or 30 years ago.”

      “Deniers” are not deniers of climate change; they are deniers of schoolteachers who in their hubris believe they can predict the future and save the world from the productive, while achieving their socialist Utopia by pushing climate porn on taxpayers’ children.

  32. JC

    Here is a question for you.

    In the following graph, why has the global mean temperature (GMT) touched its upper boundary line (blue), which has a warming rate of 0.06 deg C per decade, only three times, about every 60 years, but never crossed it for long in the last 130 years?

    http://bit.ly/ps8Vw1

    From the above graph, is the IPCC’s projection of 0.2 deg C per decade warming over the next two decades possible?

    I say it is impossible. The GMT must move towards its lower boundary line (pink) in the next two decades. As the upper boundary line is an upper limit, the global warming rate for the next two decades cannot exceed 0.06 deg C per decade (the slope of the upper boundary line).

  33. Peter Davies

    JimS

    You said
    “The observation of the climate is the observation of trends. The data (the historical temperature record) upon which climate models are based are not “data” in the sense that it can be replicated under strict guidelines. The data is a one-time historical occurrence, never to be repeated in the same sense that the performance of a certain grade of steel “repeats” itself under the same test by independent laboratories. This is the fundamental difference between science and non-science. Science is based upon data that can be replicated, as well as “predictions” that can be replicated.”

    This statement deserves further discussion in its own right. While there are obviously more things other than just temperature being measured, the fact of the matter is that the system that yields this data is NOT ergodic by nature and hence not capable of prediction.

    • But are there boundary conditions? Based on the record, the climate stays within rather narrow bounds, except when it doesn’t.

      The proper policy response to the above is to make hay while the sun shines and prepare for anything.

      • Peter Davies

        I agree that climate generally operates between bounds which are narrower in the short term than in the much longer term. It would then be in these conditions that ice ages and extended warming periods come and go.

        In relation to policy responses however, I disagree that there can be much that we can do to mitigate the effects of climate trends that can take many decades to materialise.

        The current drive against the consumption of fossil fuels in the western economies would not only have negligible effect on world climate but would also have a most detrimental effect on these economies – all with no proper scientific or logical basis.

      • Yes, when I said “make hay” I meant accumulate wealth and resources and keep all options open by improving adaptive capacity and technology. I certainly did NOT mean implement pointless “mitigations”!

  34. Considering model testing, McKitrick et al 2010 might also be interesting

    http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/mmh_asl2010.pdf
    published here
    http://onlinelibrary.wiley.com/doi/10.1002/asl.290/abstract

    quote from the discussion and conclusion:

    “In our example on temperatures in the tropical troposphere, on data ending in 1999 we find the trend differences between models and observations are only marginally significant, partially confirming the view of Santer et al. (2008) against Douglass et al. (2007). The observed temperature trends themselves are statistically insignificant. Over the 1979 to 2009 interval, in the LT layer, observed trends are jointly significant and three of four data sets have individually significant trends. In the MT layer two of four data sets have individually significant trends and the trends are jointly insignificant or marginal depending on the test used. Over the interval 1979 to 2009, model-projected temperature trends are two to four times larger than observed trends in both the lower and mid-troposphere and the differences are statistically significant at the 99% level.”

    • I.e., the models are reliably unreliable.

    • Andre,
      One way to interpret the summary, “Over the interval 1979 to 2009, model-projected temperature trends are two to four times larger than observed trends in both the lower and mid-troposphere and the differences are statistically significant at the 99% level,” is that people who want to see something badly enough will see it, no matter if it is there or not.
      The social movement of AGW is supported by people willing themselves to see a climate crisis where there is none.

      • The crisis you refer to isn’t based on tropical tropospheric temperature trends.

      • lolwot,
        The troposphere, until the studies were made, was the subject of specific predictions by the AGW community.
        Those predictions were incorrect.
        Sort of like the claims about extreme weather:
        Specific and powerful until they are forgotten or explained away when reality declines to cooperate.

      • The AGW community predicted the lower troposphere and middle troposphere would warm globally. 10 years ago skeptics were citing the UAH satellite record and radiosonde records to argue that it hadn’t warmed. Skeptics used that to question the warming in the surface record. Turned out the satellite and radiosonde data was wrong. Now both show warming.

        10 years before that the skeptics were arguing that the surface temperature record didn’t show any warming.

        And 10 years before that there’s Lindzen questioning whether the Earth had even warmed since 1900.

        The interesting pattern here is that skeptics are retreating to more and more specific discrepancies. Comparing warming trends of the troposphere and surface – with emphasis on the tropics, rather than arguing for no warming at all.

        So forgive me if I am rather cautious about concluding the tropospheric hotspot is missing.

        Second it’s not clear what effect a lack of a tropospheric hotspot would have on AGW. It isn’t obviously related to climate sensitivity. Some of the models with the largest tropical tropospheric hotspot have the lowest climate sensitivities.

      • lolwot,
        The skeptics are being vindicated: The AGW theory is not holding up to the predictions made about sensitivity.
        But the believer response is, to borrow a phrase, that AGW is ‘false but true’.
        It is not, as you imply, that one or the other model of AGW is true. It is that they are demonstrating no predictive ability: None of them are useful.
        But the AGW community clings to things like the sky dragon to distract themselves from that.
        But you are forgiven.

  35. Judith,

    Climate science has generated a great deal of confusion for the public by manipulation of the issues.
    CO2 itself has been tested in countless ways by itself.
    Yet in much of the literature, BTUs and CO2 are lumped together, as if CO2 alone were generating man’s problem of overheating the planet.
    Is that ethical science?

  36. I have trouble with these highly theoretical considerations. If we are to test climate models, then, surely, in the end we must rely on measurements of global mean temperatures (GMT), assuming that these have a meaning. We need to detect a CO2 signal against a background of natural noise. This is, surely, a classic signal to noise ratio problem.

    The signal we are trying to detect is, according to CAGW, around +10 C per century, or 0.1 C per year. Again, I must assume that measurements of GMT can be made with an accuracy of 0.01 C. So each day this year should, on average, be 0.1 C warmer than every day last year.

    One of the classic problems with noise is that the more frequently we can make the measurements, the greater the noise. Prior to satellites, we got data on a monthly basis. Now thanks to the UAH at
    http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps+002
    we have daily measurements. If we look at the data for August 2011, we find that on the 6th the difference between 2010 and 2011 was 0.09 C. By 16 August the difference was 0.54 C. This is the wrong way round for CAGW, since 2011 is cooler than 2010.

    How on earth can we ever hope to detect, in any reasonable length of time, a signal of 0.1 C per year, when we have this level of noise? Particularly as, so far as I can make out, no-one has any idea what is causing the noise.

    In point of fact, as Girma points out, the data indicates that far from GMTs rising at 0.1 C per year, they are at best standing still, or maybe even falling.
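    The detection question can be made concrete. Here is a rough sketch, under the (generous) assumption that the noise is white, i.e. uncorrelated from sample to sample; the ordinary least-squares standard error of a fitted trend then tells you how long a record must be before a given trend stands out. All the numbers are illustrative:

```python
import numpy as np

def years_to_detect(trend_per_yr, noise_sd, samples_per_yr=12, z=2.0):
    """Smallest record length (years) at which an OLS-estimated trend
    exceeds z standard errors, assuming white (uncorrelated) noise."""
    for T in range(1, 101):
        n = T * samples_per_yr
        t = np.arange(n) / samples_per_yr        # sample times in years
        sxx = np.sum((t - t.mean()) ** 2)
        se_slope = noise_sd / np.sqrt(sxx)       # std. error of the fitted slope
        if trend_per_yr / se_slope >= z:
            return T
    return None
```

    On these assumptions, a 0.1 C/yr trend against 0.2 C of monthly noise clears two standard errors within a few years. Real climate noise is strongly autocorrelated (fewer effectively independent samples), which is why detection in practice is reckoned in decades rather than years; sampling more frequently adds little information, mostly just correlated noise.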

    • Under the influence of the proper pharmaceuticals, you can hear cautionary and warning voices in the noise.

    • How on earth can we ever hope to detect, in any reasonable length of time, a signal of 0.1 C per year, when we have this level of noise?

      What’s a reasonable length of time?

      In point of fact, as Girma points out, the data indicates that far from GMTs rising at 0.1 C per year,

      What is the length of measurement period you used to make that assessment? Is it reasonable?

      • Joshua writes “What’s a reasonable length of time?”

        I define this in political terms. We need to set budgets to combat CAGW in the reasonably short term; say the next 5 years. So in order to justify putting taxpayer money into mitigating the effects of CAGW, we need to detect a CO2 signal within the next year or so. If we cannot, then it may be much more difficult for politicians to budget for CAGW.

      • Actually, Jim – I’m more interested in your answer of the two questions in relationship to each other.

    • “The signal we are trying to detect, is, according to CAGW, around +10 C per century, or 0.1 C per year.”

      Uh…what? So if man causes the globe to warm 9C in 100 years that isn’t AGW?

      • lolwat writes “Uh…what? So if man causes the globe to warm 9C in 100 years that isn’t AGW?”

        I have absolutely no idea what you are talking about. What I wrote is what I meant. The proponents of CAGW claim that GMTs will rise by up to 9 C by 2100. This is a rate of rise of about 10 C per century.

      • “The proponents of CAGW claim that GMTs will rise by up to 9 C by 2100.”

        Which IPCC scenario shows that?

    • Jim Cripwell

      You seem a practical person, so I have to ask, are you more interested in CAGW, or CCC (catastrophic climate change)?

      As Tomas and Chief have explained, CCC is inevitable on a long enough scale of time. It absolutely has to happen, it isn’t in itself certain to be predictable, and it isn’t in all cases avoidable.

      Whether it’s a 9C/century temperature rise, or fall, or galloping glaciation or desertification of substantial portions of currently fertile land or sea, some form of CCC is certain. I don’t say this to be alarming; as far as I know the onset of the next CCC is just as likely to be 500,000 years from now as tomorrow, so even a glass-half-empty thinker would expect 250,000 years of non-CCC likely. (And we have no way of knowing the distribution function.)

      I do say it to ask, if CACC is distinct from CCC, and CAGW has a demonstrable risk, by how much are you comfortable seeing that 250,000 year figure reduced?

      If CAGW reduced it to 25,000 years, I’d still be pretty comfortable. That’s a tenfold increase in Risk.

      What’s your Risk tolerance for CCC?

      I mean, I agree with what it appears you’re suggesting, that the probability of CAGW at the 9C/century level is low.

      Suppose we take 0.05C/decade as the ‘Natural Variability’ rate of CCC.

      That’d certainly be a catastrophe in 1800 years, if the trend continued.

      Perhaps a 50% chance of certain catastrophe in 900 years?

      How much increase in the Risk are you willing to tolerate by multiplying the chances of natural CCC by the chances of man-made CCC?

      How willing are you to impose that increased Risk on all the other people in the world, whether they consent to it or not?

      • Bart R

        The “what if” and “how willing” arguments are both straw men.

        Max

      • Max Manacker

        Indeed, they would be straw men, were I saying they were Jim Cripwell’s argument (clearly, they aren’t), and then were I to try to knock them down, which clearly I’m not.

        I’ve proposed that the question Jim Cripwell poses is uninteresting to me, and asked a different one.

        It’s clear Jim was only talking about a world where only CAGW +/- matters to climate, and all other things are held independent or unimportant. Which is all well and good, but a bit dull and oversimplified for my tastes or anyone’s uses.

        In Jim’s made up world, his logic is pretty much unarguable, and I see no reason to discuss that.

        I’m asking about my made up world, and have no intention of applying anything concluded from cases in my world to cases in his.

  37. Norm Kalmanovitch

    According to the latest offering from Trenberth and Kiehl, the back radiation is 333 Watts/m^2, representing the so-called greenhouse effect from the atmosphere. Since over 90% of this is from clouds and water vapour, by default only 33.3 Watts/m^2 can be from CO2.
    About a quarter of this resulted from a doubling of CO2 from 20 ppmv to 40 ppmv, which is roughly 8.3 Watts/m^2. If this were to double from the current 390 ppmv concentration to 780 ppmv, the additional “forcing” would be well under one Watt/m^2.
    According to the input for CO2 increases into the climate models of 5.35ln(2) for a doubling of CO2, a doubling from 20 to 40 ppmv should produce the same forcing as a doubling from 390 to 780 ppmv; and this is simply not the case.
    This parameter is completely without physical basis, resulting in model output that also has no physical basis. No amount of discussion about the philosophy of science can change the fact that the input forcing parameter of the climate models is nothing more than a contrived fabrication, or the fact that the output from models using this input is also a contrived fabrication.

    • A doubling of CO2 provides a forcing of about 3.7 W/m^2, not less than 1 W/m^2. The formula 5.35ln(CO2_change) only applies near current levels of CO2. I don’t see any documentation saying it can be applied to a change of 20 ppmv to 40 ppmv, and I suspect it can’t be applied to concentrations that low.

      The problem is that your statements like “since over 90% of this is from clouds and water vapour” and “About a quarter of this resulted from a doubling of CO2 from 20 ppmv to 40 ppmv” are information that must have come from the same models that show 3.7 W/m^2 per doubling. There is nowhere else those statements could be derived from.

      In which case, logically, models that show 3.7 W/m^2 forcing per doubling of CO2 cannot be claimed to support less than 1 W/m^2 forcing per doubling of CO2.
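      For reference, the simplified expression both comments invoke is the logarithmic fit ΔF = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998). By construction of the logarithm it assigns the same forcing to every doubling, so extrapolating it back to 20–40 ppmv (outside the concentration range the fit was derived for) cannot by itself show anything about its validity. A quick numerical check:

```python
import math

def co2_forcing(c_new_ppmv, c_old_ppmv):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit).
    The fit is only claimed valid near present-day concentrations."""
    return 5.35 * math.log(c_new_ppmv / c_old_ppmv)

modern = co2_forcing(780, 390)   # a doubling from today's level, ~3.71 W/m^2
ancient = co2_forcing(40, 20)    # identical by construction of the log form
```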

      • lolwot

        You missed the key part of Norm’s statement and got into debating figures:

        No amount of discussion about the philosophy of science can change the fact that the input forcing parameter of the climate models is nothing more than a contrived fabrication or the fact that the output from models using this input are also contrived fabrications.

        Max

  38. Joachim Seifert

    Severe testing? Why not take the test of time?
    Forecasts have to be (1) specific, at least in the easy part, the first two decades; (2) forecasts are then set on “hold” for a decade; and (3) at the end of each decade, validation studies are made of whether the forecast materialized, i.e. whether it corresponds to the measurements a decade later.
    Very simple: we then keep the grain and throw out the shells….. just
    have to wait one decade to know….. Now it’s time to assess IPCC-AR3
    from 2001…. and we know now.

  39. Falsifiability is at the heart of my skepticism of GCMs. They certainly have not been put through the test of falsifiability yet. Doing hindcasts with the model isn’t a proper test – you already have the answer in hand when you design the model, so there’s an element of cheating there. GCMs ARE falsifiable in principle, but you have to wait decades for the results. Since decades (plural) haven’t passed yet, we have no completed tests. And since we’re told that we have to ‘do something’ before the ultimate GCM forecasts are ever reached, in that sense they are not falsifiable. And thus they are not scientific. So how can there be a scientific consensus about a non-scientific hypothesis?

    • GCMs are NOT falsifiable, and that is their problem. They are based upon historical trends (the temperature record), i.e. non-reproducible data. Both the prediction AND the data have to be replicable for scientific models to have any value.

      If a GCM “fails” you will have no understanding why, because the data is linear, non-repeating. And you will never be able to “run” the model again to determine why it failed.

      It really is as simple as that. Popper states it over and over again.

  40. Joachim Seifert

    Reply to MarkB: Why wait for several decades? One is enough, since
    the very first decade is the easiest to predict, and models are
    done with (1) utmost care plus (2) use of vaunted computer technology
    which can take us to the moon….. One decade should be the measure;
    models which do not pass are not correct anyway…..
    Before reaching credibility, the model has to be put on ice for a
    decade and then scrutinized.
    Whoever tells me about the GMT long after my death in 2090-2100,
    should prove that he was correct 2000-2010 or 2010-2020, waiting
    another 9 years……

  41. Why must models be tuned?

  42. Willis Eschenbach

    Fred Moolten | August 18, 2011 at 10:43 pm | Reply

    … There is no question that reliability is an issue that must be addressed to make the climate models more useful for policy decisions. However, the analogy, in my view, overlooks an important difference between the situations in which the two types of models are employed. In the case of an aircraft, we are not forced to fly it if we are unsure of its behavior. In the case of climate change, however, we face a forced choice, because engaging in major actions or refraining from those actions both have potentially enormous consequences.

    Fred, that makes no sense. It would make sense if we knew the “major consequences” from “engaging in major actions”, or from “refraining from those actions”.

    But we have no such knowledge. We don’t know what will happen if we do nothing, and we also don’t know what will happen if we do something.

    If we do nothing, the possible outcomes range from beneficial to catastrophic. And if we do something, the possible outcomes also range from beneficial to catastrophic.

    As a result, I fail to see how a crappy, untested model prediction, which represents only the modeler’s fantasy of what will happen in fifty years, helps with that decision.

    Really, that has to be the height of misunderstanding of computer models. People don’t have enough information to make a decision … and your solution is for those people to make a really bad model, don’t test it, and then use it because you say we have to, because of your claimed “forced choice”?

    Do you really think a computer model made by those same people, people without enough information to make a decision, will help the people who made the model make a decision? That’s the computer bootstrap fantasy, and I’m sorry to report, a computer can’t do any better than the people who designed it … it can only make their mistakes really, really fast.

    We simply don’t have the information necessary to take informed action … and in that situation, replacing human judgement with a really poor computer model made by those same uninformed humans seems unbearably stupid to me.

    In other words, I take the opposite side of the question. I think that in situations like this we need more reliability from our models, not less reliability as you claim. The idea that our models can save us from our own stupidity is a non-starter on my planet.

    Because if there’s one thing I’ve learned in almost fifty years of computing, it’s that the computer is dumber than the programmer. If you want to take direction for your actions from a computer, Fred, that’s up to you.

    Me, I’ll trust my judgement until someone shows me that a given computer model is better at something than I am … and given the current crop of modelers and their extreme reluctance to give even the most trivial test to their little snowflake models, that might be a while.

    Severe tests? How about we start with ANY tests and work up from there, the majority of models have never taken a test in their lives. These days the climate modelers run from any kind of real test like vampires running from garlic, or like cockroaches running from the light … good luck with the “severe” tests, we won’t even see trivial tests for a while yet.

    w.

    PS – There is no “forced choice”; you’ve been suckered by those who want you to be very, very scared. The existence of a possible, unproven future catastrophe doesn’t force a choice on us as you claim, because if so, we face a “forced choice” about meteor strikes, runaway black holes, the moon crashing into the earth, and the return of the dinosaurs. Hey, they’re all possible, and you’d have to admit that the return of the dinosaurs would have “potentially enormous consequences” …

    Fred, the mere fact that you and others choose to hyperventilate over some alarmist’s wet dream of a hypothesized catastrophe doesn’t mean that we have to join in with your mental onanism. Your claims don’t force a choice on anyone … unless “ignore the monster raving loony party” counts as a choice.

    • Fred, that makes no sense.

      Fred – in light of our discussion yesterday, I’ll ask you to consider whether opening a comment with a statement like that could, in any way, be an effective method for a fruitful sharing of ideas?

      Now maybe, as you spoke about, Willis is addressing bystanders more than he’s really addressing you, and so he’s hoping to impress upon people that your ideas can so easily be dismissed by virtue of a categorical dismissal from the authoritative voice of Willis Eschenbach – but I would suggest that anyone who might be so persuaded already has their mind made up. Personally, when I read such rhetoric, it works to the disadvantage of whoever used it, whether it be you or Willis or anyone else at Climate etc.

      • Willis Eschenbach

        Joshua, I see you disapprove of what I wrote. This is always a good sign in my world. The more upset you get, the more I know I’m on the right course.

        Thanks for serving as my lodestar,

        w.

        PS – Fred made a claim that seems senseless to me. I said that, and I have said exactly why. I didn’t give a “categorical dismissal from the authoritative voice” as you fatuously claim. I gave a series of reasoned arguments as to why Fred’s ideas were not sensible.

        You could have actually dealt with the issues discussed and the arguments raised, Joshua. You could have spoken about whether inchoate fears are enough of an issue to make us have to face a “forced choice” as Fred claims.

        But I understand that you always have to be careful, and I’m sympathetic with your condition: with your allergy to facts, you’re always at risk of anaphylactic shock if you get too close to a real discussion. So instead of discussing the issues, you’ve focused on my tone … and like I said, that’s good news; it ups the odds that my facts are correct.

      • Willis – I’m sorry to disappoint you, but I’m not the least bit upset.

        Every time you interject your personally demeaning remarks into one of your posts, it helps to underscore my points that: (1) interjecting personally demeaning remarks diminishes rhetorical power and, (2) tribalism is very apparent among many “skeptics” who seek a high profile in the blogosphere.

      • So instead of discussing the issues, you’ve focused on my tone

        Willis – I discuss a variety of issues here at Climate etc. And one of the issues that I discuss is how tone, on a number of levels, overlaps with the scientific debate.

        The tone of your opening sentence well illustrated a very similar point that I made to Fred about the tone of some of his posts here. I thought that since your post was directed to him, it would be a particularly good example.

        As I see it, introducing a post with “Your comment makes no sense” only marks the opinion of someone who is willing to subordinate a goal of reasoned debate to other goals – such as a futile attempt to prove some political point, to establish a false sense of superiority, etc.

        As to my impression of your “discussion of the issues” that followed your opening sentence, I see nothing other than rhetoric that is consistent with my interpretation of your opening sentence. You state unfalsifiable claims as fact (for example, that Fred has been “suckered”). IMO – that is not a discussion, but a monologue that cannot serve to advance a goal of shared insight.

        Even a slight modification of your opening sentence, such as “In my opinion, the ideas you expressed make no sense,” would demonstrate an openness to a meaningful exchange of ideas – from the perspective that your mind is not already completely made up and that you are willing to engage those with a different viewpoint for the purpose of shared edification.

        But your opening sentence, as it stands, reads as the words of someone who confuses their own opinion with fact. As such, Willis, although I assume that at some deep level you are interested in your own intellectual development and that of others, at this point your method of communicating in the climate debate does little more than demonstrate that your perspective is deeply, tribalistically entrenched.

    • Your claims don’t force a choice on anyone … unless “ignore the monster raving loony party” counts as a choice.

      Irony (unintentional irony, Chief) runs amok at Climate etc. yet again.

    • Willis,

      Policy choices are made all the time. They are always based on some arguments, but factual knowledge of the consequences is often very much lacking. Not acting at all, when somebody has proposed otherwise, is also a policy choice. As far as I can see, Fred’s formulation of “forced choice” is nothing more than a statement of this obvious and inescapable fact.

      You may argue that the consequences of the alternatives are very poorly known, and I do agree basically on that. That’s, however, not the only logical view, and there’s nothing illogical or exceptional in having differing views. While I agree, that the consequences of alternative policies are very poorly known, I don’t agree that nothing should be done. There are policy alternatives that are not excessively costly, but do still improve our future possibilities to react, if and when the improved knowledge on the risks of the climate change and on the consequences of alternative policies make more drastic policy decisions justified.

      No logic can tell whether an alarmist or a skeptic is more right on policies. That all depends on the total evidence available, including the reliable knowledge as well as the much less certain estimates of risks and consequences. It’s the responsibility of the political system to decide based on all of that.

      • Willis Eschenbach

        Pekka Pirilä | August 20, 2011 at 2:51 pm

        Willis,

        Policy choices are made all the time. They are always based on some arguments, but factual knowledge of the consequences is often very much lacking. Not acting at all, when somebody has proposed otherwise, is also a policy choice. As far as I can see, Fred’s formulation of “forced choice” is nothing more than a statement of this obvious and inescapable fact.

        Thanks, Pekka. If that is the meaning, it is so trivial as to be meaningless. Yes, choosing not to fortify the world against a threatened invasion by UFOs in the year 2050 is a policy choice … but so what? How is that meaningful, how does that help us? However, I don’t think that was Fred’s meaning. Here are his words again:

        In the case of an aircraft, we are not forced to fly it if we are unsure of its behavior. In the case of climate change, however, we face a forced choice, because engaging in major actions or refraining from those actions both have potentially enormous consequences.

        So he’s not just saying that not doing anything about his fantasized Thermageddon is a policy choice. He’s saying that delaying action will have “potentially enormous consequences” … but he has no more evidence of that than he has of “potentially enormous consequences” from choosing not to fortify the world against a UFO attack. Both UFOs and a couple of degrees warming do have the “potential” to be costly, and UFOs are the bigger danger; potentially they could destroy the whole planet and not just warm it a couple of degrees … but again, so what? Making a decision based on the undeniable fact that a UFO attack could have “potentially enormous consequences” makes no more sense than Fred’s plan.

        Not only does Fred not have evidence, but what evidence there is points the other way. The planet has warmed a couple of degrees in the last few centuries, and a half degree in the most recent century, and the warming has generally been beneficial, not a Thermageddon of any kind.

        So while it is true as you say that in politics we often make decisions on partial information, he’s urging us to spend billions based on … well … approximately zero.

        That’s my problem with his claim, and why I say it doesn’t make sense. I discuss this kind of thinking further in my post “Climate, Caution, and Precaution“.

        w.

      • Pekka

        You may not like his conclusions, but Willis’ logic is impeccable.

        Max

      • Willis certainly has a valid approach to wait for bad things to happen before preparing for anything. It is cheap for sure. What could possibly go wrong? We wouldn’t blame it on the politicians that took that approach, would we?

      • Willis Eschenbach

        Jim D | August 21, 2011 at 10:37 pm | Reply

        Willis certainly has a valid approach to wait for bad things to happen before preparing for anything. It is cheap for sure. What could possibly go wrong? We wouldn’t blame it on the politicians that took that approach, would we?

        See, Jim, this is why I ask people to quote my words. I said nothing about it being a “valid approach to wait for bad things to happen”. Nothing like that at all. That is your fantasy, Jim, because it is not my words and not my thoughts and not my belief. I have long advocated a “no-regrets approach”, your claim that I advocate otherwise is a straw man.

        w.

      • You may not like his conclusions, but Willis’ logic is impeccable.

        Really?

        The logic of “[Your post] makes no sense,” and “you’ve been suckered by those who want you to be very, very scared,” is impeccable?

        That is the logic of someone who confuses his opinion with fact – an inherently illogical premise. Now if you accept that illogical premise – as I assume you do – then I suppose one could come up with a determination that his conclusions are logically supported (although I doubt it; my guess is that his post is riddled with confusions between opinion and fact, but I’m not going to bother reading it to find out), but in my book, if you begin with an illogical premise then your conclusions will not be logical.

      • Willis Eschenbach

        Joshua | August 22, 2011 at 8:06 am | Reply

        You may not like his conclusions, but Willis’ logic is impeccable.

        Really?

        The logic of “[Your post] makes no sense,” and “you’ve been suckered by those who want you to be very, very scared,” is impeccable?

        Joshua, your habit of taking quotations out of context is not serving you well.

        I started out by saying that the logic of his post made no sense. And if I had stopped there (as your truncated quote implies), your protest would have been meaningful. But I didn’t stop there. I went on to say exactly why it doesn’t make sense:

        In other words, I made a claim, and I substantiated it. There is no logic in his claim that, because he has a fear of Thermageddon, not deciding on it today will have large consequences.

        I don’t know if you’ve noticed, Joshua, but lack of logic is a valid objection to a claim. If it’s illogical, I don’t know any way to say it other than to say it lacks logic, or it doesn’t make sense, or something like that.

        Nor is your other quote any better. Here’s my full quote:

        Now certainly I could have put that nicer, Joshua. However, here’s the difference between you and me:

        You seem to be passionate about using the right words, and about utilizing the good communications methods, and about not saying anything that might upset anybody.

        I, on the other hand, am passionate about good science, and I don’t care too much about your approved communications methods if I can lobby effectively for that.

        To date, I’m satisfied with my results. I haven’t taken this path by chance. I tried being nice and asking politely, and I got blown off and abused for my troubles …

        It was only when we skeptical type folks stood up and howled that anything has happened, Joshua … and now your advice is that I should sit down and use my indoor voice?

        Not gonna happen, my friend, been there, tried that … I’ll continue to tell the truth as I see it.

        w.

        PS – as to whether friend Fred has been “suckered”: Fred shouldn’t feel bad; most people were. It started with Jimmy Hansen turning off the air conditioner and opening the windows before his Senate testimony in 1988, went on to Michael Mann hiding his adverse results and lying about his R^2, escalated with things like the Jesus Paper, led to the un-indicted co-conspirators destroying evidence, and it has continued ever since with a string of unabashedly alarmist “scientific” papers. There’s no shame in being suckered by professionals who set out to deceive.

      • Willis Eschenbach

        Well, the “full quote” got messed up. Here it is:

        There is no “forced choice”; you’ve been suckered by those who want you to be very, very scared. The existence of a possible, unproven future catastrophe doesn’t force a choice on us as you claim, because if so, we face a “forced choice” about meteor strikes, runaway black holes, the moon crashing into the earth, and the return of the dinosaurs. Hey, they’re all possible, and you’d have to admit that the return of the dinosaurs would have “potentially enormous consequences” …

        Fred, the mere fact that you and others choose to hyperventilate over some alarmist’s wet dream of a hypothesized catastrophe doesn’t mean that we have to join in with your mental onanism. Your claims don’t force a choice on anyone … unless “ignore the monster raving loony party” counts as a choice.

        That quotation should have been located where it says “here’s my full quote” above.

        w.

    • Willis wrote: Fred, the mere fact that you and others choose to hyperventilate over some alarmist’s wet dream of a hypothesized catastrophe doesn’t mean that we have to join in with your mental onanism. Your claims don’t force a choice on anyone … unless “ignore the monster raving loony party” counts as a choice.

      You should be ashamed of that.

      • Willis Eschenbach

        Actually, MattStat, I am ashamed, I didn’t give a link to the Monster Raving Loony Party’s official website. I always ask others to cite their claims, and I’m embarrassed that I didn’t cite mine.

        w.

      • Willis, the website that you linked is Judith Curry’s website. Is that a mistake?

      • Willis Eschenbach

        My bad, MattStat, the MRLP’s website is here.

  43. Willis wrote:
    If we do nothing, the possible outcomes range from beneficial to catastrophic. And if we do something, the possible outcomes range from beneficial to catastrophic.

    As a result, I fail to see how a crappy, untested model prediction, which represents only the modeler’s fantasy of what will happen in fifty years, helps with that decision.

    Well said.

    Judith asked: Should we assess climate model predictions in light of severe tests?

    Yes. Popper was monomaniacal about testing, but scientists have always tested everything possible before making things or otherwise depending on their theories. The production processes for the microprocessors in your computers were tested much more than the climate models have been tested, to select just one of millions of examples from technology development.

    I propose that every published prediction be stored exactly as it was made, that we keep these predictions in a protected but publicly viewable virtual vault, and that reviews of the accuracies of all models be published periodically. Right now this is done haphazardly, afaik (I hope to be corrected), but I think I perceive a trend toward a more systematic approach. After perhaps 3 more decades we’ll have a model that deserves credence.
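
    The “prediction vault” idea above can be sketched in a few lines. Everything here (the class, field names, and the sample numbers) is hypothetical illustration, not an existing system: each prediction is frozen at deposit time with a content hash so it is tamper-evident, and a later review scores it against whatever observations have since become available.

```python
import hashlib
import json

# Hypothetical sketch of a "prediction vault": predictions are frozen
# at deposit time with a content hash, then scored against observations
# once the target date has passed. All names and numbers are illustrative.

class PredictionVault:
    def __init__(self):
        self._records = []

    def deposit(self, model, variable, target_year, predicted, units):
        record = {
            "model": model,
            "variable": variable,
            "target_year": target_year,
            "predicted": predicted,
            "units": units,
        }
        # Hash the canonical (sorted-key) JSON so any later alteration
        # of the stored record would be detectable.
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append({"record": record, "sha256": digest})
        return digest

    def review(self, observations):
        """Score stored predictions against observed values.

        `observations` maps (variable, target_year) -> observed value.
        Returns (model, prediction error) pairs for predictions whose
        target data are now available.
        """
        results = []
        for entry in self._records:
            rec = entry["record"]
            key = (rec["variable"], rec["target_year"])
            if key in observations:
                results.append((rec["model"], rec["predicted"] - observations[key]))
        return results

# Illustrative use with made-up numbers: two deposited predictions,
# reviewed once the 2016 observation is in hand.
vault = PredictionVault()
vault.deposit("model_A", "gmst_anomaly", 2016, 0.9, "degC")
vault.deposit("model_B", "gmst_anomaly", 2016, 0.6, "degC")
scores = vault.review({("gmst_anomaly", 2016): 0.8})
```

    The periodic accuracy reviews proposed above would then just be the `review` step re-run as new observations arrive, with the hashes guaranteeing the predictions are “exactly as made”.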

    Getting a public policy out of present knowledge, well summarized by Willis, is a mess. We sort of have to be preparing for a variety of mutually exclusive (and some possibly mutually reinforcing) catastrophes. It’s like trying to steer in all directions at once, or steering a chaotic loop.

    • You say:
      I propose that every published prediction be stored exactly as it was made, and that we keep these predictions in a protected but publicly viewable virtual vault, and that reviews of the accuracies of all models be published periodically.

      Here is my prediction. All the temperatures during the past ten thousand years have been stable within plus or minus two degrees C. Most of the temperatures have been stable within plus or minus one degree C. My prediction is that for the next ten thousand years, temperature will be stable within the same plus or minus 2 degrees C.

      A manmade fraction of a trace gas, CO2, cannot possibly have changed this extremely stable system.

  44. Joachim Seifert

    Reply to David L. Hagen, on the credibility of climate predictions and the demand for severe model testing:
    The bottom line is: bad models have bad empirical results – who denies that? And good models produce good empirical results. Which, then, is good? For example mine, in the booklet: Joachim Seifert, “Das Ende der globalen Erwaermung – Berechnung des Klimawandels”, ISBN 978-3-86805-604-4 (2010). It will withstand the severest testing perfectly; the proposed model even achieves precise calculation of glacial events such as Dansgaard-Oeschger events, as well as temperature swings of the last interglacial, 120 kY BP. No other model is capable of this performance. The proposed calculations are transparent to all of you, reproducible, and without any simulations.
    All this moaning about climate sensitivity Lambda is a waste of time; we need solutions as proposed instead of “know-it-all remarks”…..
    The book author ……….JSei.

    • Joachim Seifert

      I haven’t had anything but second-hand reports of your paper and calculations.

      Please correct if I misapprehend.

      Weather is determined strictly by the distance of the planet from the sun, due to its elliptical orbit, in 400 year cycles?

      And your predictions are reliable at a level of statistical significance?

      You forecast the weather by the zodiac?

      • Joachim Seifert

        Bart R
        Weather is the weather, and the weatherman predicts it, not me. Whether he uses the Zodiac, that’s his business…. Climate is different: there is indeed a new astronomical cycle, until now overlooked due to the AGW hype… half a cycle, as you write, 395 years (790 years complete). Predictions are utmost reliable, exact for the past 50,000 years of paleoclimate. Based on this, a future 10,000 years can easily be forecast, down to decadal time series…. No problem at all……
        JSei.

      • Joachim Seifert

        Thank you for providing additional clarification.

        Please provide more information on this idea, if you are willing.

        I am interested to learn more of such astrological climatology studies, which I expect are not very different from Scafetta’s zodiacal concepts.

        Are your methods purely based on curve-fitting, or do you also have a detailed mechanism proposed that works within the full body of knowledge of physics?

      • Joachim Seifert

        Bart R:
        I have nothing to do with astrology, Zodiacs, or curve fitting that matches the past 150 years to a Warmist AGW concept, or even with Scafetta’s concept of a statistical analysis of 150 years from which conclusions about a climate concept are then drawn (0.66 C of AGW/100 years).
        My model’s straightforward calculations, transparent to all, analyze the Earth’s orbit, which shows so-called libration movements of the planet; see google: , an animated picture of the lunar libration is there to give you an impression of the subject. The planet Earth makes the same libration movement, whose sideward movements towards and away from the Sun lengthen or shorten the Sun-Earth distance in 790-year astronomic cycles….. This is proved by over 50,000 years of paleoclimate. Any climate model has to work in paleoclimatic times as well, not only over the past 150 years as AGW (from 1850 on, since the industrial revolution) claims. Climate is climate; a good model has to work in our times as well as in paleotimes……..

        Yes, the core analysis is a detailed mechanism working within astronomical physics……
        A statistical analysis and empirical values are matched AFTER the model is completely described, and are only used to verify the model. Booklet only about 15 $US and ISBN 978-3-86805-604-4.
        JSei.

      • JSei

        A very fair and courteous reply; it sounds like a scientific discussion I wish I could hear more about.

        My first question would be how you overcome the poor resolution of the paleo record?

  45. Willis Eschenbach

    Joshua | August 20, 2011 at 3:00 pm | Reply

    So instead of discussing the issues, you’ve focused on my tone

    Willis – I discuss a variety of issues here at Climate etc. And one of the issues that I discuss is how tone, on a number of levels, overlaps with the scientific debate.

    The tone of your opening sentence well illustrated a very similar point that I made to Fred about the tone of some of his posts here. I thought that since your post was directed to him, it would be a particularly good example.

    As I see it, introducing a post with “Your comment make no sense” only marks the opinion of someone who is willing to subordinate a goal of reasoned debate to other goals – such as a futile attempt to prove some political point, to establish a false sense of superiority, etc.

    Joshua, as I see it, Fred’s comment that we are forced to make a choice made no sense. In other words, I saw no logical connection between his “if-then” clauses. Should I have said it does make sense?

    I went on to show why it doesn’t make sense—because there is no logical connection between some fancied future catastrophe and having to do something about it now. There are a host of possible future catastrophes out there, I can provide you with dozens. By coincidence the only one that Fred says forces a choice is Fred’s pet catastrophe, Thermageddon™.

    Perhaps you can find the logic in that, and if so, explain it to me. Until then, I will continue to say that Fred’s claim makes no sense. Why? Because of the lack of logic in his claim, which is that we are forced to make a choice because Fred sees future danger.

    Fred is trying to get everyone alarmed. He thinks that we face a “forced choice”. Based on that, he thinks we should spend billions of dollars on some line of attack on a problem no one has shown to exist.

    So … should I be all polite and tranquil about that? As far as I’m concerned, he is trying to raise specious alarms in order to get people to agree with him. He’s talking about picking my pocket to assuage his inchoate fears.

    And no, I generally am not polite when people are trying to remake the planetary economy and insisting that we spend billions on the process of wrecking it. I view that as a direct attack on the poor people of the world, and it is no laughing matter for me. So I’m sorry I can’t be all sweetness and light about it, Joshua. I don’t view this as a theoretical discussion. I see folks on your side doing everything they can to trash the economy, and make energy much more expensive for the poor, in the pursuit of something that even the advocates say will make no measurable difference.

    As a result of folks like Fred and you, Joshua, the EPA is engaged in a process that will cost the US billions and billions of dollars, and by their own estimate will cool the planet by a whopping 0.03 degrees by 2030 … and you think I should be calm and cool about that and take the right tone? What is the right tone when someone is spending billions to cool the US by 0.03°C?

    So I will fight with all I have against that kind of nonsense, Joshua. You may think that paying billions for a cooling of 0.03°C in thirty years is a brilliant plan, and that we should talk about it in a collegiate fashion. I see it as organized crime which inter alia is crippling our economic rebuilding.

    As a result, I don’t see you guys as my collegiate associates, Joshua. I see you as thieves stealing from the people of the planet, with no comprehension of the damage you are doing. Be nice to you? I will when I can, but I am opposed to you and I will do what I can to stop you in your mad quest to spend us into the dust.

    So you are welcome to invite others to be nice to you while you are destroying the economy, Joshua, but trying it with me won’t work. I’m not in the habit of making nice with people who are trying to steal my money and drive energy costs up for the poor.

    And if it looks like your proposals don’t make sense, that’s exactly what I’ll say. I don’t see that as a bad thing. I didn’t say Fred’s proposal was stupidity of the highest order, although I do think that, because that’s over the top.

    I thought about saying something like that, and I decided to just go with a simple statement of fact, which was that Fred’s claim made no sense. I listed the reasons it makes no sense, and did my best to explain why it makes no sense.

    Your response is to ignore the underlying questions (do fears of a possible catastrophe in a hundred years really “force” us to make a decision today), and to subordinate a goal of reasoned debate about future fears and forced decisions to other goals – such as a futile attempt to prove some political point, to establish a false sense of superiority, to bust me for having the wrong tone, etc.

    w.

    • I went on to show why it doesn’t make sense—because there is no logical connection between some fancied future catastrophe and having to do something about it now. There are a host of possible future catastrophes out there, I can provide you with dozens. By coincidence the only one that Fred says forces a choice is Fred’s pet catastrophe, Thermageddon™.

      You ought to have used “possible” (for parallel structure) or “theoretical” (because it is based on a theory) in place of “fancied”.

      Except for that, the point is well enough expressed. Or maybe I just agree with it.

      The whole rest of this post by you dramatically undermines your point. The EPA has costly policies which may not have the diverse economic benefits that they claim for them, but it isn’t a criminal conspiracy.

      • It is when it dodges or breaks laws, and directs massive resources towards those who would otherwise not have a prayer of accessing or earning them.
        RICO was formulated to address this sort of thing. Unfortunately, the DoJ has been maneuvered onto the wrong side of the fence, by the simple expedient of appointing a deeply committed social justicizer as its head.

    • The EROI of the mitigation proposals is minute compared to that of the most obvious alternatives: do nothing, and/or prepare for major adaptation efforts in the event of significant warming or cooling by building up resources and relevant “applied science”.

      Actually, as you imply above, the “EROI” of massive mitigation is negative under all scenarios, since CO2 reduction is so ineffectual even by Warmist calculations. It’s quite insane to even consider it.

  46. Joshua

    It is a cop-out to concentrate on “tone” versus “logic”.

    Willis has presented a well-reasoned argument to counter the premise by Fred Moolten that we face a forced choice to make a climate policy decision.

    See if you can counter the logic, rather than concentrating on the tone.

    Max

  47. Regarding some of the above exchanges, which I have now seen after a few days’ absence, I have only a little to add to my original comments.

    First, the forced choice results from the long atmospheric residence time of CO2 concentrations above equilibrium levels. In other words, “inaction” in the sense of no attempts to address CO2 emissions is a form of “action” in the sense of continuing those emissions and experiencing their long term effects. This contrasts with potential threats that are unaffected by a delay in decision making.

    Second, the magnitude of these effects of continued CO2 emissions is a legitimate topic, but I thoroughly disagree with Willis’s contention that there is no evidence bearing on this. As someone familiar with much of the evidence, I conclude that we have compelling reasons to believe that the probability of some adverse effects is great, and of very serious adverse effects is non-trivial. This is of course a topic of continuing discussion and disagreement here and elsewhere, but it is not the same as a claim for no evidence. It is also not the same as a claim – inaccurate in my view – that the evidence comes exclusively from the types of modeling (GCMs) that are often discussed as though they represented all forms of modeling used in climate analysis.

    My earlier point was that there is less information than ideal, but enough to start to make decisions as to which form of action we prefer – action to continue adding CO2 to the atmosphere or action to reduce the rate at which we do that. In my opinion, it would be a mistake not to make a start toward the latter goal, while continuing to gather information and improve our ability to interpret it via GCM type models and other modalities.

    • My apologies for some of the poor editing that left a few extra words in that should have been deleted. I think the meanings are clear though.

    • Fred Moolten

      You reiterate your position that we face a forced choice to make a climate policy decision regarding future CO2 emissions, but have failed to address Willis Eschenbach’s reasoning that this is all a contrived dilemma.

      Can you counter Willis’ logic?

      If so, why don’t you do so?

      If not, I can understand why you would prefer to remain silent.

      Max

  48. Here is the global mean temperature prediction of the IPCC compared with that of a skeptic.

    http://bit.ly/n1S1Jf

    The above graph is the extension of the following graph.

    http://bit.ly/njBdvW

  49. A very heated set of exchanges above; a bit late to this party, but here’s my two pence:

    I’m still very confused over the types of validation and testing performed on these models. I’d like to see a list of pass criteria with successes/failures. This also needs to be standardised across the models, ESPECIALLY those used for policy.

    In reality, the screening, testing and validation of these models is exceptionally straightforward; it bemuses me that so many here are finding this such a difficult subject. Additionally, it is quite clear that most (if not all) of the models would probably fail this testing procedure.

    On the specific topic, surely the harsher the test the better? We’re making very serious decisions based on these models, so we’d best be damned sure that they’re right, or we may end up doing more harm than good.

    Finally, I’ve seen the point raised above that the issue over the models is not due to the modellers/scientists themselves, but to the interpretations made of them by the policy makers. Well, I call BS on that. If someone is misrepresenting or grossly misunderstanding your work and this work is in turn being used further (for science, policy or whatever), then it is your duty to put them straight. There’s no defence for simply saying ‘it’s not me doing it’ while you sit back and bask in the extra attention.

    They’re just as, if not more, guilty than the policy makers.

  50. First, the forced choice results from the long atmospheric residence time of CO2 concentrations above equilibrium levels. In other words, “inaction” in the sense of no attempts to address CO2 emissions is a form of “action” in the sense of continuing those emissions and experiencing their long term effects. This contrasts with potential threats that are unaffected by a delay in decision making.

    My points are based on the global mean temperature (GMT) data shown below:

    http://bit.ly/qGcD9M

    The above result shows the GMT touches the upper GMT boundary line every 60 years, and it has done so three times in the last 160 years. The above result also shows that the GMT never crosses the upper boundary line for long.

    What does the above data show you? Do you see any change in the GMT pattern in the above graph when you consider the whole data, not just the recent warming?

    Fred, the “potential threat” is only inside the minds of AGW advocates. It does not exist in reality.

  51. Unlike a lot of people here, I don’t have a degree so maybe I see the concept of “severe test” differently, but a simple one would be this.

    Take your model and make a prediction for 5 years out. If you model arctic ice, then the ice extent in August 2016 would do. If you are out by more than 50,000 square kilometres either way, you personally are fired and your department is shut down with all funding removed. Similar tests and constraints for any other model.
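
    As a sketch only, the pass/fail criterion proposed above reduces to a simple tolerance check, decided in advance of the observation. The numbers below are hypothetical placeholders, not real sea-ice data; the point is only that such a test is binary and unambiguous.

```python
# Minimal sketch of the proposed severe test: a model's five-year-ahead
# prediction is compared with the eventual observation, and the model
# fails if the error exceeds the pre-stated tolerance. Values are
# illustrative placeholders, not real ice-extent data.

def severe_test(predicted, observed, tolerance):
    """Return True (pass) if |predicted - observed| <= tolerance."""
    return abs(predicted - observed) <= tolerance

# Example: a predicted August 2016 Arctic sea-ice extent of 5.10 million
# km^2 versus a hypothetical observation of 5.17 million km^2, using the
# 50,000 km^2 (0.05 million km^2) tolerance suggested in the comment.
# The 0.07 million km^2 error exceeds the tolerance, so the test fails.
passed = severe_test(predicted=5.10, observed=5.17, tolerance=0.05)
```

    The key design choice is that the tolerance is fixed when the prediction is published, so neither side can move the goalposts after the data arrive.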

    The bottom line is that if a modeller won’t bet his job on the outcome of his model over 5 years, then why on Earth should the planet bet its economic future and development on the model’s outcome over 100 years?

    This is the true difference between engineering and scientific modelling: if the engineer’s model fails, then people get the sack. There are real consequences for getting it wrong; this is not true with climate modelling.

  52. Willis Eschenbach

    Fred Moolten | August 21, 2011 at 11:24 pm | Reply

    … First, the forced choice results from the long atmospheric residence time of CO2 concentrations above equilibrium levels. In other words, “inaction” in the sense of no attempts to address CO2 emissions is a form of “action” in the sense of continuing those emissions and experiencing their long term effects. This contrasts with potential threats that are unaffected by a delay in decision making.

    Second, the magnitude of these effects of continued CO2 emissions is a legitimate topic, but I thoroughly disagree with Willis’s contention that there is no evidence bearing on this. As someone familiar with much of the evidence, I conclude that we have compelling reasons to believe that the probability of some adverse effects is great, and of very serious adverse effects is non-trivial. This is of course a topic of continuing discussion and disagreement here and elsewhere, but it is not the same as a claim for no evidence. It is also not the same as a claim – inaccurate in my view – that the evidence comes exclusively from the types of modeling (GCMs) that are often discussed as though they represented all forms of modeling used in climate analysis.

    Fred, thanks for your reply. I’m glad that you have “plenty of evidence” that a degree or two of warming will be other than beneficial. Please bring some out, since the historical evidence is on the other side—the warming since the Little Ice Age has been generally beneficial, near as I can tell.

    I’m also glad that you have evidence that increasing CO2 will lead to increasing warming … please bring some of that out as well, because the last 15 years or so haven’t panned out like the climate models said.

    Perhaps you also have evidence for the claimed acceleration in the rate of sea level rise that the AGW folks have been telling us for years is right around the corner … gonna be hard, though, since the rate at present is decreasing …

    Lastly, you say that “we have compelling reasons to believe that the probability of some adverse effects is great, and of very serious adverse effects is non-trivial.”

    I have yet to see any serious estimation of the “probability” of any of the claimed CO2 effects, so I’d like a citation to those probabilities as well. I doubt you’ll find them, AGW scientists are allergic to probabilities. Heck, we don’t do well with the “probabilities” involved in predicting next month’s weather, and yet you claim that there are serious estimations of the probabilities for the climate in 100 years? Citations, please, and while you are at it, read up on the Drake Equation.

    (N.B. — Computer model results are evidence … but they are only evidence of the beliefs of the programmers. Don’t bother citing any, they are not evidence about the real world in any sense of the word.)

    Finally, please don’t make Jim D.’s foolish mistake of thinking that I advocate inaction. Read my post on Climate, Caution and Precaution before stepping off that cliff. I advocate a “no-regrets” path which doesn’t depend on imagined probabilities of improbable future events.

    The “precautionary principle” is, to me, one of the worst-abused “principles” on the planet. It is used as an excuse to do something regarding every wild-eyed alarmist’s pet fear, whether it’s Thermageddon or UFO invasion. And every single one of these alarmists adduces your reasoning, that doing nothing exposes us to huge dangers, that we must act now, that the probability of a UFO attack is “non-trivial”, and all of the rest of your claims …

    Finally, Fred, you must understand why many people don’t believe the AGW claims. It is for three very good reasons:

    1. A large number of the predictions of the AGW alarmists haven’t come true, and

    2. A number of the leading lights of the AGW movement have been shown by their own words to be willing to lie, cheat, and steal to push their view of events.

    3. Despite trying for a quarter of a century, the AGW supporters haven’t even been able to falsify the null hypothesis, much less establish an alternative hypothesis …

    Taking these things together, any reasonable person would have to conclude that there is something seriously wrong in the AGW ranks … but you continue to believe the predictions and the “evidence” that these known reprobates bring forth to substantiate their wild claims. At least some caution regarding knee-jerk belief in their idea would certainly be prudent, given their past actions …

    It seems clear to the man on the street, and to me as well, that people who truly believe in what they are saying don’t lie about their results and refuse to archive their data. People on the street know that if you are hiding your data, or if you are telling people to erase emails, that there is a reason for that and it is not a pretty reason.

    That simple truth never seems to affect the AGW folks, though, they just go on as though their leaders were honest, decent, transparent scientists.

    Are all AGW supporters liars and crooks? Of course not … but almost every one of them (except our gracious host) refuses to speak out against liars and crooks. And all of them (including our gracious host) refuse to say even one bad word against any other AGW scientist. Oh, they’ll call Steve McIntyre or myself nasty names, no problem with that.

    But speak out against Lonnie Thompson for not archiving his results? Oh, no, they take Joshua’s preferred path, they speak softly and never for attribution or publication, they won’t say “Hey, Lonnie, about that missing data?”

    Now you might want to take instruction from folks who won’t say a word when their leaders or their associates are shown to be scientific malfeasants … me, I like to take my information from people who are willing to speak out for scientific transparency, archiving, and publication of data and codes.

    Because if Judith won’t speak out against Lonnie Thompson’s failure to archive his data, how can I truly trust her? She has demonstrated that she won’t speak out against any given scientist, but she seems to forget that when she does that, the next obvious question is:

    What else is she conveniently neglecting to mention?

    The answer may well be “nothing”, but if she won’t speak out when individuals practice bad science, how can we trust her to do good science?

    Fortunately, science doesn’t run on trust, so I judge her science on the basis of her science. But the niggling question is always there … what else isn’t she saying?

    In my experience, academics are the worst in this regard. Jewelers and fishermen and accountants and most every profession, they’ll tell you if someone in their line of work is a scammer. They know it hurts the reputation of every honest person when there’s a con man in the field.

    But cops and doctors and climate scientists? Trying to get one of them to say a single bad word about another one is really, really hard. And so we get cops on the take and doctors who should never practice but continue to do so and climate scientists who should be shunned but are invited to deliver addresses at conferences.

    Not naming names doesn’t work. It leads in the wrong direction, towards malfeasance and continued bad actions and public mistrust.

    Anyhow, Fred, thank you for your response, that’s my take. Please quote any sections you find objectionable.

    w.


    • W

      Good luck getting Fred to acknowledge the weakness or illogic of his position. I have been trying for several years now without success.

    • Willis – I’m glad you subscribe at least in principle to the value of some types of action, although the modest steps you describe in your linked site are only small, and only adaptive in nature. We should also begin steps to mitigate the threat by reducing carbon emissions.

      The evidence related to justifying this, which you state you haven’t seen, has occupied hundreds of web pages in this blog and probably hundreds of thousands of total pages online and in print over the past 200 years or so. I therefore conclude not that you haven’t seen it, but that you don’t accept it. There is no point in my trying to recapitulate it here in one thread. Rather, individual readers will have to review the material on their own, if they haven’t already, to make their own judgments. I’m glad to answer questions on very specific items to the best of my ability, but I won’t try to change the minds of people who have already decided on contrary views in the face of the available evidence. I have never been very good at changing people’s minds, and so it would be a waste of my time and theirs for me to try it here.

  53. Willis Eschenbach

    Fred Moolten | August 24, 2011 at 11:31 am

    Willis – I’m glad you subscribe at least in principle to the value of some types of action, although the modest steps you describe in your linked site are only small, and only adaptive in nature. We should also begin steps to mitigate the threat by reducing carbon emissions.

    When you are mired in ignorance, that’s not the time for big steps. Big steps are for when you are certain of where you are going and how to get there. When you don’t know where you’re going or where the path is, big steps are for fools …

    The evidence related to justifying this, which you state you haven’t seen, has occupied hundreds of web pages in this blog and probably hundreds of thousands of total pages online and in print over the past 200 years or so. I therefore conclude not that you haven’t seen it, but that you don’t accept it. There is no point in my trying to recapitulate it here in one thread. Rather, individual readers will have to review the material on their own, if they haven’t already, to make their own judgments. I’m glad to answer questions on very specific items to the best of my ability, but I won’t try to change the minds of people who have already decided on contrary views in the face of the available evidence. I have never been very good at changing people’s minds, and so it would be a waste of my time and theirs for me to try it here.

    While there is certainly something that fills up the “hundreds of web pages in this blog and probably hundreds of thousands of total pages online and in print”, surely you don’t think all of that is evidence, do you? Evidence is a rare bird in climate science, you’re probably referring to the hundreds of thousands of total pages of claims, exaggerations, alarmist predictions, model results, and outright lies that have been published. You really need to sharpen your pencil, Fred, those “hundreds of thousands of pages” are not pages of evidence.

    In other words, you don’t have any evidence. And by your claim about “hundreds of thousands of pages” you’ve made it crystal clear that you are not sure what evidence actually is. So instead, you are waving your hands and claiming you have trunks full of the stuff. What is this, the Fermat defense, that you have the proof but the margins of this email are too small to contain it?

    Heck, if I were you I’d start by pulling out the evidence from 200 years ago. If it’s lasted this long, it must be rock-solid. So let’s do that as a test case of your trunks full of evidence. Evidence is like fine wine, it improves with age, so bring out the old stuff, Fred. Let’s see the 200 year old evidence, the stuff from 1811.

    Because despite your puerile claim above that I’m just ignoring the evidence, I’ve never seen any evidence from 1811. So bring it on as a test case, and we can get a sense of whether you’re just blowing smoke about the trunks full of evidence you have stored in your attic.

    w.

    PS – you say you have “never been very good at changing people’s minds”. This is not unrelated to your lack of evidence.

    • I think Fourier was around 1824, not 1811. That’s why I said the evidence started 200 years ago or so, rather than exactly 200 years ago.

      There’s been much since.

      • Fred

        Can you cite three or four mitigation actions that you believe make sense for the US to implement and perhaps show your analysis? Maybe even one good idea that makes sense as a start (with your analysis).

        I believe you do realize this is where people who share your view always seem to leave the conversation.

      • Rob – Your question has two parts. The first is the type of action, and the second is the implementation mechanism. Because discussing the second will inevitably lead to arguments that have been seen here many times and are not going to be resolved here, I won’t address it.

        As to the first part, we should begin to substantially reduce CO2 emissions through whatever combination of policies will best achieve this. We should also expedite the development and scaleup of alternative energy resources. We should incentivize and facilitate energy efficiency improvements, as well as the separate but also important process of conservation within the realm of choices that do not harm vulnerable populations. Finally, we should similarly begin the many adaptive measures that will be needed to address the degree of climate change that we can’t mitigate. This will probably be easier for the threats imposed by warming than those posed by ocean acidification, but we can also begin to address all the human excesses that damage marine life in addition to acidification, and this may help.

        Can we do all those things without significant short term harm, and with substantial long term benefit? Absolutely in my view. Do I have policies that I think would be useful in this regard? I do, and I will be glad to discuss them in any venue that is not politically and ideologically charged to the point that no-one seems to have any latitude to move away from entrenched views. That means not here. If you wish to conclude from my decision that I don’t have anything that would work, you are free to do so, but I believe those who know me know that I try not to arrive at conclusions that can’t be well defended.

      • Just to add one more item to the carbon reduction process, I believe we should rapidly work to minimize the environmental damage from natural gas drilling (including fracking) and also to minimize the escape of fugitive methane emissions to the atmosphere. That would permit us to begin a very large scale transition from oil and coal toward gas over coming decades as an interim measure until such time as all fossil fuel consumption can be greatly reduced. Substituting gas for coal can reduce CO2 emissions by 40% for the same energy yield, and substituting gas for oil can also reduce CO2 significantly, although to a lesser extent.

      • Fred,
        Do you have any actual evidence of environmental hazards from fracking, or is this going to be another of your tedious ocean acidification excursions?

      • Oops,
        Fred,
        With all due respect, your answer, “As to the first part, we should begin to substantially reduce CO2 emissions through whatever combination of policies will best achieve this,”
        is content-free. It offers no policies, technologies, or processes to reduce CO2.
        And all using natural gas does is slow the increase.
        So, again, the question remains:
        Can you name one actual mitigation policy that will work?
        And let’s define ‘work’ as ‘something that, when implemented, will reduce CO2 enough to measurably reduce the dangers claimed for CO2.’

  54. hunter

    Do not expect a specific answer from Fred.

    He does not have any.

    There have been no actionable proposals for specific projects or programs, which would reduce atmospheric CO2 sufficiently to have a perceptible impact on our global climate by 2100. None. Zero. Nada.

    Max

    • manacker,
      Fred is a stereotypical believer of the more sophisticated sort:
      he thinks if he simply repeats himself nicely he will win.
      That he cannot show any actual change in pH, or answer the points about how complex ocean pH is in reality, he just skips over.
      Now we see that when it comes to mitigation he has nothing as well.
      And, not surprisingly, he will only have the phonied-up faux fracking report and the vacuous NYT editorial to support his position on that, as well.
      I almost prefer the mean-spirited trash talk of Romm or the death-trains talk of Hansen to the nice Fred. At least they do not pretend to be doing anything other than cramming bs into the public square.

  55. Joachim Seifert

    To Hunter and Manacker:
    This is the psychological category to which the hopeless Fred belongs:
    the obstinate type, which does not want to learn and just keeps
    endlessly repeating what has sublimed in his gray cells. Of the same
    type of people, imagine: Berliners stumble in April 1945 over their flattened
    ruins, in absolute despair, and the wicked Joseph Goebbels shows up,
    and everybody in their rags happily greets him and swears allegiance!
    Not a single one threw a stone….. Nazis to their grave…. and Fred is AGW to his deathbed…
    These guys never learn from history…. how can a person get rid of
    psychological obstinacy? This attitude will stay until his grave; he does
    not want to learn. Too bad to have to deal with such people…..
    I am pretty convinced that Fred originates from the area north of
    the “gemuetliche” (cozy) Saxonian coffee-table line in Germany……