The culture of building confidence in climate models

by Judith Curry

As climate models become increasingly relevant to policy makers,  they are being criticized  for not undergoing a formal verification and validation (V&V) process analogous to that used in engineering and regulatory applications. Further, claims are being made that climate models have been falsified by failing to predict specific future events.

To date, establishing confidence in climate models has targeted the scientific community that develops and uses the models.  As the climate models become increasingly policy relevant, it is critically important to address the public need for high-quality models for decision making and to establish public confidence in these models.  An important element in establishing such confidence is to make the models as accessible as possible to the broader public and stakeholder community.

An overview of uncertainties associated with climate models was provided in last week’s post. Why do climate scientists  have  confidence in climate models?  Is their confidence justified?  With climate models being increasingly used to provide policy-relevant information, how should we proceed in building public confidence in them?

All models are imperfect; we don’t need a perfect model, just one that serves its purpose.  Airplanes are designed using models that are inadequate in their ability to simulate turbulent flow.  Financial models based upon crude assumptions about human behavior have been used for decades to manage risk. In the decision making process, models are used more or less depending on a variety of factors, one of which is the credibility of the model.  Climate model simulations are being used as the basis for international climate and energy policy, so it is important to assess the adequacy of climate models for this purpose.

Confidence in weather prediction models

Some issues surrounding the culture of establishing confidence in climate models can be illuminated first by considering numerical weather prediction models.  To my knowledge, nobody is clamoring for V&V of weather models; why not?  Roger Pielke Jr. provides an interesting perspective on this in The Climate Fix:

Decision makers, including most of us as individuals, have enough experience with weather forecasts to be able to reliably characterize their uncertainty and make decisions in the context of that uncertainty.  In the U.S., the National Weather Service issues millions of forecasts every year.  This provides an extremely valuable body of experience for calibrating forecasts in the context of decisions that depend on them.  The remarkable reduction in loss of life from weather events over the past century is due in part to improved predictive capabilities, but just as important has been our ability to use predictions effectively despite their uncertainties.

The beginnings of numerical weather forecasting involved some of the giants of 20th century science, including John von Neumann.  Numerical weather predictions began in 1955 in both the U.S. and Europe, based upon recent theoretical advances in dynamical meteorology, computational science, and computing capabilities.  At this point, the public is interested in better weather forecast models, and weather forecast modeling centers are actively engaged in continuing model development.  Model evaluation against observations and comparison of the skill scores of different model versions is a key element of such development.
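To make the notion of a skill score concrete, here is a minimal sketch (in Python, with entirely synthetic data; this is not any forecast center's actual verification code) of the standard MSE-based skill score, which measures a forecast against a naive reference such as climatology:

    import numpy as np

    def mse_skill_score(forecast, reference, observed):
        """MSE-based skill score: 1 is perfect, 0 is no better than the reference."""
        mse_fc = np.mean((forecast - observed) ** 2)
        mse_ref = np.mean((reference - observed) ** 2)
        return 1.0 - mse_fc / mse_ref

    # Synthetic stand-ins: an idealized annual temperature cycle as "truth",
    # climatology as the naive reference, and two model versions whose only
    # difference is the size of their random errors.
    rng = np.random.default_rng(0)
    obs = 15.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365))
    climatology = np.full_like(obs, obs.mean())
    model_v1 = obs + rng.normal(0.0, 3.0, obs.size)
    model_v2 = obs + rng.normal(0.0, 2.0, obs.size)

    print("v1 skill:", mse_skill_score(model_v1, climatology, obs))
    print("v2 skill:", mse_skill_score(model_v2, climatology, obs))

A positive score difference between versions is the kind of evidence modeling centers accumulate, forecast cycle after forecast cycle, when deciding whether a model change is an improvement.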

This strategy for developing confidence is being extended to seasonal climate prediction models, which are based on coupled atmosphere/ocean models.  A unified treatment of weather and climate models (i.e. the same dynamical cores for the atmosphere and ocean are used for models across the range of time scales) transfers confidence from the weather and seasonal climate forecast models to the climate models used in century-scale simulations. Confidence established in the atmospheric core as a result of the extensive cycles of evaluation and improvement of weather forecast models is important; however, caution is needed, since factors that have less import in weather models, such as mass conservation and cloud and water vapor feedback processes, become significant in climate models.

Owing to their imperfections and the need for better models, models that predict weather, seasonal climate, and longer-term climate change are under continued development and evaluation.

Climate scientists’ perspectives on confidence in climate models

Before discussing confidence, I first bring up the issue of “comfort.”  Comfort as used here is related to the sense that the model developers themselves have about their model, which includes the history of model development and the individuals that contributed to its development, the reputations of the various modeling groups, and the location of the model simulations in the spectrum of simulations made by competing models. In this context, comfort is a form of “truthiness” that does not translate into user confidence in the model, other than via an appeal to the authority of the modelers.

Scientists that evaluate climate models, develop physical process parameterizations, and utilize climate model results are convinced (at least to some degree) of the usefulness of climate models for their research. They are convinced by a “consilience of evidence” (Oreskes’ phrase) that includes the model’s relation to theory and physical understanding of the processes involved, consistency of the simulated responses among different models and different model versions, and the ability of the model and model components to simulate historical observations.  Particularly for scientists that use the models rather than participate in model development/evaluation, the reputation of the modeling centers at leading government labs is also an important factor. Another factor is the “sanctioning” of these models by the IPCC (with its statements about model results having a high confidence level), which can lead to building confidence in climate models through circular reasoning.

Knutti (2008) describes the reasons for having confidence in climate models as follows:

  • Models are based on physical principles such as conservation of energy, mass, and angular momentum.
  • Model results are consistent with our understanding of processes based on simpler models and conceptual or theoretical frameworks.
  • Models reproduce the mean state and variability in many variables reasonably well, and continue to improve in simulating smaller-scale features.
  • Models reproduce observed global trends and patterns in many variables.
  • Models are tested on case studies such as volcanic eruptions and more distant past climate states.
  • Multiple models agree on large scales, which is implicitly or explicitly interpreted as increasing our confidence.
  • Projections from newer models are consistent with older ones (e.g. for temperature patterns and trends), indicating a certain robustness.

Barton Paul Levenson (h/t to Bart Verheggen) provides a useful summary with journal citations of climate model verification.

Challenges to building confidence in climate models

Model validation strategies depend on the intended application of the model.  The application of greatest public relevance is projection of 21st century climate variability and change and the role of anthropogenic greenhouse gases.  So how can we assess whether climate models are useful for this application?  The discussion below is based upon Smith (2002) and Knutti (2008).

User confidence in a forecast model depends critically on the confirmation of forecasts, both against historical data (hindcasts, in-sample) and against out-of-sample observations (forecasts).  Confirmation with out-of-sample observations is possible for forecasts with a time horizon short enough that they can be compared to subsequent observations (e.g. weather forecasts).  Unless the model can capture or bound a phenomenon in hindcasts and previous forecasts, there is no expectation that the model can quantify the same phenomenon in subsequent forecasts.  However, capturing the phenomenon in hindcasts and previous forecasts does not in any way guarantee the ability of the model to capture the phenomenon in the future; it is merely a necessary condition. If the distance of future simulations from the established range of model validity is small, it is reasonable to extend established confidence in the model to the perturbed future state.  Extending such confidence requires that no crucial feedback mechanisms are missing from the model.
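The in-sample/out-of-sample distinction can be made concrete with a toy example (entirely synthetic data; the "model" here is nothing more than a fitted linear trend): calibrate on the early part of a record, then confirm against the held-out later part.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1900, 2001)
    record = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)  # toy record

    # Calibrate (in-sample) on 1900-1970; confirm out-of-sample on 1971-2000.
    train = years <= 1970
    coeffs = np.polyfit(years[train], record[train], 1)
    in_rmse = np.sqrt(np.mean((np.polyval(coeffs, years[train]) - record[train]) ** 2))
    out_rmse = np.sqrt(np.mean((np.polyval(coeffs, years[~train]) - record[~train]) ** 2))
    print(f"in-sample RMSE: {in_rmse:.3f}, out-of-sample RMSE: {out_rmse:.3f}")

A model tuned to the calibration period will always look at least this good in-sample; it is the out-of-sample error that carries evidential weight.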

Leonard Smith has stated that “There are many more ways to be wrong in a 10^6-dimensional space than there are ways to be right.”  However, since they make millions of predictions, models will invariably get something right. Confirmation of climate models using historical data is relative. Different climate models (and different parameter choices within a climate model) simulate some aspects of the climate system well and others not so well.  Some models are arguably better than others in some overall sense, but falsification of such complex models is not really a meaningful concept.

Further, agreement between model and data does not imply that the model gets the correct answer for the right reasons; rather, such agreement merely indicates that the model is empirically adequate. For example, all of the coupled climate models used in the IPCC AR4 reproduce the time series for the 20th century of globally averaged surface temperature anomalies; yet they have different feedbacks and sensitivities and produce markedly different simulations of the 21st century climate.

Even for in-sample validation, there is no straightforward definition of model performance for complex non-deterministic models having millions of degrees of freedom.  Because the models are not deterministic, multiple simulations are needed for comparison with observations, and the number of simulations conducted by modeling centers is insufficient to create a pdf with a robust mean; hence bounding box approaches (assessing whether the range of the ensembles bounds the observations) are arguably a better way to establish empirical adequacy.
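A minimal sketch of a bounding box check (array shapes and data are illustrative only): count how often the observations fall inside the ensemble's min-max envelope.

    import numpy as np

    def bounding_box_coverage(ensemble, observations):
        """Fraction of times the observation lies inside the ensemble range.

        ensemble: shape (n_members, n_times); observations: shape (n_times,)
        """
        lo = ensemble.min(axis=0)
        hi = ensemble.max(axis=0)
        return np.mean((observations >= lo) & (observations <= hi))

    # Toy example: a 5-member ensemble of temperature anomalies about a trend.
    rng = np.random.default_rng(2)
    truth = 0.01 * np.arange(100)
    ensemble = truth + rng.normal(0.0, 0.1, (5, 100))
    obs = truth + rng.normal(0.0, 0.1, 100)
    print(f"coverage: {bounding_box_coverage(ensemble, obs):.2f}")

Coverage near zero would indicate an ensemble that fails to bound the observed record; note that a small ensemble can pass this test while still having an unreliable mean.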

The climate community has been lax in grappling with the issue of establishing formal metrics of model performance and reference datasets to use in the evaluation.  In comparing climate models with observations, issues to address include space/time averaging, which variables to use, and which statistics to compare (mean, pdf, correlations).  A critical issue in such comparisons is having reliable global observational datasets with well characterized error statistics, which is a separate challenge in itself (and will be the subject of a series of posts after the new year).  A further complication arises if datasets used in the model evaluation process are the same as those used for calibration, which gives rise to circular reasoning (affirming the consequent) in the evaluation process.
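For concreteness, here is the kind of elementary comparison statistics at issue, sketched in Python (the statistics chosen here — bias, RMSE, correlation — are illustrative, not a community-agreed metric set):

    import numpy as np

    def comparison_stats(model, obs):
        """Basic model-vs-observation statistics, assuming both series are
        already averaged onto a common space/time base."""
        diff = model - obs
        return {
            "bias": float(np.mean(diff)),
            "rmse": float(np.sqrt(np.mean(diff ** 2))),
            "corr": float(np.corrcoef(model, obs)[0, 1]),
        }

    rng = np.random.default_rng(3)
    obs = rng.normal(0.0, 1.0, 240)                 # e.g. 20 years of monthly anomalies
    model = 0.8 * obs + rng.normal(0.0, 0.5, 240)   # an imperfect simulation
    print(comparison_stats(model, obs))

The hard part is not computing such numbers but agreeing on which ones matter, on what averaging scales, and against which reference datasets.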

An important element in evaluating models is the performance of their component parts.  Lenhard and Winsberg (2008, draft) argue that climate models suffer from a particularly severe form of confirmation holism.  Confirmation holism in the context of a complex model implies that a single element of the model cannot be tested in isolation since each element depends on the other elements, and hence it is impossible to determine if the underlying theories are false by reference to the evidence.  Owing to inadequacies in the observational data and confirmation holism, assessing empirical adequacy should not be the only method for judging a model.  Winsberg points out that models should be justified internally, based on their own internal form, and not solely on the basis of what they produce.  Each element of the model that is not properly understood and managed represents a potential threat to the simulation results.

Each climate modeling group evaluates its own model against certain observations.  All of the global climate modeling groups participate in climate model intercomparison projects (MIPs).  Knutti states:  “So the best we can hope for is to demonstrate that the model does not violate our theoretical understanding of the system and that it is consistent with the available data within the observational uncertainty.”  Expert judgment plays a large role in the assessment of confidence in climate models.

Standards for establishing confidence in simulation models

For models that are used in engineering and regulatory applications, standards have been developed for establishing confidence in simulation models.   Models used to design an engineered system have different requirements and challenges from predictive models used in environmental regulation and resource management.

Verification and validation

In high-consequence decision making, including regulatory compliance, credibility in numerical simulation models is established by undergoing a verification and validation (V&V) process.  Formal V&V procedures have been articulated by several government agencies and engineering professional societies.  A lucid description of model V&V is given in this presentation by Charles Macal.

The overall goal of V&V is to make the model useful in the sense that the model addresses the right problem and provides accurate information about the system being modeled.  Verification addresses whether the model is built correctly: the implementation of the model in the computer, the selection of input parameters, and the logical structure of the model.  Verification activities include code checking, flow diagrams of model logic, code documentation, and checking of each component.

Validation is concerned with whether the model is an accurate representation of the real system.  Both model assumptions and model output undergo validation. Validation is usually achieved through model calibration, an iterative process of comparing the model to the actual system.  Model validation depends on the purpose of the model and its intended use.  Pathways to validation include evaluation of case studies against observations, comparison of multiple models, utilization of maximally diverse model ensembles, and assessment by subject matter experts. Challenges to model validation occur if controlled experiments cannot be performed on the system (e.g. there is only one historic time series) or the model is not deterministic; each of these conditions implies that the model cannot be falsified.

Oreskes et al. (1994) claim that model V&V is impossible and logically precluded for open-ended systems such as natural systems.  Oreskes (1998) argues for model evaluation (not validation), whereby model quality can be evaluated on the basis of the underlying scientific principles, the quantity and quality of input parameters, and the ability of the model to reproduce independent empirical data.  Much of the debate between validation/verification versus evaluation seems to me to be semantic (and Oreskes does not use the V&V terms in the practical sense employed by engineers).

Steve Easterbrook has a superb post on what a V&V process might look like for climate models, which includes a reference to an excellent paper by Pope and Davies. Easterbrook states “Verification and Validation for [Earth System Models] is hard because running the models is an expensive proposition (a fully coupled simulation run can take weeks to complete), and because there is rarely a “correct” result – expert judgment is needed to assess the model outputs.”  But he rises to the challenge and makes some very interesting and valuable suggestions (I won’t attempt to summarize them here; read Easterbrook’s post, in fact read it twice).

Building confidence in environmental models

The U.S. National Research Council (NRC) published a report in 2007 entitled “Models in Environmental Regulatory Decision Making,” which addresses models of particular relevance to the U.S. Environmental Protection Agency (EPA).  In light of EPA’s policy on greenhouse gas emissions, there is little question that climate models should be included under this rubric. The main issue addressed in the NRC report is summarized in this statement:

“Evaluation of regulatory models also must address a more complex set of trade-offs than evaluation of research models for the same class of models. Regulatory model evaluation must consider how accurately a particular model application represents the system of interest while being reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs may require different forms of peer review, uncertainty analysis, and extrapolation methods. It also implies that regulatory models should be managed in a way to enhance models in a timely manner and assist users and others to understand a model’s conceptual basis, assumptions, input data requirements, and life history.”

The report describes standards for model evaluation; principles for model development, selection, and application; and model management.  And finally the report states:

EPA should continue to develop initiatives to ensure that its regulatory models are as accessible as possible to the broader public and stakeholder community. . . The level of effort should be commensurate with the impact of the model use. It is most important to highlight the critical model assumptions, particularly the conceptual basis for a model and the sources of significant uncertainty. . . The committee anticipates that its recommendations will be met with some resistance because of the potentially substantial resources needed for implementing life-cycle model evaluation. However, given the critical importance of having high-quality models for decision making, such investments are essential if environmental regulatory modeling is to meet challenges now and in the future.

These are standards that should apply to climate models that are being used in a predictive sense.

It’s time for a culture change

Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations.  As they have matured, climate models are being increasingly used to provide decision-relevant information to end users and policy makers, whose needs are helping define the focus of model development in terms of increasing prediction skill on regional and decadal time scales.

It seems that now is the time for climate models to begin transitioning to a formal evaluation process suited to the nature of climate models and the applications for which they are increasingly being designed.  Ideas for validation strategies suitable for climate models should be sought from the computer science and engineering fields.  The climate community needs to grapple with the issue of model validation in a much more serious way, including the establishment of global satellite climate data records.  Knutti states that “Only a handful of people think about how to make the best use of the enormous amounts of data, how to synthesize data for the non-expert, how to effectively communicate the results and how to characterize uncertainty.”

V&V is needed not just for public relations and public credibility, but also to quantify progress and uncertainty, and so to guide the allocation of resources for continued model improvement.

So, where do we go from here?  Expecting internal V&V from the climate modeling centers would require convincing resource managers that this is needed.  Not easy, but in the U.S., the EPA could become a driver for this.  Many of the climate modeling centers are making their code publicly accessible, which is a step in the right direction.  Independent V&V in an open environment (enabled by the internet and the blogosphere) may be the way to go (I know there are some efforts along these lines that are out there, please provide links).  I look forward to your ideas and suggestions.


260 responses to “The culture of building confidence in climate models”

  1. I have read what you have written, Judith, and I cannot say that I have really understood it all. The impression I get is that you are saying that climate models, to date, have never been validated. To take just one quote from you: “Ideas for validation strategies suitable for climate models should be sought from the computer science and engineering fields.”

    If it is true, as I believe, that climate models have never yet been validated, why should we believe anything the supporters of CAGW claim, when so many of their analyses are based on the output of these models?

    • Jim, the models have been validated in some sense, but improvements are needed, and the documentation of many of the procedures and tests that constitute the validation is not organized in a way that is accessible.

      • David L. Hagen

        How well do models incorporate improvements in food production caused by higher CO2, more rainfall and higher temperatures? e.g. see:
        Warmer, wetter climate helping U.S. farmers grow more crops USA Today

        DES MOINES (AP) — Warmer and wetter weather in large swaths of the country have helped farmers grow corn, soybeans and other crops in some regions that only a few decades ago were too dry or cold, experts who are studying the change said. . . .The change is due in part to a 7% increase in average U.S. rainfall in the past 50 years, said Jay Lawrimore, chief of climatic analysis for the Asheville, N.C.-based National Climatic Data Center. . . .Brad Rippey, a U.S. Department of Agriculture meteorologist, said warming temperatures have made a big difference for crops such as corn and soybeans. . . .For example, data from the National Agricultural Statistics Service show that in 1980, about 210,000 soybean acres were planted in North Dakota. That has gradually increased to more than 3 million acres in recent years. . . .But USDA meteorologist Eric Luebenhusen said others are doing well. He noted Nebraska and Illinois were especially wet this year, and he said Iowa has “almost become the tropical rain forest of Middle America.” For the most part, Luebenhusen said, that’s good for farmers.

        How well were these changes predicted?

    • Because you don’t need computer models to examine global warming, catastrophic or not, and the idea that GCMs are absolutely essential to the AGW case is a strawman that is endlessly repeated on many ‘skeptic’ blogs.

      Furthermore, you are implying that claims are being made (‘catastrophic’, I believe), yet not bothering to cite any examples of such claims. If you think that catastrophic predictions are being made as a result of the use of GCMs, then I would like to see either some concrete examples of such predictions or a withdrawal/clarification of your statement. It is somewhat tedious for people to use ‘CAGW’ as a labelling term without ever telling us what the ‘C’ actually means. Which is somewhat vexing, as I cannot apparently label someone a denialist even when there is clear evidence that they are following the denialist playbook, which is defined…

      The other huge strawman present is that policy is being driven by the output of GCMs. As far as I am aware, outside of Europe, policy on CO2 emissions is minimal in the extreme, and inside of Europe it is merely completely ineffectual.

      • Andrew Dodds writes “Because you don’t need computer models to examine global warming, catastrophic or not, and the idea that GCMs are absolutely essential to the AGW case is a strawman that is endlessly repeated on many ‘skeptic’ blogs.”

        I am not necessarily talking about GCMs. For example, I go further back in the claims that CO2 causes global warming, to the use of radiative transfer models to estimate the change in radiative forcing for a doubling of CO2. I have noted on many occasions that no-one has shown that radiative forcing models are capable of estimating radiative forcing.

        What I am claiming is that none of the models used by the supporters of AGW have been shown to be suitable to obtain the results that are claimed. It is not just the GCMs. To recap, when I wrote models 50 years ago, my superiors would not let any paper leave my desk, if I could not show that the models were suitable to answer the problem I was investigating. This idea seems to have been completely ignored by the supporters of AGW. I have never seen, in papers relating to AGW, any author showing why the models used are suitable to solve the problem under investigation.

        Let us ignore the matter of “catastrophic” for the moment, and just keep the discussion to models.

      • Interesting statement:

        I have noted on many occasions that no-one has shown that radiative forcing models are capable of estimating radiative forcing.

        Think you’ll find that estimating radiative forcing is what they do. How well they do it is another matter; although if such a model cannot give a good estimate for pre-industrial temperatures (i.e. no AGW) then it would not be fit for purpose.

        I have never seen, in papers relating to AGW, any author showing why the models used are suitable to solve the problem under investigation.

        An example of this would be helpful. How would you go about demonstrating that a model was suitable?

        And I am not sure why I should ignore your use of ‘Catastrophic’. You quite deliberately used it, therefore you should be able to say what it means. If you are unwilling to do this, then I’d suggest you admit it and refrain from using it again.

      • Jim Hansen’s 1988 prediction that parts of Manhattan would be under water sounds pretty catastrophic. Wasn’t that based on a computer model?

      • Reference?

      • Andrew Dodds writes “How would you go about demonstrating that a model was suitable?”

        That is the key question. If you cannot demonstrate that a model is suitable to solve the problem at hand, of what use is the model at all?

        It all comes down to a simple question. How do we know that any particular model is giving the correct answer to the problem it is trying to solve? Until this question is answered for every single use models have been put to by supporters of AGW, we have no idea whether these models are of any use at all.

        Of course, if you have a model that has correctly predicted the future on many, many occasions, then this would be a very strong indication that the model was, indeed, useful for solving such problems.

        As to catastrophic, I am not ducking the issue. I want to keep the discussion, for the moment, to models. Then we don’t get sidetracked.

      • In terms of climate models, ‘correct’ is not possible even in theory, you will only ever be looking for the best approximation possible.

        And as far as validation goes, you do NOT need to wait 10 years. You can simply set it to the conditions in, for instance, 1970 and see what happens over the next 40 years. If the earth freezes solid or the oceans boil, your model is probably wrong.

        As far as catastrophic goes, I’ll just accept that it was an empty rhetorical gesture, then.

      • “While doing research 12 or 13 years ago, I met Jim Hansen, the scientist who in 1988 predicted the greenhouse effect before Congress. I went over to the window with him and looked out on Broadway in New York City and said, “If what you’re saying about the greenhouse effect is true, is anything going to look different down there in 20 years?” He looked for a while and was quiet and didn’t say anything for a couple seconds. Then he said, “Well, there will be more traffic.” I, of course, didn’t think he heard the question right. Then he explained, “The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds. And the same birds won’t be there. The trees in the median strip will change.” Then he said, “There will be more police cars.” Why? “Well, you know what happens to crime when the heat goes up.” ”

        http://dir.salon.com/books/int/2001/10/23/weather/index.html

      • Fair enough. I would point out, however, that media reports of interviews held a decade ago are not generally considered part of the scientific canon.

        And also that sea level rises of the order of meters are pretty much locked in now, the only issue being how long it takes.

      • I had a feeling you would respond with “it’s not peer reviewed.” It does not matter. Hansen said it. It is documented. He was spouting BS and given his penchant for activism, may have known it was BS. Or maybe he really believed his 1988 model. Who knows?

      • Hi Andrew. I puzzled over your statement that “sea level rises of the order of meters are pretty much locked in now”.

        First, how is this relevant to the discussion about trust in climate models or — to extend the thought — of anthropogenically caused catastrophe, considering that the sea has been rising at approximately the same rate for about 7,000 years? Shall we cry doom because of a trend that dates back beyond the bronze age, or simply accept it as a background fact of life for the interglacial period in which we live?

        Second, where do you get “of the order of meters”? Indeed, as you say, the issue is “how long it takes”. The rate of ocean level rise over the 7,000-9,000 years (depending on who one asks) since the transition into the current interglacial period has been about a steady 2 mm/yr. This rate didn’t even blink when the industrial age began. There is no detectable anthropogenic signal in sea level rise, as admitted (reluctantly) by the IPCC.

        As for how long it would take for ocean rise to be “of the order of meters” — at 2 mm/yr it would take 500 years to see a 1 m rise.

        But given that for the last million years or so the interglacials have generally come and gone like clockwork and lasted no more than 10K years, I’d say there’s a pretty good chance that within 500 years we’ll be experiencing drastic cooling by then and this calculation will have no significance. We’ll be more concerned with calculating the rate of encroachment of massive glaciers on our cities and farmlands.

      • “I would point out, however, that media reports of interviews held a decade ago are not generally considered part of the scientific canon. ”

        But “catastrophic” IS how it is being sold in the media and they are referencing scientists and studies…no?

        Perhaps you should be admonishing these institutions selling it that way if you don’t agree rather than claiming it’s some denialist straw man?

      • Since Andrew Dodds made me go after the reference, it is funny to read the Salon article about the book, “The Coming Storm.” The article features interviews with Jim Hansen and Tom Karl. It makes a good read and recalls so much of the alarmist BS we have had to tolerate over the years.

        http://dir.salon.com/story/books/int/2001/10/23/weather/index.html

      • Andrew,
        Why are you so dodgy about a basic aspect of AGW, catastrophism?
        If you answered any of my multiple prior questions on this, I apologize and would ask you to restate it here.
        So why not take a moment and tell us why AGW is not about catastrophism, but why we need to spend trillions on it?

      • “Met Office: catastrophic climate change could happen with 50 years”

        http://www.telegraph.co.uk/earth/earthnews/6236690/Met-Office-catastrophic-climate-change-could-happen-with-50-years.html

        “The risk of catastrophic climate change is getting worse, according to a new study from scientists involved with the United Nations Intergovernmental Panel on Climate Change (IPCC).”

        http://www.scientificamerican.com/article.cfm?id=risks-of-global-warming-rising

        “A group of 1,700 leading scientists called on the US government yesterday to take the lead in fighting global warming. Citing the “unprecedented and unanticipated” effects of global warming, the scientists, including six Nobel prizewinners, presented a letter calling for an immediate reduction in US carbon emissions.

        The statement came as the Senate prepares to debate a bill next week that would impose economy-wide limits on greenhouse emissions to avert what it describes as “catastrophic climate change”.

        The letter, issued by the non-profit Union of Concerned Scientists, warns: “If emissions continue unabated, our nation and the world will face more sea level rise, heatwaves, droughts, wildfires, snowmelt, flood risk, and public health threats, as well as increased rates of plant and animal species extinctions.”

        http://www.guardian.co.uk/environment/2008/may/30/climatechange.scienceofclimatechange1

        “But all of this is only the beginning. We are already experiencing dangerous climate change…we need to act to avoid catastrophic climate change. While not all regional effects are yet known, here are some likely future effects if we allow current trends to continue:”

        Longer term catastrophic effects if warming continues

        http://www.greenpeace.org/international/en/campaigns/climate-change/impacts/

        Strawman huh?

      • Andrew Dodds wrote: “The other huge strawman present is that policy is being driven by the output of GCMs.” In the US, Title III of the Waxman-Markey bill (which passed the House) makes it our goal to reduce GHG emissions by 83% by 2050. The 83% figure was chosen following the recommendations of the US Climate Action Partnership. Its “Call to Action” says:

        “U.S. legislation should be designed to achieve the goal of limiting global atmospheric GHG concentrations to a level that minimizes large-scale adverse climate change impacts to human populations and the natural environment, which will require global GHG concentrations to be stabilized over the long-term at a carbon dioxide equivalent level between 450–550 parts per million.”

        What scientific evidence supports the conclusion that CO2 must be stabilized at 450-550 ppm (limiting temperature rise to 2 degC)? Climate models, of course! Needless to say, policymakers wouldn’t be cutting back carbon emissions by 83% unless they believed that anything less would be catastrophic. The idea that such policies are not driven by (mis)use of information from climate models is preposterous.

      • Frank gives voice to a misconception that seems to run prominently through this thread and elsewhere: “What scientific evidence supports the conclusion that CO2 must be stabilized at 450-550 ppm (limiting temperature rise to 2 degC)? Climate models, of course!”

        Perhaps the best scientific evidence for the amount of temperature rise due to increasing CO2 comes from the relation between temperature and CO2 found in ice cores. You can investigate this relation for yourself.

        Go to NOAA’s ice core data sets page at
        http://www.ncdc.noaa.gov/paleo/indexice.html
        and download the temperature (derived from deuterium) and CO2 time series for, say, the 800,000 year EPICA Dome C record. Import these data sets to your favorite spreadsheet or analysis software and plot temperature (anomaly) vs CO2. Use whatever analysis you like to reassure yourself that the correlation you observe is meaningful. (You can lag CO2 by 800 years if you like, but that won’t make any difference to the result.)

        Now plot the data point that corresponds to today: about zero Delta T for the normalization used in the data (the exact value is not important) and about 390 ppm CO2. Compare with the ice core data. This gives you some idea of how far we have whacked the climate system out of its trajectory of the last million years.

        Now estimate (from your linear regression, or – better – a fit that accounts for the logarithmic effect of increasing CO2) what the Delta T would be, in the long run, for 450 – 550 ppm CO2, assuming that the processes responsible for the temperature-CO2 relation that acted over the last million years still hold. You can divide the result by a few to account for polar amplification. The result shows why the 450 – 550 ppm criterion may be too optimistic. And you have derived it without reference to models.
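        A rough sketch of this exercise in Python, for anyone who wants to try it (the file names and two-column layout are hypothetical — adjust to however you export the NOAA files — and the divide-by-two polar correction is just one choice from the “divide by a few”):

            import numpy as np

            # Hypothetical exports of the EPICA Dome C series: two columns
            # (age, value), ages increasing, put onto a common age scale.
            age, dT = np.loadtxt("edc_temperature.csv", delimiter=",", unpack=True)
            age_co2, co2 = np.loadtxt("edc_co2.csv", delimiter=",", unpack=True)
            co2_on_age = np.interp(age, age_co2, co2)

            # Fit Delta-T against ln(CO2) to respect the logarithmic relation.
            slope, intercept = np.polyfit(np.log(co2_on_age), dT, 1)

            # Long-run Delta-T implied at 450 and 550 ppm relative to ~280 ppm
            # pre-industrial, divided by 2 as a rough polar-amplification factor.
            for target in (450.0, 550.0):
                dT_polar = slope * (np.log(target) - np.log(280.0))
                print(f"{target:.0f} ppm: ~{dT_polar / 2.0:.1f} C global, roughly")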

        This thread is not the proper place to discuss these data and their implications. No doubt creative minds will find ways to discount the result you derive from them. But it’s not easy to do so, and alternative hypotheses, strained as they would be, are not reassuring either.

      • Oh brother. Here Pat Cassen regales us with a magnificent case study in conflating correlation and causation. I am especially impressed with the casual aside “(You can lag CO2 by 800 years if you like, but that won’t make any difference to the result.)”

        Pat, the lag to which you allude, in fact, makes a minor but significant difference to “the result” (meaning the correlation between CO2 and temperature in the record), but whether or not it does is really very much beside the point. Indeed, it is precisely that difference that tells us that there is a lag.

        Further it is absurd to articulate your aside in the form “You can …” as if someone has famously messed with the data, artificially shifting the CO2 timeline. The lag is in the data when it is analysed with sufficiently high resolution. Nobody has put it there!

        Let us leave that as a rhetorical flourish on your part and assume that you understand that changes in CO2 lag temperature in the ice core record by approximately 800 years. You evidently believe that this particular correlation, even when the “effect” lags the “cause” by centuries, amounts to reverse causation — specifically, the future causes the past in this case. This is ludicrous.

        Perhaps you mean to imply that there is some mysterious mechanism that inextricably links these two variables, so if either one goes up so does the other, and if one goes down the other must follow.

        But the ice core correlation in question is more-or-less completely explainable in terms of the temperature dependence of the constant in Henry’s Law, which dictates how a gas (in this case CO2) dissolves in liquids (in this case the oceans) at different partial pressures in the atmosphere. As far as I know it has long been conceded, and is now “received wisdom”, even among AGW-promoting scientists, that this is the strongest signal in this correlation.

        The conclusion one naturally draws from the lovely correlation you instruct us to derive is NOT that you can use atmospheric CO2 at these concentrations to drive temperature, but that you can use temperature to drive atmospheric CO2. How on earth you expect us to derive 400 or 450 ppm as some sort of magic number to stave off climate meltdown is a mystery to me!

      • Hi Craig –

        Right, the lag is in the data; I did not intend to imply otherwise. You do know, I presume, that the lag was predicted by those who understand the physics of CO2 feedback.

        No, we are not dealing with “reverse causation”. We are dealing with feedback.

        “As far as I know it has long been conceded…” I don’t think so.

        “How on earth you expect us to derive 400 or 450 ppm as some sort of magic number to stave off climate meltdown is a mystery to me!” Not what I did.

      • Quite right, Pat — you did not use the word “meltdown”; my inference. Perhaps you could illuminate us, then, regarding what you mean by “The result shows why the 450 – 550 ppm criterion may be too optimistic” in reference to the earlier discussion of the USCAP statement that this was “a level that minimizes large-scale adverse climate change impacts to human populations and the natural environment“.

        Did you not mean that you conclude that at even at 450-550 ppm one could infer “adverse climate change impacts”? A remarkable inference from a two-variable ice core time series that shows temperature driving CO2 levels.

        In contrast, climate models, useless as they may be in other respects, can at least pretend to indicate specific impacts on the world, from which direct inferences may be made about effects on humans and the environment.

        I agree with you in one respect: to sort out all the nonsense about climate, one can go a long way just by meditating on ice core data. Here’s a lovely piece on the subject posted last winter at WUWT, in which the central Greenland series is examined at increasingly large time scales.

      • Sorry – I meant “Hi R. Craigen – ” … It’s late.

      • R. Craigen –

        I hope the following clarifies what I was trying to say. Recall that my comment was meant to point out that one can estimate a sensitivity of temperature to CO2 without recourse to models.

        The long-term, average global temperature rise due to CO2 at 450-550 ppm that I derive by the method described above exceeds 2 C by a factor of 2 – 4, depending on what you take for polar amplification. (I was hoping you or someone else might do the exercise for yourself to see if you agree with this result.) This is why I said “…the 450 – 550 ppm criterion may be too optimistic”.

        Do you consider that an increase in average global temperature of 4 – 8 C would produce “adverse climate change impacts”? Many researchers do; perhaps you do not.

        You say “A remarkable inference from a two-variable ice core time series that shows temperature driving CO2 levels.” No, it is an inference from a remarkable two-variable ice core time series that shows the 800,000 year history of temperature driven by various factors, with a dominant CO2 feedback. (If you can account for the temperature-CO2 relation by Henry’s Law applied to passive CO2, publish.)

        Thanks for the WUWT link. I don’t go over there very much because too often I find that Anthony is trying to bamboozle me, or he’s just too sloppy. Not so bad this time. You might have noticed that he adds only about .6 C to the ice core record for the instrumental record, whereas he should have added more like 3 C for Greenland:
        (http://141.161.23.43/arctic.pdf)
        Makes a difference to some of his claims.

      • Thanks for the clarification Pat. Please accept my apologies for my opening snark, which is out of keeping with the tone of Dr. Curry’s site. I occasional use blog comments to blow off steam, but I usually try to avoid low cuts like that.

        Can you point to any published analysis that shows CO2 provides the dominant temperature feedback in the ice core record? I realize that this is the stock comeback to the natural interpretation of the 800 year lag, but I’ve never seen any justification of it. I’d be interested to see any that exists.

        In any case, if your point is to have any merit it must not only be established that CO2 provides the dominant feedback, but also that this feedback is large enough to figure significantly in the observed correlation. This is, after all, necessary for your point about 450-550 being “too optimistic”.

        I doubt this. Looking at, for example, the Vostok cores, we repeatedly see the CO2 concentration peak after temperature has begun to drop. Further, after the peaks, CO2 tends to persist — it decays slowly, whereas temperature falls much more precipitously. If CO2 provided significant feedback one would expect temperature to persist or fall less sharply while CO2 remains high. This pattern is too consistent to be plausibly explained by a singular catastrophe that happens to kill temperature after it peaks.

        Indeed, the graph appears to show the opposite: the sharp spikes in temperature appear to indicate catastrophic warming to a brief peak, followed by return to an equilibrium temperature. CO2 increases follow as a lingering effect.

        Even the qualitative behavior of the two time-series is pretty revealing: The CO2 graph rises and falls in a “rounder” shape, whereas the temperature graph is persistently “spiky”. This alone shows that, if CO2 is influencing temperature, the effect must be very small relative to whatever the main temperature drivers may be.

        As for whether Mr. Watts should be adding 0.6 or 3 degrees, this is speculation. The 2006 RMS paper to which you link gives a temperature trend based on a sample period of 24 years, missing entirely the preceding 30-year period of cooling, which is also missing from the ice core. Anyway, in my view one should not tack instrumental data onto proxy data; analysis should cover up to the point where the ice core proxy becomes unreliable. I think Anthony was trying to be generous with his vague assertion that “the blade” evident in the data should continue up “at least another half a degree”.

        I have never seen a precise statement of the Henry’s Law constant’s temperature dependence, so I confess that my assertion that it accounts for the correlation is little more than a statement of faith. But I would wager that it figures much higher than CO2 feedback.

      • Sorry Pat, I neglected to reply to your question, “Do you consider that an increase in average global temperature of 4 – 8 C would produce ‘adverse climate change impacts’? Many researchers do; perhaps you do not.”

        Indeed, I do not. Nor do I believe that the temperature increases you cite are anywhere on the horizon. Still, the historical record consistently shows that so-called climate optima (= high temperatures) are marked by flourishing societies, less extreme weather, better crops and fewer pandemics. Cold periods, conversely, invariably bring far more severe weather, plague, famine and the collapse of civilizations. I trust the historical record far more than prognosticators of fortune. I’m prepared to stake my children’s future on a warmer world. I dread the prospect of a cooler one.

      • David L. Hagen

        R. Craigen
        There is substantial medical evidence supporting your observation that colder temperatures cause far greater deaths than higher temperatures. See the 2009 NIPCC report Climate Change Reconsidered
        9. Human Health Effects
        Some samples:
        9.1.1. Cardiovascular Diseases

        Research conducted by Green et al. (1994) in Israel revealed that between 1976 and 1985, mortality from cardiovascular disease was 50 percent higher in mid-winter than in mid-summer, both in men and women and in different age groups, in spite of the fact that summer temperatures in the Negev, where much of the work was conducted, often exceed 30°C, while winter temperatures typically do not drop below 10°C.

        In a study of all 222,265 death certificates issued by Los Angeles County for deaths caused by coronary artery disease from 1985 through 1996, Kloner et al. (1999) found that death rates in December and January were 33 percent higher than those observed in the period June through September.

        9.1.2. Respiratory Diseases
        As was true of cardiovascular-related mortality, deaths due to respiratory diseases are more likely to be associated with cold conditions in cold countries. In Oslo, where Nafstad et al. (2001) found winter deaths due to cardiovascular problems to be 15 percent more numerous than similar summer deaths, they also determined that deaths due to respiratory diseases were fully 47 percent more numerous in winter than in summer.

        So why are we told we need to avoid global warming? Far more important to seek how to accommodate the likely global cooling as we leave the current interglacial warm period. See Don Easterbrook’s temperature compilations

      • One last short comment, Pat: I want to say that I’ve enjoyed our exchange very much (though it has probably taken too much time for both of us). Thanks. You have challenged me to examine my own leaps of logic in a few places.

        In particular I don’t think the effect of temperature dependence of the Henry’s Law constant explains, as I had inferred, the correlation very well if for no other reason than the temperature being correlated comes from one point of an arctic ice sheet, whereas one would require knowledge of the world-wide ocean surface temperature distribution. Further I confess to being shocked, initially, that the lag was 800 years. I would have expected an immediate discernable response that tapered off over centuries. The “tapering part” is evident but why such a long lag? Perhaps both series should be smoothed and temperature correlated to the derivative of CO2. This I haven’t done.

      • Perhaps both series should be smoothed and temperature correlated to the derivative of CO2.

        don’t do that

      • I get your point, jstults. What Dr. Briggs’ piece comes down to is an informal explanation of the Central Limit Theorem, and yes it makes perfect sense that smoothing would reduce p-values because it is tantamount to using grouped data, which obviously reduces variance.

        If one did as I proposed, the effect on variance ought to be taken into account. But one cannot sensibly speak of the derivative of discrete data, or even of continuous data with a lot of local noise (as this data contains); it must necessarily be smoothed for any such analysis to have significance. Unless there is some direct way to de-noise it (I don’t believe there is) — in which case one could use finite differences and avoid the problem altogether.

        However, one need only smooth the CO2 data for my proposal to work. I haven’t thought through it completely but I suspect that leaving one of the two variables unsmoothed actually avoids the problem of reduced variance, or at least significantly lessens it.

        The other thing one can do is rescale confidence intervals according to the breadth of the smoothing interval (which corresponds to “group size”) and apply the relations used in the CLT.
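        In fact the inflation is easy to demonstrate: smooth two completely independent noise series and the apparent correlation between them grows. A minimal sketch:

            import numpy as np

            def moving_average(x, w):
                return np.convolve(x, np.ones(w) / w, mode="valid")

            rng = np.random.default_rng(4)
            n, w, trials = 500, 25, 1000
            raw, smooth = [], []
            for _ in range(trials):
                a, b = rng.normal(size=(2, n))  # two independent series
                raw.append(np.corrcoef(a, b)[0, 1])
                smooth.append(np.corrcoef(moving_average(a, w),
                                          moving_average(b, w))[0, 1])

            print("typical |r|, raw:     ", np.mean(np.abs(raw)))
            print("typical |r|, smoothed:", np.mean(np.abs(smooth)))

        The smoothing induces autocorrelation, shrinking the effective sample size, so any naive significance test on the smoothed series is far too optimistic.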

      • R. C. – Feeling is mutual – I too have benefitted.

        And am out of time. So I’ll make some very brief comments which you need not respond to.

        Motivation – It helps to gain perspective by reading the ‘old’ papers, e.g.,
        http://www.atmos.washington.edu/2006Q2/211/articles_required/Lorius90_ice-core.pdf
        and others.

        Lags – It is not hard to make a toy model in which forcing is amplified by a constantly lagging feedback. Try
        DelT = A*sin(wt) + k*CO2
        where
        dCO2/dt = DelT
        (I have not worked it out so I don’t really know if this works.)
        The sin term is, of course, orbital forcing. Is this the climate system? No way, but does it capture something important about the climate system? Maybe.
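        For anyone who wants to try it, here is a quick forward-Euler integration of the toy (the constants A, w, k are arbitrary choices of mine). One thing it does show: with k > 0 the system has an exponentially growing mode, so as written the toy needs a damping term before it can cycle like the ice-core record.

            import numpy as np

            # DelT = A*sin(w*t) + k*CO2,  dCO2/dt = DelT  (forward Euler)
            A, w, k = 1.0, 2.0 * np.pi / 100.0, 0.005  # arbitrary constants
            dt = 0.05
            t = np.arange(0.0, 300.0, dt)              # three forcing cycles
            co2 = np.zeros_like(t)
            for i in range(t.size - 1):
                delT = A * np.sin(w * t[i]) + k * co2[i]
                co2[i + 1] = co2[i] + dt * delT

            # CO2 integrates DelT, so it lags the forcing; the cycle amplitude
            # also grows, betraying the unstable e^(k*t) mode.
            print("CO2 swing, first cycle:", np.ptp(co2[t < 100.0]))
            print("CO2 swing, third cycle:", np.ptp(co2[t >= 200.0]))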

        Spikiness – Better plot both at the same temporal resolution, which is limited by the CO2 data. Looks different than what Anthony posted.

        Let’s both keep learning.

      • Cheers, friend.

        I have to play with/meditate on your DE to get your point. My field is discrete math, so my intuition isn’t as well developed.

        The graphic at WUWT isn’t Anthony’s. It is grabbed from the original article appearing in Nature, 1999. The article you cite is from 1990, which precedes century-scale time resolution of core data, so it can’t properly address lag, but you’re quite right that it is important to remember how understanding of these things has evolved. I’ll take a look at it.

  2. The only thing that is going to work is to stop trying to sell them.

  3. Another thing that complicates V&V of GCMs is that at least a decade, and often more, is required before we can compare their results with observations. Their forecasts are based on estimates of a multitude of future values like the concentrations of CO2, methane, CFCs, aerosols, etc.

    I think that what would help people be more confident in climate models is if, after a decade or two, modelers reran the models with the known values and then compared the results with observations.

    Steve McIntyre made an excellent analysis of this in one of his posts:

    http://climateaudit.org/2008/01/18/hansen-scenarios-a-and-b-revised/

    In conclusion he wrote:

    “As to how Hansen’s model is faring, I need to do some more analysis. But it looks to me like forcings are coming in below even Scenario B projections. So I agree that it’s unfair for Hansen critics to compare Scenario A temperature results to actual outcomes as a test of the model mechanics. On the other hand, Hansen’s supporters have also been far too quick to claim vindication given the hodgepodge of GHG concentration results. If it’s unfair to blame the model for differences between actual and projected if the GHG projections are wrong, then it is equally unfair to credit the model with “success” if it gets a “right” answer using wrong GHG projections. One would really have to re-run the 1988 model with observed GHG concentrations to make an assessment. Given that GISS have changed their models since 1988, GISS would presumably argue that the run is pointless, but the cost of doing the run doesn’t appear to be large and it seems like a reasonable exercise for someone to do. It would be interesting to obtain a listing of the 1988 model to that end.”

    But as I understand it, even if the 1988 code still existed, the evolution of computers and their operating systems makes it almost impossible to do. Also, without being paranoid, how confident could we be that the modelers wouldn’t change the original parameters to help the model fit better with the observations? Such a verification process could place a lot of pressure on the credibility of the modelers, who usually prefer to promote their new model rather than revisit their previous one.

    • A lot of old mainframe systems persist despite predictions to the contrary. I’m betting the 1988 model could be run again.

    • Sylvain, one of the main challenges of verifying climate models on a time scale of 1-2 decades is that natural forcing (solar and volcanic) is unknown plus the decadal ocean cycles are not deterministic and will not be simulated in a way that matches observations unless a very large ensemble is used.

      • This is why they should be rerun after some time has passed, with the actual observations but the same parameters. Of course, some parameters that were used in the forecast, like solar forcing and volcanic eruptions, should be modified to represent what was observed.

      • The other problem is that the models, owing to their inadequacy, are under continual development, with substantial modifications on the time scale of a year. So there isn’t much interest in using a model that is 10 years old.

      • That is mainly why I’m not personally confident in climate models.

        On the one hand, climate modelers claim that their models are adequate to base future decisions on. On the other hand, they require ever more funding so they can get better at forecasting the future.

        Meanwhile we still don’t know their real value since they are never really validated. I mean that their hindcasts can always be tuned to fit the observations. But does this tuning help in any way in forecasting future data?

        That is why I think the code behind published model runs should be archived. I suspect that for each successful hindcast run there are several that are rejected. So the code of the run (or several runs) that are published and then used should be saved as it was at the moment those runs were made. This way it could be run again in the future to see the models’ real capacities. It could also help us understand what tuning they underwent at different times.

        For sure this wouldn’t be popular in the modeling community since, the way things are going at the moment, the forecast part of a run is long forgotten by the time it becomes relevant. Also, they can always claim that disparities with reality stem from the erroneous assumptions made at the time of the run’s creation.

      • By this logic, then, we should not pay attention to the model outputs of today, because in 10 years nobody will be much interested in running them for validation either.

      • But the 10-year-old model code still exists – not overwritten? Then re-running it on a graveyard shift to test its predictions against observed metrics over the last 10 years is a no-brainer

        We geological scientists do that all the time

      • Dr. Curry,

        “So there isn’t much interest in using a model that is 10 years old.”

        This statement of yours has stuck with me since I first read it. I feel this is a very big issue in the confidence in climate models. The scientific and political environment today was driven in great part by the ‘projections’ from climate models a decade ago.

        The message that comes across when past models and projections are ignored because of improvements in newer models is that the projections made a decade ago were not valid. That then tells us that the vast effort and resources spent based upon those projections were based upon invalid projections.

        To establish credibility for climate model projections, past projections must be presented along with new projections. Any differences should be explained. Of course, any improvements to new projections that further validate old projections should be mentioned also. Most of all though, differences between old projections and the real world should be explained.

        While the climate science community may be interested in only the latest and greatest models, the rest of the world cares about your track record for accuracy. Discounting past projections destroys confidence in current projections.

  4. As I see it, climate modelers face an essentially hopeless task. The earth is an unimaginably complex “analog computer”, and to expect programmers to simulate enough of its subsystems and their interactions with enough fidelity to produce a credible output is to expect the impossible. The basic belief is that if you can simulate the subsystems correctly, and simulate their interactions, and get the inputs right, and include all the important inputs and subsystems and interactions, then in the end, your model will simulate the emergent behavior of the real system.

    However, when you simulate a chaotic system, the more complex the model grows, the more “brittle” it becomes. That is, every incorrect initial condition or input or internal interaction sends the output off in an unexpectedly wrong direction. As the complexity increases, it becomes exponentially more difficult and expensive to make each incremental improvement. In the end, you will always have a model that needs “just one last upgrade” to make it right.

    Until there has been a rigorous V&V process, any model output must be treated as a hypothesis, not as evidence of any kind. In my opinion, any climate model’s V&V MUST include allowing the model to run outside its calibration space, (that is, into the future) and then waiting to see what the real world actually does. After all, the modelers are making statements about the climate a century from now–first, let’s see how they do in the near future. A model can only be considered for validation after it has matched the real world reasonably well over a reasonably long period. And ANY change to the model restarts the clock. The temptation can be overwhelming to make a change after finding a problem, and say, “It’s fixed, now.”

    The time for a culture change is long overdue in climate modeling. It should have happened before the “science was settled” and we started using the doomsday predictions of the current crop of GCMs to influence energy and economic policy. I also think it’s naive to think the EPA will provide any sort of impetus to reform the V&V mentality of the climate science community. They have already started writing regulations based on the IPCC AR4 (or whatever version they’re using as their bible). They aren’t interested in finding any flaws in the models.

    • Ed, thank you for your insights. I agree that brittleness is an issue of substantial concern.

    • Excellent comment, Ed! Applying Monte Carlo simulation to a model, with estimated probability ranges on the key input variables, would illustrate your point exactly: the distribution of the model output would come out nearly flat, meaning pretty much equal probability of any outcome. Which means that the model is not useful as a mechanism for making projections/predictions. (A toy sketch at the end of this comment illustrates the idea.)

      Judith says: Leonard Smith has stated that “There are many more ways to be wrong in a 10^6-dimensional space than there are ways to be right.” However, since they make millions of predictions, models will invariably get something right.

      But we will never know which model gets which prediction right, so that characteristic doesn’t help us at all. It seems clear that the problem is so difficult that the real question is whether models can help at all. V&V is, in this context, a secondary question.
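
      Here is the toy sketch mentioned above (mine, not from any published analysis; the “model” and all its input ranges are invented purely for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical uncertain inputs, each drawn from an assumed range
      forcing  = rng.uniform(2.6, 4.1, n)   # W/m^2 (invented range)
      gain     = rng.uniform(0.5, 2.5, n)   # net feedback gain (invented)
      realized = rng.uniform(0.4, 0.9, n)   # fraction realized (invented)

      # Toy response: a no-feedback coefficient times the uncertain factors
      lambda0 = 0.3  # K per W/m^2, illustrative only
      warming = lambda0 * forcing * gain * realized

      print(f"mean {warming.mean():.2f} K, std {warming.std():.2f} K")
      print("5th-95th percentiles:", np.percentile(warming, [5, 95]).round(2))

      With input ranges that wide, the 5th-95th band of the output spans roughly a factor of four: close to the “equal probability of any outcome” situation described above.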
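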

  5. David L. Hagen

    Judith – This is a key issue of great importance as policy makers seek to incorporate GCM’s into regulatory policy.

    There may be systemic tribalism in GCM’s. The IPCC failed to incorporate expertise from the hydrological community; e.g., see Koutsoyiannis et al. (2009):

    The climatological community seems to subscribe generally to the deterministic paradigm, which is reflected in its confidence in the ability of GCMs to foretell the unknown future. In contrast, the research contributions of the hydrological community have been based on more pragmatic statistical and stochastic descriptions of natural processes, which reflect a different paradigm in both understanding and modelling natural processes. . . . It is well known that the HK (Hurst-Kolmogorov) behaviour, mostly viewed as persistence or clustering of similar events in time, is relevant to (and virtually omnipresent in) all hydrological processes (e.g. Montanari et al., 1997; Koutsoyiannis, 2002, 2003; Montanari, 2003). It is less well known that this behaviour exists within atmospheric processes and temperature in particular (e.g. Koscielny-Bunde, et al., 1998; Koutsoyiannis, 2003; Cohn & Lins, 2005 for instrumental temperature records; Koutsoyiannis, 2003; Rybski et al., 2006; Koutsoyiannis & Montanari, 2007 for proxy temperature time series).
    . . . The only allusion to HK behaviour in AR4 appears in the last paragraph of Appendix 3.A (Low-pass filters and linear trends; WG1, Ch. 3) and indicates that the authors of the Appendix had no understanding of long-term persistence (i.e. HK behaviour) or of the substantial literature describing it.

    Climate, hydrology and freshwater: towards an interactive incorporation of hydrological experience into climate research, Demetris Koutsoyiannis, Alberto Montanari, Harry F. Lins & Timothy A. Cohn, Hydrological Sciences Journal – Journal des Sciences Hydrologiques, 54(2), April 2009, 394-405.

    How well do the GCM’s perform when tested? See:
    Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis, A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 2010 (in press). http://www.itia.ntua.gr/en/docinfo/978/

    We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections do not correspond to reality any better.

    See the previous presentation & paper posted:
    Anagnostopoulos, G. G., D. Koutsoyiannis, A. Efstratiadis, A. Christofides, and N. Mamassis, Credibility of climate predictions revisited, European Geosciences Union General Assembly 2009, Geophysical Research Abstracts, Vol. 11, Vienna, 611, European Geosciences Union, 2009.

    • The performance of the models at local scale at 55 stations worldwide (in addition to the 8 stations used in Koutsoyiannis et al., 2008) is poor regarding all statistical indicators at the seasonal, annual and climatic time scales. In most cases the observed variability metrics (standard deviation and Hurst coefficient) are underestimated.
    • The performance of the models (both the TAR and AR4 ones) at a large spatial scale, i.e. the contiguous USA, is even worse.
    • None of the examined models reproduces the over‐year fluctuations of the areal temperature of USA (gradual increase before 1940, falling trend until the early 1970’s, slight upward trend thereafter); most overestimate the annual mean (by up to 4°C) and predict a rise more intense than reality during the later 20th century.
    • On the climatic scale, the model whose results for temperature are closest to reality (PCM‐20C3M) has an efficiency of 0.05, virtually equivalent to an elementary prediction based on the historical mean; its predictive capacity against other indicators (e.g. maximum and minimum monthly temperature) is worse.
    • The predictive capacity of GCMs against the areal precipitation is even poorer (overestimation by about 100 to 300 mm). All efficiency values at all time scales are strongly negative, while correlations vary from negative to slightly positive.
    • Contrary to the common practice of climate modellers and IPCC, here comparisons are made in terms of actual values and not departures from means (“anomalies”). The enormous differences from reality (up to 6°C in minimum temperature and 300 mm in annual precipitation) would have been concealed if departures from mean had been taken.
    Could models, which consistently err by several degrees in the 20th century, be trusted for their future predictions of decadal trends that are much lower than this error?

    The CSIRO’s drought predictions are actually backwards! Stockwell, D. R. B., Critique of Drought Models in the Australian Drought Exceptional Circumstances Report (DECR), Energy & Environment, 21 (5), 425-436, 2010.

    With such poor predictive performance, why should we put much confidence in current GCM’s, let alone bet trillions of dollars and the global economy on their outcome?
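
    For anyone who wants to check the two statistics quoted above against their own series, here is a minimal sketch in Python (mine, not from the papers) of the aggregated-variance estimate of the Hurst coefficient and of the efficiency score, where 0 means no better than predicting the historical mean:

    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
        # For HK behaviour, Var[mean of m values] ~ m^(2H - 2),
        # so the log-log slope of block-mean variance against m gives H.
        x = np.asarray(x, dtype=float)
        logm, logv = [], []
        for m in scales:
            k = len(x) // m
            if k < 8:
                break
            block_means = x[:k * m].reshape(k, m).mean(axis=1)
            logm.append(np.log(m))
            logv.append(np.log(block_means.var()))
        return 1.0 + np.polyfit(logm, logv, 1)[0] / 2.0

    def efficiency(obs, sim):
        # Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than
        # always predicting the observed mean (cf. the 0.05 quoted above)
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

    rng = np.random.default_rng(1)
    print(hurst_aggvar(rng.normal(size=4096)))  # white noise: H near 0.5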

    • Thank you very much for these references; we are definitely looking at the Hurst phenomenon and the hydrological literature, based upon the comments being made at Climate Etc.

    • Michael Larkin

      David,

      An excellent post that helped me put things in better perspective. Much appreciated and many thanks.

  6. David L. Hagen

    Engineers expect rigorous validation. Over the last few decades the CFD community has engaged with NIST in the systematic development of tightly controlled combustion experiments, followed by objective modeling of those configurations and intercomparison of the different codes. E.g.:
    An intercomparison exercise on the capabilities of CFD models to reproduce a large-scale hydrogen deflagration in open atmosphere

    See: Design verification and validation in product lifecycle
    P.G. Maropoulos & D. Ceglarek
    Abstract

    The verification and validation of engineering designs are of primary importance as they directly influence production performance and ultimately define product functionality and customer perception. Research in aspects of verification and validation is widely spread ranging from tools employed during the digital design phase, to methods deployed for prototype verification and validation. This paper reviews the standard definitions of verification and validation in the context of engineering design and progresses to provide a coherent analysis and classification of these activities from preliminary design, to design in the digital domain and the physical verification and validation of products and processes. The scope of the paper includes aspects of system design and demonstrates how complex products are validated in the context of their lifecycle. Industrial requirements are highlighted and research trends and priorities identified.

  7. David L. Hagen

    See Sowell’s post: Chemical Engineer Takes on Global Warming summarizing editorials by Pierre R. Latour. http://sowellslawblog.blogspot.com/2009/02/chemical-engineer-takes-on-global.html

    Pierre R. Latour is “a recognized authority in process automation technology and successful entrepreneur in several process control ventures. Latour began his career in the early 1960s with DuPont and Shell Oil after receiving a PhD in Chemical Engineering at Purdue.”

    Latour writes:

    “Mr. Temple’s (the writer of the other letter to the editor) counter-claim against my comment about measurable, observable, controllable, stable and robust characteristics of the dynamic, multivariable nonlinear atmospheric temperature control system under design by Kyoto Protocols misunderstands my meaning. These mathematical concepts are part of the foundation of control systems engineering recorded in AIChE Journal, ACS I&EC Journal, IEEE Transactions, ASME Transactions and annual JACC conferences since 1960. They provide exact necessary and sufficient conditions for these characteristics for all linear systems and some nonlinear systems. . . . Now I am merely trying to acquaint climate change scientists, physicists, lawyers and politicians promoting such things as Kyoto Protocols that chemical process control system engineering has a useful voice, weak as it is, in climate control engineering. . . . The tenuous link between CO2 greenhouse effects and the Earth’s temperature indicates humanity has no effective manipulated variable to control temperature; the steady-state gain dT/dCO2 is almost zero. If so, the system is uncontrollable. Kyoto will fail no matter what the political consensus may be.”

    Understanding Latour’s editorials should be required for anyone considering “controlling” climate. Sowell briefly translates Latour’s terms. Even with the best of intentions, anthropogenic warming cannot be controlled as numerous essential control parameters and features are missing.

    Faced with such brutal reality, how can we create a “culture of building confidence” in models and methods that inherently are incapable of performing as politically expected?
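
    To make Latour’s “near-zero steady-state gain” point concrete, here is a toy sketch in Python (mine; the numbers are placeholders, not climate physics): a first-order system whose manipulated variable has almost no steady-state authority.

    def settle(gain, u, tau=5.0, dt=0.1, t_end=200.0):
        # Integrate dT/dt = (-T + gain * u) / tau to (near) steady state;
        # the steady-state response is T = gain * u.
        T = 0.0
        for _ in range(int(t_end / dt)):
            T += dt * (-T + gain * u) / tau
        return T

    # If the steady-state gain dT/du is almost zero, even a huge control
    # move u barely shifts T: there is no effective manipulated variable.
    print(settle(gain=1.0, u=1.0))     # responsive system: T settles near 1.0
    print(settle(gain=1e-3, u=100.0))  # near-zero gain: T settles near 0.1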

  8. Dr. Curry: How would V&V have worked ten years ago?

    Climategate and the ongoing skeptic debates aside, I think the biggest problem facing the climate change movement is that there has essentially been a lull in global warming for the past ten years that to my knowledge no models predicted.

    Now I know that weather is not climate and ten years is a short duration — though not that short considering that climate change advocates base much of their argument on 30-year intervals (1910-1940, 1970-2000). Maybe global warming is really starting to take off again this year.

    But at this juncture, ordinary citizens are understandably confused and distrustful. The upward warming curve leveled off and suddenly “global warming” became “climate change” instead, although according to the IPCC the underlying threat is still hotter temperatures.

    Plus it seems that every time we read the news, whatever happens is due to climate change. The temperatures are really hot — climate change. The temperatures are really cold — climate change. Giant squids attacking — climate change. (http://www.express.co.uk/posts/view/196228/Man-eating-giant-squid-devouring-fish-stocks)

    I submit that all this is entirely toxic to public confidence in the climate change movement and its models.

    • Dr Curry
      There are many “levels of confidence” but in the case we are addressing there are six:
      a. Confidence of the software designers in their program.
      b. Confidence of the users in the designers and the program and their own data.
      c. Confidence of the subject matter experts in the designers and users and the program and the data and their own best judgement, and the current “academic environment” in their field of expertise.
      d. Confidence of the bureaucratic ‘experts’ in the designers and their program, in the users and their data, in the experts and their judgement, and the data and the product and the academic environment, and the current “political environment” within and without their ‘buro’.
      e. Confidence of the elected in the designers and their program, in the users and their data, in the experts and their judgement, and the data and the product and the academic environment and the current “political environment” within and without the ‘buro’, and all the other ‘buros’ and departments and advisory boards and committees, and what the press thinks, and what the bankers think, and what investors think, and the UN, and Soros, and the lobbyists, and – oh yea – the Voters back home.
      f. Now things get pretty complicated: the confidence level of the Voters in the elected. This is really way too complicated to even attempt to address; let’s just say it has about as much predictability as the weather.

      PS: Huxley’s ‘bottom line’ says it all.

    • David L. Hagen

      The upward warming curve leveled off and suddenly “global warming” became “climate change” instead, although according to the IPCC the underlying threat is still hotter temperatures.

      Call a spade a spade – Require all IPCC modelers and politicians use “global warming” when calling for reductions in CO2. When models predict COOLING or the global temperature drops, we expect them to advocate the opposite mitigation policy of increasing CO2.

      See Ross McKitrick’s T3 Tax Policy.
      http://rossmckitrick.weebly.com/t3-taxstate-contingent-policy.html

  9. “As climate models become increasingly relevant to policy makers”

    They have probably already passed their peak of relevance. The public will have little patience for academics twiddling screwdrivers inside models while politicians impose personal sacrifices to (ahem!) save the world.

    Your postings acknowledge how far the public relations campaign has run ahead of what the science can support. The V&V issue should have been dealt with long ago, when the “science is settled” meme was being put around.

    “Further, claims are being made that climate models have been falsified by failing to predict specific future events.”

    For example, the predicted tropospheric hot spot. Without observations confirming its formation by today, it is fair to conclude that none of the 20th-century warming was attributable to the proposed physical processes behind that prediction.

    “Airplanes are designed using models that are inadequate in their ability to simulate turbulent flow. ”

    Final designs don’t come in a ream of printer paper from a theoretical computer model. Designs go through an extensive development cycle, including prototyping and rigorous testing and refinement.

    “how should we proceed in building public confidence in [climate models]?”

    How about a sensible regime of rigorous testing and verification BEFORE parading up and down the high street declaring “the end is nigh”? The utility of a contraption is normally demonstrated through performance: by “doing what it says on the tin”.

    “Models are based on physical principles such as conservation of energy, mass and angular momentum.”

    Not so. High climate sensitivity is derived from arbitrary assumptions of amplification by positive feedback. Amplification necessarily involves drawing from an auxiliary energy source (you cannot have more output than the input without adding energy). Conservation of energy is a statement that the climate system is fundamentally a passive system. It directly follows that the climate cannot amplify its forcings beyond the realms of transient (temporary) passive fluctuations.
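
    For reference, the “amplification” under dispute here is conventionally written in the standard textbook feedback form (my summary, not the commenter’s):

    \Delta T = \frac{\lambda_0 \, \Delta F}{1 - f}, \qquad 0 \le f < 1,

    where \lambda_0 is the no-feedback response, \Delta F the applied forcing, and f the net feedback factor; the factor 1/(1-f) is the claimed amplification whose physical admissibility the comment above disputes.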

    “all of the coupled climate models used in the IPCC AR4 reproduce the time series for the 20th century of globally averaged surface temperature anomalies; yet they have different feedbacks and sensitivities and produce markedly different simulations of the 21st century climate.”

    Therefore it’s an ill-posed problem. There isn’t enough data to even start V&V.

    “create a pdf with a robust mean”

    If done properly and reasonably, you’ll get a pdf which is so widely spread that it will contain no useful information. We will have 95% confidence that a particular value lies between all possible outcomes worth talking about. Everything and nothing, as they say.

    “Some models are arguably better than others in some sort of overall sense, but falsification of such complex models is not really a meaningful concept.”

    Then it’s not science.

  10. I’d like to pick up on two aspects.

    1. You talk about ‘comfort.’ My guess is that despite their brave words, modellers do not really feel comfortable. If they did, they would show more than two small (1/8 page) graphs of simulated and observed values for temperature and precipitation anomalies (Figures 9.5a for temperature and 9.18a for precipitation in AR4).

    2. I also think the wording of Knutti that you quote is significant: “Models reproduce observed global trends and patterns in many variables.” As I have commented before, even though their representation of trends (or anomalies) leaves much to be desired, their estimates of absolute values of temperature (as °C) and precipitation (as mm) are often way off.

    • So true! “Comfort”, “Credibility” and “Confidence” are, for science, first of all matters of integrity. It is possible to reassert this gem of virtue as the cornerstone of science, but it will be difficult; so many seem to have lost it or never learned what it really means. It has nothing to do with ‘standing’ beside or in support of those within your field against all comers. It has nothing to do with unflinching support for the IPCC or even the malodorous United Nations, the EPA, or the CO2 lobby and their Carbon Credits Ponzi scheme. The fact that politicians and investors are scrambling toward expensive ‘solutions’ against CO2 levels means the integrity of science has been totally compromised.

      PS: More on the issue of CO2 at this link –
      “Got Wood?”; October 10, 2010; by E.M.Smith
      http://chiefio.wordpress.com/2010/10/10/got-wood/

  11. In learning chemistry, we used lots of models. Electrons spin around the nucleus like planets in a solar system, the octet rule, hybrid orbitals, etc. In physics and quantum mechanics you have the particle in a box. In ecology we used some very basic equations for population growth. You learn right away the usefulness AND uselessness of models. That they only go so far.

    In quantum, I was taught that we couldn’t accurately describe lithium (quantum mechanically speaking) because it had, get this, three variables. Three! In calculus, I was taught that all of the known solvable integrals can fit in a book no larger than a dictionary, if that. I also learned that given enough fudge factors and enough polynomials, I can make an equation fit any smooth trend you can come up with… to a certain point. After that, your magic equation diverges greatly and no longer describes jack diddly.
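
    A throwaway sketch of that last point (hypothetical data; nothing to do with any real model):

    import numpy as np

    # Fit a high-order polynomial to a short "calibration" window, then
    # evaluate it inside and outside that window.
    rng = np.random.default_rng(2)
    x_cal = np.linspace(0.0, 1.0, 15)
    y_cal = np.sin(2 * np.pi * x_cal) + 0.1 * rng.normal(size=x_cal.size)

    poly = np.polynomial.Polynomial.fit(x_cal, y_cal, deg=10)
    print(poly(0.5))   # inside the calibration range: sensible
    print(poly(1.5))   # outside it: diverges, describes jack diddly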

    So what am I getting at? How the hell can anyone have the hubris to think that their climate models can do anything in the area of long range prediction? There are so many unknowns and “unknown unknowns” out there, it oughta be humbling. Sure, you could make something fit…for a short while. But go back enough and forward enough and you’ll see that whatever you came up with only looked good because you hacked it to do so.

    Clouds, cosmic rays, soil mycophytes, animal flatulence, ocean currents and oscillations, solar stuff… etc, etc, etc… all this stuff has so far been out of reach of our ken. And many, if not all, affect climate. Has anyone even looked at deep-ocean underwater vents and how much they affect things? We may know a lot more about some things than before… but is it anywhere nearly enough?

    How in God’s name can we get our PNAS consensual panties in a bunch over something whose unknown variables probably outnumber the known variables? And how well do we know our variables? Anyone got a grip on UHI? That airport bothering you much? Are we still estimating Arctic sea temps based on land stations hundreds of miles away? How’s that darned stubborn AMOC doin’ fer ya? If only those darned butterflies would quit flappin’ their wings!

    And after all this… we still don’t know whether we oughta stock up on down jackets or sunscreen. But what I do know, thanks to corrupt climate whitecoats and corrupt politicians (yeah, I know, redundant), is that what I breathe out about every 5 seconds is a pollutant.

    Clueless hubris.

    • PolyisTCOandbanned

      Lithium has 4 bodies (the nucleus and the 3 electrons). Helium is already analytically unsolvable, with two electrons and a nucleus.

      • lol… you’re right, of course. Thanks. I meant Li+. And thanks for still making my point. We studied ad nauseam the energy states of a single electron, of course, and I do remember the prof talking about the Li ion being unsolvable due to 3 variables. I remember laughing at how inadequate I felt because we were all struggling with the simplest of systems, and the real world was just an approximation. We never did talk about He, just Li for some reason. Oh well… it was decades ago…

    • Hubris might also describe the act of dismissing an entire field of research with broad generalizations extrapolated from a couple classes – in completely different fields – you took in college.

      http://xkcd.com/793/

  12. Ed Fix describes most of the issues I have with computer model predictions far more eloquently than I have time for.

    There is the assumption that faster, more powerful computers allowing more parameters, plus more money, will somehow give ‘better’ models, when in reality this gives the scientists the illusion that they are solving the impossible problem Ed describes. In many fields of science there is much discussion about computer models, with a tendency for the scientists to defend them against reality.

    One of the funniest was a computer model of polar bear populations which quite happily stated, based on the behaviours programmed in, that EVEN IF the numbers of polar bears are seen to be increasing (assuming the behaviour does not change), a ‘tipping point’ will be reached in the populations, thus proving they are at risk. This totally ignores that polar bears have lived through historic warm periods and are a large predator that ADAPTS to its environment, i.e. it follows the food supply. If the seals aren’t able to come out on the ice, they are much more vulnerable on land and beaches, and the polar bears will not go hungry.

    BBC: Polar Bears face ‘tipping point’ due to climate change
    http://news.bbc.co.uk/go/pr/fr/-/earth/hi/earth_news/newsid_8700000/8700472.stm

    http://wattsupwiththat.com/2010/05/25/modeling-the-polar-bear-tipping-point/

    James Lovelock had some relevant thoughts on models earlier this year..

    Guardian, March 29, 2010: James Lovelock on the value of sceptics and why Copenhagen was doomed.
    http://www.guardian.co.uk/environment/blog/2010/mar/29/james-lovelock

    James Lovelock: “We tend to now get carried away by our giant computer models. But they’re not complete models. They’re based more or less entirely on geophysics. They don’t take into account the climate of the oceans to any great extent, or the responses of the living stuff on the planet. So I don’t see how they can accurately predict the climate. It’s not the computational power that we lack today, but the ability to take what we know and convert it into a form the computers will understand. I think we’ve got too high an opinion of ourselves. We’re not that bright an animal. We stumble along very nicely and it’s amazing what we do do sometimes, but we tend to be too hubristic to notice the limitations. If you make a model, after a while you get suckered into it. You begin to forget that it’s a model and think of it as the real world. You really start to believe it.”

    James Lovelock: “The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they’re scared stiff of the fact that they don’t really know what the clouds and the aerosols are doing. They could be absolutely running the show. We haven’t got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn’t got the physics right yet and it would be five years before they do. So why on earth are the politicians spending a fortune of our money when we can least afford it on doing things to prevent events 50 years from now? They’ve employed scientists to tell them what they want to hear.”

    • James Lovelock: “The climate centres around the world, which are the equivalent of the pathology lab of a hospital, have reported the Earth’s physical condition, and the climate specialists see it as seriously ill, and soon to pass into a morbid fever that may last as long as 100,000 years. I have to tell you, as members of the Earth’s family and an intimate part of it, that you and especially civilisation are in grave danger.”

      James Lovelock: “It was ill luck that we started polluting at a time when the sun is too hot for comfort. We have given Gaia a fever and soon her condition will worsen to a state like a coma. ”

      An “independent” scientist who likes his bread buttered on both sides, particularly when he has another book coming out.

  13. Most climate models take into account reconstructions of past climate trends. Some of these rely on carbon-14 and, more recently, beryllium-10 (10Be) dating.
    The data obtained are used in many studies and accorded unquestionable veracity.
    Here is a particular and very important example of misleading results from such dating.
    The renowned solar scientist Dr. K.G. McCracken of the Institute for Physical Science and Technology, University of Maryland, published a paper in 2007:
    Changes in the cosmic ray and heliomagnetic components of space climate, 1428–2005, including the variable occurrence of solar energetic particle events
    McCracken 2007 paper
    The major result of McCracken’s investigation based on 10Be dating is the estimated annual average heliospheric magnetic field strength near Earth, 1428–2005, based on the inter-calibrated cosmic ray record, as shown in Fig. 2 on p. 1073 (4 of 8).
    Initially, I compared his results to the CET (Central England temperature) anomaly and got a rather surprising correlation all the way up to 1950:
    CET-McC
    According to prevailing science the two variables should not be strongly correlated, and that is confirmed by the post-1950 data, which are based on space measurements. However, the heliospheric magnetic field strength is closely correlated with the sunspot count number.
    Since the CETs are also correlated with another indicator (the North Atlantic Precursor – NAP, on which I am currently working), it is of some interest that the McCracken data (which in the final analysis are only an inverted 10Be record, adjusted for geomagnetic dipole variation) contain a strong component of the NAP, as shown in my calculations.
    If this component is taken out, with a couple of dating uncertainties ‘corrected’, then we find that the McCracken data make perfectly good sense, where originally they did not.
    SSN-McC
    The consequence is serious: radioactive isotope dating calculations need reassessment.

    • The ‘correction’ of McCracken’s 10Be flux is not based on correct physics, is ad hoc, and does not ‘make sense’.

      • It is not only correct, but it is quoted and used by all scientists dealing with 14C and 10Be, including yourself.

      • The original data are used. Your dubious monkeying with them is not.

      • With respect, Sir, you are totally and completely incorrect and out of order; making hasty assumptions about my work is bound to get you into a cul-de-sac. Since you have nothing worthwhile to contribute, I shall not offer a further reply.
        For the benefit of any other reader: the data I used (for the NAP) are probably the most reliable for any physical variable going back to 1600 and before. Not a single proxy, no estimate, just a solid verifiable record. Btw, the 10Be is not the goal; it is the CET resolution that I am after, but I needed an unrelated confirmation, and McCracken obligingly provided it.

      • If 10Be is not the goal, then don’t distort that record by wishful thinking.

  14. Tomas Milanovic

    Judith

    You wrote explicitly or implicitly in several places that the climate models are not deterministic.
    This might be misleading and certainly deserves clarification.
    Both weather and climate models are based on an extremely simple principle.
    1)
    Establish a spatial 3D grid.
    The overall size and the resolution depend only on the power of the computer that will be used to run the model. So it is the power of the computer which is the constraint and not the intrinsic physics of the system studied. For example, as the volume studied for a weather forecast is small, the resolution will be high; the volume studied by a climate model is huge, so the resolution will be low. This is important because at this first and absolutely fundamental conception stage you don’t ask any physical questions. The question you ask is:
    “Given the computer I have, and given that I want an answer in less than 1 month, from which it follows that I can do X floating point operations/sec, how many grid points can I afford?”

    2)
    Having the spatial resolution, select what kind of equation I can write at every grid point. The low resolution already excludes Navier-Stokes as a candidate for the Climate, while some adapted/simplified avatar of N-S might (perhaps!) be considered for the Weather. This is an approach by exclusion à la Sherlock Holmes – once I have excluded what is not possible, I have to work with what is left.

    3)
    For the Climate, and largely for the Weather too, truly continuous equations are by definition excluded. So what is left are conservation laws and state equations – or better said, a discrete version of conservation laws and state equations, because space has been cut into N (N an integer) cells.
    Subgrid parametrization is just a cursory acknowledgment that most coefficients of the equations are not constant but depend on processes happening at smaller scales than those the computer allows.

    4)
    So what you will write at every grid point are equations translating conservation laws. But, and this is my point, as the whole modelling purpose is to solve these equations at t, go to t+dt, and solve the new equations again, this is a strictly deterministic process. The results may depend on tons of parameters – the computer speed and numerical accuracy, the initial conditions, the subgrid parametrization, the adjustment routines (mass and energy are not necessarily conserved between t and t+dt because of numerical errors), etc.
    But it is clearly deterministic, and every climate model existing today is deterministic (a toy sketch below makes this concrete).
    On top of that, Weather models are known to exhibit deterministic chaos, while it is not clear what the Climate models do.
    On one hand we have Schmidt, who wrote that they do; on the other hand we have other people who think that they don’t.
    I think that Schmidt has the right answer, even if for the wrong reasons, and therefore the question of climate ergodicity is for me the single most important question about the usefulness of the climate models.
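
    To make the determinism point concrete, here is a toy sketch (mine, not any real dynamical core): a discrete conservation law stepped from t to t+dt on a small periodic grid, which reproduces its trajectory exactly on every rerun.

    import numpy as np

    def step(q, c=0.4):
        # First-order upwind update for the conservation law
        # dq/dt + u*dq/dx = 0 on a periodic grid (c = u*dt/dx)
        return q - c * (q - np.roll(q, 1))

    q = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
    for _ in range(1000):
        q = step(q)
    print(q[:4])  # rerunning the script reproduces these numbers exactly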

    If one wanted a non-deterministic weather or climate model, then the only possibility would be to generalize D. Koutsoyiannis’ approach:
    consider that the WHOLE system is purely stochastic and describe it by PDFs.
    If one had to choose this paradigm – and I note that so far, to my knowledge, not a single person has – then the next necessary question would be how these PDFs can be established and how they vary with space and time.
    This approach would be closely analogous to quantum mechanics.
    One would have to look for climatic “Schrödinger equation(s)” governing some climatic wave function(s) and yielding the PDFs as well as their spatial and temporal behaviours.
    This would indeed be a non-deterministic model, and I am not saying that it is impossible in principle.
    But it is clear that nobody is going this way today.

    • Extremely clear question: why should we accept anything less than full transparency and the highest standards of integrity? That has been sadly and dramatically lacking, so far, in regard to climate science.
      In Mann’s bizarre op-ed piece, about the only thing he was honest about was his name.

  15. David L. Hagen

    How to build confidence?
    Require adherence to all applicable principles of scientific forecasting.
    Impose the discipline of only allowing AR5 to publish results of models that objectively meet ALL the Principles of Scientific Forecasting, as verified by an independent panel (excluding only principles internationally recognized by ALL countries as inapplicable).

    Published audits show current models fail most of these scientific forecasting principles. See:
    Public Policy Forecasting Special Interest Group
    Public Policy on Global Warming
    http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=26&Itemid=129/index.html

    Global Warming: Forecasts by Scientists versus Scientific Forecasts, Kesten C. Green and J. Scott Armstrong, Energy & Environment, Vol. 18, No. 7-8, 2007. http://www.forecastingprinciples.com/files/WarmAudit31.pdf

    Polar bear population forecasts: A public-policy forecasting audit, J. Scott Armstrong, Kesten C. Green and Willie Soon, Interfaces, Vol. 38, No. 5, September–October 2008, pp. 382–405
    http://kestencgreen.com/polarbears.pdf

    The Forecasting Problem
    To determine the best policies to implement now to deal with the social or physical environment of the future, a policy maker should obtain forecasts and prediction intervals for each of the following:
    1. What will the physical or social environment of interest be like in the future in the absence of any policy change?
    2. If reliable forecasts can be obtained and the forecasts are for substantive changes, then it would be necessary to forecast the effects of the changes on the health of living things and on the health and wealth of humans.
    3. If reliable forecasts of the effects of changed future environment on the health of living things and on the health and wealth of humans can be obtained and the forecasts are for substantial harmful effects, then it would be necessary to forecast the costs and benefits of alternative policy proposals. For a proper assessment, costs and benefits must be comprehensive.
    4. If reliable forecasts of the costs and benefits of alternative policy proposals can be obtained and at least one proposal is predicted to lead to net benefits, then it would be necessary to forecast whether the policy changes can be implemented successfully.
    Guidelines
    The guidelines for the SIG are the same as those that apply to the forecastingprinciples.com site as a whole. In addition, all posted peer review must include the authors’ names, position, email, and any relationship that might be construed as being of potential bias.

    See articles on Global Warming audits
    http://www.forecastingprinciples.com/index.php?option=com_content&task=view&id=78&Itemid=130
    Without audits demonstrating that climate models satisfy the principles of scientific forecasting, how can We the People have ANY confidence in their projections?

    We the People are funding climate modeling. We are being asked to risk national and global economies and to spend trillions of dollars to “mitigate” the “risks”. Why should we not demand that the IPCC publish ONLY projections that meet ALL applicable principles of scientific forecasting? We will hear squeals from those at the public trough, but why should we demand any less?

  16. David L. Hagen

    Confidence can be built by publicly addressing each of the issues raised in:

    An Open Letter to Dr. Michael Mann
    Posted on October 11, 2010 by Willis Eschenbach
    http://wattsupwiththat.com/2010/10/11/an-open-letter-to-dr-michael-mann/#more-26235

    • To borrow a couple quotes from another blog –

      “Jeff Houch, writing on aat.net news on 12/28/09, reports Richard Fisher, the director of NASA’s Heliophysics Division, to have said: ‘We thought we knew everything about everything, and it turned out that there were unknown unknowns.’ Houch writes, ‘In other words: We don’t know what we don’t know until we know that we don’t know it.’”
      http://retreadresources.com/blog/?p=466
      “Commonality Between Scientific Disciplines”

      Note – Dr Curry
      O/T – Re: Resignation of Harold (Hal) Lewis from APS
      Perhaps a separate page on this. More than ‘just’ a tiff over climate.

  17. What is interesting to me is that here, at a forum where each side can speak freely, so few are showing up to defend AGW or even to define the risks that AGW is claimed to present.

    • hunter, the reason is obvious. The scientific case for CAGW is somewhere between negligible and non-existent. The vast majority of scientists know this. However, if any of them who still have careers and need to earn money say so, it is goodbye career, money, etc.

      So if you provide a forum with a level playing field, as Judith has, the supporters of AGW cannot compete with those of us who understand the real science. So there are few defenders of AGW on any forum where the playing field is level.

      • The scientific case for CAGW is somewhere between negligible and non-existent. The vast majority of scientists know this.

        Jim: I’m not a scientist but I doubt this. I assume these scientists are simply part of the environmentalist groupthink (another word for consensus) that has been going on for decades: the belief that humanity is headed for some environmental catastrophe.

        As Isaac Asimov said after Paul Ehrlich lost that famous bet to Julian Simon, who wagered that the prices of resources would drop with time in spite of a rising population:

        Naturally, I was all on the side of the pessimist and judge my surprise when it turned out he had lost the bet; that the prices of the metals had indeed fallen; that grain was cheaper; that oil…was cheaper; and so on.

        I was thunderstruck. Was it possible, I thought, that something that seemed so obvious to me – that a steadily rising population is deadly – can be wrong?

        Yes, it could be. I am frequently wrong.

        I give Asimov credit for admitting he was wrong. Most people, including scientists, aren’t good at that.

      • huxley writes: “Jim: I’m not a scientist but I doubt this. I assume these scientists are simply part of the environmentalist groupthink (another word for consensus) that has been going on for decades: the belief that humanity is headed for some environmental catastrophe.”

        I would not argue strongly, but my instinct tells me you may be wrong. I am a scientist, and I have worked with scientists from many disciplines all my career. The one thing that describes the scientists I know personally is that they never associate themselves with “groupthink”. It is completely foreign to the vast majority of scientists. I still think one needs to follow the money.

      • I’m not one who believes skeptics are ‘funded by Big Oil,’ which seems cartoonishly reductive to me. However, if it really all is corrupt and money-driven, doesn’t it seem like there’d be more than adequate money out there to fund research that would cast doubt on the AGW hypothesis?

        Why hasn’t this happened, do you suppose?

      • PDA writes “Why hasn’t this happened, do you suppose?”

        I suggest you ask Dr. Tim Ball.

      • Jim, you’ll forgive me, but I prefer a more direct style of interaction. I imagine I am supposed to respond with something like “Who is this Dr. Tim Ball of whom you speak, Jim?” but – especially as you haven’t gone out of your way to pique my interest – I think I’ll decline.

        It’s perfectly fine if you don’t want to directly address my question, but you could have simply ignored it in that case, or at least responded a bit more artfully.

      • Let me give you a hand:
        “Since I obtained my doctorate in climatology from the University of London, Queen Mary College, England my career has spanned two climate cycles. Temperatures declined from 1940 to 1980 and in the early 1970’s global cooling became the consensus. This proves that consensus is not a scientific fact. By the 1990’s temperatures appeared to have reversed and Global Warming became the consensus. It appears I’ll witness another cycle before retiring, as the major mechanisms and the global temperature trends now indicate a cooling.

        No doubt passive acceptance yields less stress, fewer personal attacks and makes career progress easier. What I have experienced in my personal life during the last years makes me understand why most people choose not to speak out; job security and fear of reprisals. Even in University, where free speech and challenge to prevailing wisdoms are supposedly encouraged, academics remain silent.

        I once received a three page letter that my lawyer defined as libellous, from an academic colleague, saying I had no right to say what I was saying, especially in public lectures. Sadly, my experience is that universities are the most dogmatic and oppressive places in our society. This becomes progressively worse as they receive more and more funding from governments that demand a particular viewpoint.

        In another instance, I was accused by Canadian environmentalist David Suzuki of being paid by oil companies. That is a lie. Apparently he thinks if the fossil fuel companies pay you have an agenda. So if Greenpeace, Sierra Club or governments pay there is no agenda and only truth and enlightenment?

        Personal attacks are difficult and shouldn’t occur in a debate in a civilized society. I can only consider them from what they imply. They usually indicate a person or group is losing the debate. In this case, they also indicate how political the entire Global Warming debate has become. Both underline the lack of or even contradictory nature of the evidence. ”

        http://www.canadafreepress.com/2007/global-warming020507.htm

      • PDA,

        The kerfuffle surrounding Tim Ball and Dan Johnson, dating from 2006, is documented here:

        http://www.desmogblog.com/tim-ball-vs-dan-johnson-lawsuit-documents

        Interestingly, Dr. Ball’s testimony in the Canada Free Press dates from 2007.

        Canada Free Press: Because without America, there is no free world.

      • Jim, I could infer what you take from Dr. Ball’s comments, but I’d be guessing. As I said, I find direct communication far more interesting.

        Could you either reply directly or just indicate that you’re not interested in doing so? Thanks.

      • Dr. Ball’s comments don’t require further interpretation. I can see why you wouldn’t want to comment on them. That is OK.

      • I didn’t comment on them because I’d asked you a direct question. I could infer from your reply that you believe Dr. Ball’s case shows there is a sort of ‘groupthink’ culture in climate science. However, I don’t know if that’s what you take from this, and I don’t understand why you find it necessary to play this guessing game rather than make points that can be discussed with some specificity.

        I just find it more interesting – as I’ve said repeatedly – to debate an actual individual, rather than that individual’s copy buffer. However, you’ve made it clear in three successive comments that this really isn’t something you have any desire to engage in.

      • You were originally conversing with Jim Cripwell, who I am not. I just filled in the material to which he referred. I do take from Dr. Ball’s writings that he has experienced adverse effects from academia due to his public stance on global warming. It seems kind of redundant to restate it.

      • Jim: Perhaps the gray area here is between AGW and CAGW.

        Would you say that the IPCC espouses CAGW? The IPCC says AGW will be more expensive to fix than to prevent, that it will cause hardships, malnutrition and species loss, but it avoids claims of irreversible climate change or widespread famines or catastrophe.

        Of course, a few individual scientists like James Lovelock and James Hansen paint much darker pictures.

        The one thing that describes the scientists I know personally is that they never associate themselves with “groupthink”.

        That also describes the hippies, punks, and liberals that I know, but “it don’t make it so.” I’m not the first to point out that people who count themselves as strong individualists are often moving in lockstep with some group or another.

        Scientists poll as liberal vs conservative by 5:1 or more. I don’t see any scientific reason that this should be so. Therefore I assume that liberality is part of scientific culture, i.e. groupthink.

        I’m not saying that’s a bad thing either. It’s just human to fit in with the people around oneself.

      • “Scientists poll as liberal vs conservative by 5:1 or more. I don’t see any scientific reason that this should be so. Therefore I assume that liberality is part of scientific culture, i.e. groupthink.”
        More likely, since most publishing scientists work at universities and colleges, which are notoriously left-leaning, it is likely that conservative scientists have as hard a time getting hired as do conservative economists, historians, English profs, etc.

        “Jim”’s comment above (at 10:23) sums it up nicely: “Sadly, my experience is that universities are the most dogmatic and oppressive places in our society.”

      • huxley: “Scientists poll as liberal vs conservative by 5:1 or more. I don’t see any scientific reason that this should be so. Therefore I assume that liberality is part of scientific culture, i.e. groupthink. ”

        This bogus stat reminds me of the old adage that “93.7% of all statistics are invented on the spot”. Telltale signs of BS: scientists? Pray tell, which ones? Where? What agency conducted this poll? What was the question: “Are you liberal or conservative?” Was it about social liberalism and conservatism, or fiscal? How were respondents to answer if they were, say, “liberal” on abortion but “conservative” on homosexuality (or straddled any one of dozens of “polar” issues)? Why do you say “5:1 or more”? Are you not sure which one?

    • The exchange regarding Dr. Ball is interesting.
      As to groupthink, my observation is that history shows – from the stunt the Skeptical Inquirer pulled showing scientists falling for obvious parlor tricks, to eugenics, to Piltdown’s long reign, to Lysenkoism, to Eisenhower’s farewell warning – that scientists are not immune to any of the foibles of the larger world.
      Yet still not one AGW proponent has listed the sort of things that would have to occur to make ‘global climate disruption’ a reality.
      If we are going to remake our economies, spend trillions, and set up whole new international governance systems, surely there should be an objective, verifiable list of reasons why?
      Surely we would not be advised by well-meaning people to make the radical changes the AGW community demands based on a vague, undefined list of risks?

      • Yet still not one AGW proponent has listed the sort of things that would have to occur to make ‘global climate disruption’ a reality.

        Not clear what you’re asking for here. Dr. Tobis listed some of the consequences that we may already be seeing in his comment over on the “dangerous” thread, and there’s been pretty extensive coverage on the projected impacts of climate disruption. I’m not talking about the scare stories in the media, but assessments from people in relevant fields.

        Maybe you could help me understand what you’re looking for and not finding.

        I’d also appreciate some information on who’s advocating that we should “remake our economies, spend trillions, set up whole new international governance systems.” The serious proposals under discussion so far have been pretty far short of that.

      • PDA,
        Hansen is talking about turning Earth into Venus and writes books called ‘Storms of My Grandchildren’.
        Gore is selling a lot of books and DVDs about much of the US going under water.
        The founder of the Weather Channel is pitching catastrophic weather.
        These are well discussed in the public square, so I wonder why you are putting the onus on me to provide your side’s arguments?
        As to who is advocating radical changes in the economies of the world, once again you seem to be begging me to do your work.
        Have you added up the direct costs of building windmills and solar panels?
        Are you suggesting that the President, when he speaks of destroying the coal industry, for instance, is not serious?
        Michael Tobis’s post, with no disrespect intended, is not really what I am looking for.
        This is the problem in dealing with AGW believers, in my experience: direct questions simply get deflected and dissembled.
        My bet is you will either not respond, or will decide that what I am offering is not what you meant, but you will still decline to offer an answer.

      • These are well discussed in the public square, so I wonder why you are putting the onus on me to provide your side’s arguments?

        I’m not sure how you got that impression. I answered your question as best I could, and specifically asked if you could clarify your question, noting that I wasn’t clear what you were asking for. I gave you two links, not just the one to MT’s comment; is the other also “not really what [you] are looking for?” If not, can you give me any clue as to what it is you are looking for?

        Have you added up the direct costs of building windmills and solar panels?

        No, but no one is seriously suggesting that wind and solar can replace more than a fraction of the energy generation in the US. If you wish to assert otherwise, I am afraid the onus is on you to show who, when, and in what context.

        The White House’s policy page on energy talks mostly about efficiency and conservation, which one would hope would be values more or less equally shared.

        Are you suggesting that the President, when he speaks of destroying the coal industry, for instance, is not serious?

        Interesting, I’d never heard of that before, and I had to do a fair amount of Googling to find out what you were referring to.

        If the quote you are referring to is

        So if somebody wants to build a coal-powered plant, they can; it’s just that it will bankrupt them because they’re going to be charged a huge sum for all that greenhouse gas that’s being emitted.

        I think you’ll have to acknowledge that this is a little different than a threat to “destroy” the entire coal industry. Obama was talking about cap-and-trade, and the Waxman-Markey bill setting up the emissions trading scheme was hailed by the hardly-left-wing Wall Street Journal as a giveaway to the coal industry.

        I have answered, to the best of my ability, all the “direct questions” you have posed so far. I have provided links showing the sources for the answers I’ve given. If you feel these responses were “deflected and dissembled”, I hope you will explain why they fell short.

        I am not a “believer” in anything. I don’t try to debate ideology, but the facts as best as I can ascertain them.

  18. Roger Pielke Sr has a relevant post entitled “When is a model a good model?”
    http://pielkeclimatesci.wordpress.com/2010/10/11/when-is-a-model-a-good-model/

  19. I just spotted an interesting essay by Kevin Trenberth entitled “More knowledge, less certainty.”
    http://www.nature.com/climate/2010/1002/full/climate.2010.06.html

    • David L. Hagen

      Interesting discussion by Trenberth.

      Including these elements will make the models into more realistic simulations of the climate system, but it will also introduce uncertainties.

      “Introduce”?
      “Introduce” suggests that previous estimates of the “uncertainties” have strongly underestimated the “Type B” (bias) uncertainties.
      See NIST Evaluating uncertainty components: Type B
      http://pml.nist.gov/cuu/Uncertainty/typeb.html

      He shows “Ten-year mean global surface temperatures from observations (red) and from three independent hindcasting studies”.
      http://www.nature.com/climate/2010/1002/fig_tab/climate.2010.06_F1.html

      I find it fascinating that the observed temperature is outside the statistical error bounds of all three models. Hopefully, the AR5 authors will recognize that the uncertainties are far higher than previously shown and extend the uncertainty error bars to include ALL Type B errors.

    • I tend to interpret Trenberth’s “prediction” that the uncertainty in AR5’s climate predictions and projections will be much greater than in previous IPCC reports as a tacit admission that AR4 vastly overstates the certainty of the projections reported therein. Of course, many people have long realized this. Curiously, this time around he doesn’t mention that the models not only disagree with each other but also don’t reproduce the climate observed today.

    • From Trenberth’s essay as linked by curryja above:

      “It is essential to take on the challenge of decadal prediction. Confronting the model results with real-world observations in new ways will eventually lead to their improvement and to the advancement of climate science.”

      A motherhood statement, of course, but still somewhat encouraging. The results of such confrontations then need to be published in lucid detail for public consumption, not written in impenetrable academic prose and hidden behind paywalls.

  20. Re Hal Lewis, I don’t know him at all, not sure what to make of his statement. I did spot an interesting (if dated) interview
    http://www.aip.org/history/ohilist/4742.html

    • Dyson would show up in the morning with the solution. Very annoying.

      Uh, that’s plagiarized, or paraphrased, or something.
      ===============

    • Phillip Bratby

      I met him in the 70s. An outstanding physicist and a communicator in the tradition of Richard P Feynman.

  21. David L. Hagen

    PDO Based Climate Models
    Will scientific models be included in AR5 if they do NOT predict global warming? Or will “peer review”/gatekeeping prevent the public from evaluating other paradigms?

    E.g., See Don Easterbrook’s temperature projections based on PDO oscillations, especially Figure 42.
    http://myweb.wwu.edu/dbunny/research/global/easterbrook_climate-cycle-evidence.pdf

  22. My metaphor from DotEarth’s AGU thread 32 months ago: The climate modelers are like model train hobbyists trying to keep their trains on circular tracks on the ceiling.
    =============

  23. Explain something to a simple electrical engineer, Judith. What is the use of a model where all of the physical parameters are not known? I would hope you would admit that, at the very least, “Cloud Feedback” is currently an unknown. I simply cannot imagine any electrical engineer designing systems using “possibly, maybe, perhaps,” etc. That’s why you are safe turning on the computer you use.
    By the way, you may not know Hal Lewis, but he has a resume far better than many involved in the climate fields, and the guts to stand up.

    • Pete, the issue is the complexity of the system being modeled. An electrical engineer develops a model to control an engineered system. Climate models are used to try to understand how the complex climate system works (a scientific endeavor). Climate models are not fit for the purpose of “controlling” climate, which is unfortunately what some try to argue.

  24. The primary drivers of the global circulation are not defined in numerical models, and the secondary effects they produce are not understood well enough to be predictable yet. In weather forecasts the accuracy drops off in 7 to 10 days and is totally out of phase with the primary drivers of the weather by 14 days.

    As long as “the team” avoid understanding the connections between the primary drivers, their secondary effects (which can only be predicted by incorporating the primary drivers over time), and their interactions with the initial rest conditions of the tertiary effects that are currently just loaded, crunched, and stepped forward in time by the numerical modeling programs, there will be little progress in developing forecasts with a usable shelf life of better than a month.

    This is why little faith is placed in CAGW forecasts. Anyone who knows anything about how the weather really works understands that the real drivers are not yet understood well enough to be used in models, and that the background patterns of the seasonal, annual, and decadal trends that determine how the weather works are not used in weather forecasting in any viable, active way. Why, then, should ANY confidence be placed in CAGW long-range, unverifiable modeled forecasts?

    Having raised all of these questions, I am responsible for giving reasons for my opinion; it would not be in good faith for me not to.

    There are short-term solar cycles tied to the 27-day rotation period, driven by the polarity shifts in the magnetic flux of the solar wind. The moon has a north/south declinational component as part of its set of orbital parameters, which affects the south/north movement of the center of mass of the Earth acting from its fulcrum point around the barycenter, in all three orthographic dimensions, as well as length of day, acceleration/deceleration, and perigee/apogee changes.

    These modulations are coupled into the atmosphere and oceans as tidal effects that are well understood and predictable for the oceans, and are just as regular in their effects on the atmosphere. However, the study of these drivers of large-scale waves in the atmosphere, and of the periodic oscillations they induce at the ocean/atmosphere boundary, has not been given enough investigative focus, in favor of funding dynamic atmospheric forecast models since the 1950s.

    What has happened is that the base of knowledge that should be there to compute the longer cycles of repeating influence driving the global circulation’s dynamics has been left behind. It is only now that we realize that even a nice new Corvette is not going to work well 600 miles from the nearest gravel road up in the Himalayas; the basic infrastructure has to be there to support a heavy trucking industry, otherwise we are just backpacking through the wilderness.

    I have bothered to put together a basic understanding of the lunar tidal forces and their effects, in an analog method that takes advantage of the repeating composite patterns of the global circulation, patterns that should form the basic underlying premise of the forecast models currently in use. This could bring into the present the package “the team” needs to fix their problems with long-range forecasting of both weather and climate.

    Simply using the analog patterns of how these drivers of the weather repeat, in an interacting, interlaced method, yields a long-range forecast with greater accuracy than the best models manage past 7 days.

    Skeptical? I was when I started, 25 years ago, funded only by part of my own hourly wage, just because it needed to be done. I offer it here as a contribution to the overall increased understanding of how the whole solar system works.
    http://www.aerology.com/national.aspx
    The basic unpolished forecasts can be viewed for the past 32 months of the 72-month set of daily forecast map patterns (generated and posted to the internet in November and December of 2007), for verification that the process is as I say it is and works. It is my hope that this ongoing work, my lifetime’s contribution, can be used for the betterment and advancement of all mankind.

    There is nothing more wasted than a good education that produces nothing of worth, unless it is a good idea whose time has come being overlooked because it was someone else’s.
    regards,
    Richard Holle

    • @Richard Holle

      “In weather forecasts the accuracy drops off in 7 to 10 days and is totally out of phase with the primary drivers of the weather by 14 days.”

      Since I’ve had access to the Internet (about 15 years now), and especially in the last 4-5 years, when one can download 7-day weather forecasts directly to one’s phone from almost anywhere, I’ve been entertained by these 7-day casts.

      Keeping rough weekly tabs on them for quite some years now, I completely disagree that their reliability does not drop off until 7-10 days. A cast 7 days out has such a wide spread (temperature, rain/no rain, etc.) that it is absolutely useless. They are useful, to a point, out to 3-4 days at most. The 7-day-out cast is deliberately spread wide because there is no reliability.

      • Given the vast array of real-time weather data available to the MET, it beggars belief that their local/regional forecasts can be so far out.

        Yet their zillion-pound computer can tell me what summer will be like, in my village, in 50 years’ time.

        Someone really should have stopped this nonsense long before now.

  25. Steve Fitzpatrick

    Judith,

    I expect that improvements in model ‘correctness’ will depend on the continued collection of climate data which (at least apparently) conflict with the models. For example, for all the brouhaha about discrepancies between measured atmospheric temperature profiles under increased radiative forcing and modeled profiles (the tropospheric ‘hot spot’), the discrepancy does not yet appear sufficient to cause serious doubt among most climate scientists, and most certainly not among modelers. Similarly, the vast majority of models predict substantially more surface warming than has been observed.

    If the models all (or nearly all) suffer from the same deficiencies, and I personally suspect they do, then it seems to me that only more data will lead to better models. Model to model comparisons, while perhaps useful in identifying certain types of problems, are never going to find errors of concept or implementation which are shared by many models. As I have pointed out before, it seems to me that a fair evaluation of climate models is impossible when there remains vast uncertainty in aerosol forcing (direct and indirect), and substantial uncertainty in cloud effects.

  26. Steve, I agree that we need to do a better job of challenging the models with data and interpreting the results. Actually, there is a ton of data out there (especially satellite data since the 1980s); the challenge is getting it into a form that is useful for evaluating the models. Much more work along these lines is needed.

    • Judith – given that the data you refer to were being collected contemporaneously with their model development, would you not have expected the modellers to have anticipated this and harmonised their work with the instrumentation against which it would eventually be evaluated? Perhaps I have missed something?

      • Tom, the challenge is the huge amount of data from satellites. Satellite data has its own challenges in interpretation and calibration, and there are multiple data sets for each variable, associated with the different groups working on them. One of the strengths of the NASA GISS modeling group is that they work closely with people working on the satellite data sets. ECMWF (the weather and seasonal forecasting group in Europe) also works very well with the people who collect and produce data sets. But overall there is a relative disconnect. The other problem is a mathematical one: how do you actually evaluate, against observations, a model with a very large number of degrees of freedom that is also nonlinear/chaotic?

  27. Alexander Harvey

    I am not sure what is meant by confidence, particularly in the phrase “public confidence”.

    Also I cannot determine whether it is confidence in models or confidence in outcomes that matters in this case.

    I will try to illustrate my thoughts by using the term scenario in its broader sense as opposed to its narrow sense as in emission scenarios.

    An example from history that has some similarities is the calculation of artillery tables during WWII: hugely mission critical, very computationally expensive, physics based, and capable of validation up to a point. It may be worth noting that computational overhead was a massive issue, as there was a limit to the computational power that could be brought to bear; in simple terms, the number of competent people (able non-combatants, predominantly women) and desk calculators.

    Artillery tables are also both projections and guidance systems: they project where something is going to end up.

    How much confidence should the public have had in the artillery tables?

    Suddenly confidence gets all mixed up with costs and benefits. I would suggest that confidence in the adequacy of the tables might decline if one knows that there is a functioning hospital in the range of fire. Suddenly abstract confidence and scenario confidence may be perceived to differ considerably.

    Given that such a critical mission was planned, one might resort to having a set of tables recalculated by several different groups. Now how is confidence affected if they all come up with different results for the scenario in question but all validate pretty well over the test cases available? Does it go up or down? Does one take the average? What does one do about combining the error patterns?

    I don’t know.

    A less analogous but more personal example is confidence in a hire vehicle and a GPS system to get one safely from A to B.

    Normally this combination inspires a very high degree of confidence; millions of such journeys are undertaken, and rarely a mishap.

    Now change the scenario: A and B are separated by a notoriously hazardous piste. Suddenly a breakdown or getting lost is a matter of life and death. Scenario confidence plummets; the vehicle looks like a death-trap and the GPS becomes an unfathomable box of tricks. How far it plummets is likely to be a function of one’s depth of knowledge in vehicle mechanics and GPS methodology. In my case, knowing more can lead to less confidence.

    Knowing a little but not a huge amount about GPS, I have considered the consequences of what would happen if a major war broke out. Prior to ~2000, GPS signals were degraded as a matter of military precaution (injecting up to 200 metres of error, by personal observation). After 2000 there was no guarantee as to whether degrading would be reintroduced in case of war, nor whether it would lead to errors of 200 m or 200 km.

    So I am a terminally paranoid technophobe, so what! If you want paranoid, how many people check their GPS to see if a war has broken out! For that matter how many people ever take into account the phase of the moon as the most important timing factor when planning trips? Well lunatics of course, and romantics :)

    Now perhaps I am very risk averse, but I think not.

    On return to civilisation we discovered that the 2003 Iraq war had indeed broken out some days before. The question is whether my scenario confidence was decreased by some additional knowledge of how a GPS functions (that degradation was a military option). I would say it was decreased; it was certainly on a long list of poorly quantified hazards discussed. As it happened, GPS accuracy was not impaired during the conflict.

    To make all this a bit more relevant I will ask the equivalent climatic change question:

    How confident should we be that the climate projections adequately prescribe the mitigation necessary to avoid a temperature rise of more than 2C above the pre-industrial level? I cannot see, at a personal level, how this can be divorced from a person’s apprehension of risk.

    I deliberately asked the question in terms of the required mitigation because, in practical terms, that is how I react: judging my confidence in having taken appropriate measures to mitigate a perceived risk, not my confidence in the chance or the risk itself.

    Not everybody perceives risk to even the same order of magnitude, nor the need for risk mitigation. The previous year a couple got lost in that area; the man died of dehydration, and the wife was rescued after 14+ days. They were found not more than 10 km from water, nor more than one or two nights’ hike from habitation. I have it on the good authority of some of the local players that they were advised against the journey, and had already been discovered lost once, but refused an offer to turn back and follow the other vehicle. Which raises a relevant moral point:

    What is the obligation if one believes someone is behaving recklessly? What do you do if you “know” better and you “know” they are at great risk of becoming a casualty of the climatic conditions?

    Suppose you rescue them against their will; that is called kidnapping.

    Personal experience, 2000: a bogged-down tourist requesting assistance to get his vehicle un-bogged in order to continue his journey. He was bundled off to safety, whilst protesting his competence, and his vehicle was recovered separately. Once he had recovered from borderline heat stroke, he was very upset with us; he didn’t even buy us a drink! Helping people can really spoil your day. He was dubbed the “Homo Mirabilis,” for we would have missed him if we hadn’t been delayed by a carburetor problem (bang it with a rock!); the area averaged one passing patrol vehicle per month.

    Now in a free society what should we have done, and what does this homily have to say about how the people “in the know” should behave towards people, some not yet born, “known” to be at risk of premature death due to an inhospitable climate?

    In his case it was easy: he was “judged” to be delusional. Other cases are not so straightforward; you can’t kidnap people willy-nilly. But you can try putting the fear of unspeakable death and suffering into them, then wish them good luck. (Specific instances, not theoretical cases.)

    In short, one relies on the model that is judged to best cater for the circumstances, and acts according to its plausible outcomes; how much confidence one has in that model is irrelevant, for the alternative is to dither.

    Such are the burdens of thinking one knows best. Do you spoil their day, or risk their spoiling yours and everybody else’s? And what gave us the right to risk our lives in the first place? Nothing; we just prepare for the worst as best we can, put our affairs in order and make our goodbyes. But we only endanger our own wellbeing and that of interested parties; we are not an existential risk to humanity. Conversely, can one justify kidnapping all mankind?

    Well, so much for my perceived confidence in models, mechanisms and human responsibility.

    So if you hear someone banging with a hammer under a jacked up vehicle on a hire lot, do say hello.

    Alex

    • No offense intended, but isn’t this just a long-winded way to say you don’t know what you are talking about? It is a bit confusing, so I may be wrong, I guess.

      • Alexander Harvey

        Jom,

        You well may be right, in one sense at least.

        Perhaps I cannot fathom what others are talking about and I am talking about that.

        For a start, I am not sure how much information about models etc. is likely to instill “public confidence.” As in:

        “As the climate models become increasingly policy relevant, it is critically important to address the public need for high-quality models for decision making and to establish public confidence in these models.”

        Further I really struggle to see how this could be achieved in any relevant timeframe.

        Also I strongly suspect that the public, of which I am a member, get more dubious about a proposition the more they are told of its inner workings, on the basis of: “Why are they telling me this? Are they telling me that they have doubts as to whether their models work?”

        If we were discussing how to base public confidence in the models on a more informed appraisal of climate modelling, its technical details, and its provable accuracy, that would be different. But that is not necessarily going to raise public confidence; it might lower it. It seems to risk taking a rational approach to an issue that is inherently irrational, namely public confidence.

        I do not think I can be alone, in that I used to “know” that the models were pretty damn good, and certainly that they had not been tweaked to match the data, or that they were in any way prone to circularity, and that they came up with sound predictions (not projections). To put it more straightforwardly: that they were beyond reasonable doubt.

        I was both more ignorant and more confident, not just in their assessment of the future but in their prescriptive power.

        I am particularly concerned about their prescriptive power, which does not seem to get discussed much, so I thought I would talk about it and about confidence in prescribed mitigation: that is, the action prescribed by whatever model one has of the efficacy of mitigation in order to reduce the risk to something tolerable. Beyond a certain point I am not interested in models of the risk but only in the models of the mitigation. For instance, if the models say that if we do nothing there is a real risk that it is going to get dire in 40 years, then I really am not in the least interested in whether that is a prelude to it staying at dire or getting 12C hotter. Beyond dire I simply don’t care. I really cannot see the point of running models beyond the point where there is a real risk that we are all dead; it seems rather academic. Knowing whether the doubling sensitivity is 2C or 12C is not something I want to know or care to find out, provided 2C carries a real risk to humanity. If people tell me that there is a real risk of getting to 2C above the pre-industrial baseline but we don’t know if that matters or not, then that needs fixing.

        So beyond their being able to say with confidence that there is a real risk of dire consequences, I do not need to know a lot of extraneous details, and anyone thinking that I need more information about the models so I can be confident in their grades of being dead should think again. Nor do I need to know if sea levels will rise by 200 m by 2200 if we are all going to perish by 2060. I just need to know the answer to a simple question: “If we do nothing, is it going to kill us?”

        So to me it does not matter precisely how hot it will get if we don’t do anything, provided it will get to a point where there is a real if broadly unquantified risk, e.g. could be 1%, could be 99%, of extermination.

        So I am asking the inverse question:

        How confident am I that if we take no action things will be alright? It is this kind of confidence that cannot be divorced from the risk involved.

        I am probably still not being clear, but I am trying (very trying).

        As a member of the public I am really not at all interested in a scientist’s confidence in exactly how bad it is going to be. I am interested in whether I should be confident that it will be OK.

        When I cross the road I like to be confident that I will be OK; if the event of my being killed has a low probability, say a 1% chance, that does not equate to a high confidence of my getting across safely, because the downside risk is so extreme. Public or personal confidence is subjective. It is perfectly possible to have low confidence in survival when playing Russian roulette even though the odds are in my favour. I can, however, be confident going out without an umbrella when the weather looks a bit iffy, because it doesn’t matter much. I am confident that I will be OK; being wet is a mere inconvenience.

        Now it seems to me that people might be debating whether there is a 95%, 90%, or even just a 50% chance of a catastrophe for all mankind. I am saying that this is not what I need to know, and I certainly don’t want people to keep gathering evidence or building models till they know which of the three it is.

        All I need to know is that there is a small, say 1%, risk if we do nothing; but I do want to know in some detail how much mitigation is sensible, and have some indication that it is likely to be effective (say >99%) in all reasonable scenarios. If the mitigation is 99% effective in all cases, then the confidence that it will be OK is not much changed whether the unmitigated risk of doing nothing is 1% or 90%.

        I only need to be confident in getting the mitigation right; I really do not care about the precise nature of the doom.

        Hence some of my original post: I do not mitigate downside loss because it is highly likely. I would mitigate if I judged that a trip had a 1% chance of being fatal. I would not be confident to take a trip with a perceived 1% risk of mishap if that meant a fatality. I would be confident to take that trip if I believed that I had taken the necessary measures to mitigate the consequences of mishap from fatality down to real inconvenience.

        It is the prescriptive power of the mitigation that interests me. If I can rely on the mitigation, then the raw size of the unmitigated risk is irrelevant, provided it is above a threshold, say 1%, and the consequences are dire. I would then be about as confident of survival whether the unmitigated risk was 1%, 10% or 90%. Think of it as putting faith in the parachute, not the aeroplane.
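
        The parachute arithmetic is one line: residual risk = unmitigated risk × (1 − mitigation effectiveness). A toy sketch, using only the illustrative numbers above (nothing from any climate study):

          # Residual risk after mitigation, for the illustrative numbers above
          effectiveness = 0.99  # mitigation assumed 99% effective in all scenarios

          for unmitigated in (0.01, 0.10, 0.90):
              residual = unmitigated * (1.0 - effectiveness)
              print(f"unmitigated risk {unmitigated:.0%} -> residual risk {residual:.2%}")

        The residual risk stays below 1% whether the raw risk is 1% or 90%; confidence in the outcome rests almost entirely on confidence in the parachute.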

        I feel people are worrying about precisely how confident we are that it is going to be bad if we do nothing. I don’t think that this is how confidence is generally understood. To my mind, all I need to know is that there definitely is a risk, that its consequences are dire, but that there are strategies for mitigating that risk that we can be pretty confident will work to reduce dire to merely inconvenient, and lastly that we can afford these strategies.

        So all I need to know is that there is a real but small risk that doing nothing will bring about circumstances that will be dire for humankind. Either the models can tell me this or they can’t. If I am being told that they cannot say for certain that there is a real risk, say 1%, of a catastrophe for us all, then I am not interested at all. If others want to know whether the risk of a catastrophe is 90% or 99% before adopting some form of mitigation, I don’t see that as being useful.

        Much of the rest of my comment above referred to how the public commonly seem to respond to a real risk of death, and to the moral role of people who genuinely believe that others are running an unacceptable risk. Ask yourself how high your judgement of the risk of death from an activity should be before you would stop someone from engaging in it, especially if they seemed ignorant of the risks. Alternatively, how persuasive should one be if one feels someone is likely to engage in an activity with a real but small risk of death that they seem not to be aware of or have discounted? Would you really care whether you judged the risk to be 99% or 1% before intervening? I also illustrated that people are not necessarily all that grateful to be kept alive if they fail to appreciate the danger they were in. I think that much of this is relevant to all risks of fatality, and if global warming carries a real risk of fatality then it is relevant to the general discussion.

        I sometimes get the feeling that either I have got it all wrong or everyone else has, but that happens in life. But ask yourself this: if you considered yourself to be responsible, how big a risk would you take with all humanity? If I said it was between 1% and 99% for extermination starting in 40 years, would you really say “Sorry, too vague, not good enough”?

        Alex

      • AnyColourYouLike

        Alex

        Have you considered that the economic risks of drastic carbon cutting, and therefore of restricted access to cheap energy for developing economies, not to mention the distraction from real and present infrastructure and land-management issues (a very likely factor in the recent Pakistan floods) under the catch-all label of global warming, may in fact represent a blind alley that carries a fatality risk for many of the world’s poorest people at least an order of magnitude greater than 1%?

      • Alexander Harvey

        AnyColourYouLike,

        I should have made it clear.

        The risk I am talking about is an existential risk to the species, not one of individuals’ mishaps, regional problems, or the need to migrate, for any of these may happen anyway. I am not seeking to ensure that the future will be a better place or even a similar place, just not an unpopulated place.

        If it can be shown that, if we take no mitigating action, there is a small but irreducible risk of the collapse and terminal decline of all humanity, then that is enough to trigger mitigation; if there is no existential threat, but the future is going to be challenging, then that is human normalcy. If it really is a question of saving the species, then I think the bar should be set very low on the probability that doing nothing would produce the necessary warming.

        I suspect we may be barking up the wrong tree if we do not know how much warming would produce an existential threat. Is it really 2C above pre-industrial levels? Or is it 10C above that level?

        I don’t know.

        I suspect that if we cannot dismiss the prospect that doing nothing will pose a threat to the entire species, we will mitigate. If we can dismiss an existential threat, I expect that we will not worry overly about an inconvenient future until it starts getting pretty inconvenient.

        I think that by instinct we discount the future; we place a much lower value on jam tomorrow than jam today, let alone jam for somebody else.

        If there is no realistic threat to all mankind that requires immediate action then as you point out we have plenty of other issues to keep us busy.

        I seem to keep hearing that we must act to save the planet, which sounds like there is a known existential threat. Do they mean that, or do they mean save some coral reefs and some islands and keep human conditions as they are, i.e. save the planet the way it is?

        If global warming is unlikely to kill us off before something else does then it doesn’t seem to be the number one priority.

        I agree that the mitigation proposed by some could well be very painful, and I think that it may even cause major conflicts, not in fifty years but sooner; so we had better know that there is an actual risk of something definitely worse, which I see as meaning extermination, not just inconvenience.

        I think that people will put up with the mitigation if the perceived alternative is to risk the species’ future. I think even a small but undeniable risk of extinction if we do nothing outweighs a certainty that mitigation will be tough. Merely a high risk that life in 40 years will be challenging probably doesn’t rank all that high in the scheme of things.

        I don’t think precise odds matter much if the threat is existential. If a ship has a fifty-fifty chance of wrecking and wiping everyone out would we get on it? I think not. If the chance was as low as 1%, would we get on it? I still think not.

        So if global warming is the one thing that poses a definite risk of extermination, it seems to me to be the thing we need to fix first.

        If it has merely a high risk of making the future unpleasant, then perhaps it can take its place in the queue; there seems to be no shortage of things that threaten the quality of life 40 years down the road, and it would be strange indeed if we chose to impose real hardships to prevent warming and it turned out to be really number umpteen on the to-do list.

        Most importantly, I simply do not see nations committing themselves to making a huge sacrifice to make the future a bit less worse than it would otherwise be. But to avoid a real risk of extinction they surely must.

        Alex

      • I see your point more clearly now. We have to somehow evaluate the benefit and risk of not mitigating, the benefit and risk of adapting, the benefit and risk of mitigating, and the benefit and risk of not adapting. To focus only on the risk of not mitigating is to see only part of the picture. These things are difficult to do without a good idea of the likelihood of a serious problem.

      • “If I said it was between 1% and 99% for extermination starting in 40 years, would you really say ‘Sorry, too vague, not good enough’?”

        Yes!

  28. I’d be interested in learning more about how economic models are used and validated. Anyone know of a good layperson’s guide to economic modeling?

    • Mike, you must be kidding. Economic models are so idealized and incomplete that validation is not even an issue.

  29. It won’t be too hard to challenge Knutti on a point-by-point basis. But even taking everything he laid out for granted does not give a reason to trust the models. These could only be reasons not to put the model output in the garbage bin without further examination. The only way to build confidence is to compare model predictions against reality.

  30. AnyColourYouLike

    Of course, confidence in the models is not uniform, even amongst team members. Not as of this time last year at any rate…
    ===================================
    Mike,

    The Figure you sent is very deceptive. As an example, historical runs with PCM look as though they match observations — but the match is a fluke. PCM has no indirect aerosol forcing and a low climate sensitivity — compensating errors. In my (perhaps too harsh) view, there have been a number of dishonest presentations of model results by individual authors and by IPCC. This is why I still use results from MAGICC to compare with observed temperatures. At least here I can assess how sensitive matches are to sensitivity and forcing assumptions/uncertainties.

    Tom.

    Hi Tom,

    thanks for the comments. well, ok. but this is the full CMIP3 ensemble, so at least the plot is sampling the range of choices regarding if and how indirect effects are represented, what the cloud radiative feedback & sensitivity is, etc. across the modelling community. I’m not saying that these things necessarily cancel out (after all, there is an interesting and perhaps somewhat disturbing compensation between indirect aerosol forcing and sensitivity across the CMIP3 models that defies the assumption of independence), but if showing the full spread from CMIP3 is deceptive, its hard to imagine what sort of comparison wouldn’t be deceptive (your point re MAGICC notwithstanding) perhaps Gavin has some further comments on this (it is his plot after all),

    Mike

    Tom, with respect to the difference between the models and the data, the fundamental issue on short time scales is the magnitude of the internal variability. Using the full CMIP3 ensemble at least has multiple individual realisations of that internal variability and so is much more suited to a comparison with a short period of observations. MAGICC is great at the longer time scale, but its neglect of unforced variability does not make it useful for these kinds of comparison. The kind of things we are hearing “no model showed a cooling”, the “data is outside the range of the models” need to be addressed directly.

    Gavin

    Gavin,

    I just think that you need to be up front with uncertainties
    and the possibility of compensating errors.

    Tom.

  31. Knutti’s list is simply reasons why you should not dismiss models outright, not reasons why you should have confidence in them.

    Models are based on physical principles such as conservation of energy, mass and angular momentum.

    Well if they were not based on physics then there would be no point in running the model in the first place unless you were interested in cartoon climate.

    Model results are consistent with our understanding of processes based on simpler models, conceptual or theoretical frameworks.

    Well that’s nice, the model you created, based on parameters you think are real world, comes up with an answer you think it should.

    Models reproduce the mean state and variability in many variables reasonably well, and continue to improve in simulating smaller-scale features

    How “many,” and how “reasonably well,” do they actually do? Actually defining how many variables out of the total fall within a certain range at each measured location would inspire confidence in the models. However, saying we get some variables right some of the time is not confidence-building.

    Models reproduce observed global trends and patterns in many variables.

    And a broken clock is right twice a day. If one variable matches observation and another does not, you end up searching for the missing heat.

    Models are tested on case studies such as volcanic eruptions and more distant past climate states

    I would hope so. So you should be confident in the models because they react the way we think they should when we input certain real world conditions.

    Multiple models agree on large scales, which is implicitly or explicitly interpreted as increasing our confidence

    From a distance, all the model results look fairly similar. From a long distance, a lot of things look similar.

    Projections from newer models are consistent with older ones (e.g. for temperature patterns and trends), indicating a certain robustness.

    No, it indicates that nothing new has drastically changed anything. A new model based on similar understandings of climate and run on similar conditions would be expected to produce similar results.

    Again, those are NOT reasons for confidence in models. Passing a few tests that, if failed, would prove a model wrong is NOT proof that the model is right. Would you still have confidence if a bunch of models matched each other exactly and replicated all global variables against observations exactly, but the models were not based on physical principles?

  32. Sorry Judith, I got started reading this and this time I couldn’t finish it because it was clear that, once again, you are asking the wrong questions. The question is not how to build public confidence in climate models — the question is how to construct a model worthy of such confidence — and whether it is possible to do so.

    Climate, even as a purely closed, idealized system with completely constant boundary conditions, is too complex to model in a time-step fashion by grid methods over the proposed time intervals, as all climate models of which I have ever heard do. There is too much sensitivity to initial conditions in the relevant partial differential equations, at all orders of scale.

    That is before we accept that there are uncontrollable and unmodelable external drivers of climate, such as the cycle 23/24 deep solar minimum, orbital changes, or catastrophic events like volcanoes and meteorites, that render these equations unsolvable to any reasonable degree of approximation.

    Sub-grid phenomena like thunderstorms, empirically observed to happen continually worldwide, are capable of completely inverting global predictions made by such models in less than a year. How can you even propose to try to build public confidence when we mathematicians have no confidence whatsoever that the underlying mathematics has any predictive value? This is before we consider that the computer “models” only approximate the in-principle solution — and that very badly over time scales beyond a week or two — and that things get much worse yet if we stop pretending that the system being modeled is contained in some idealized bubble.

    If “scientists” can’t learn from the atrocious record of “the models” at predicting the progression of winters (like 2009/2010 in Northern Europe) or summers (like 2010 in the entire Northern Hemisphere) that the models are inadequate, I proffer that the public at large, at least, is smart enough to see the writing on the wall.

    Climate models have a very important role in understanding climate phenomena. They are not useless tools, and I don’t mean to denigrate them; I’m only saying that this is not what they are for. A screwdriver is an excellent tool, but it’s not a hammer, and you shouldn’t pretend it is any good for driving nails. Long-term predictions about the effect of this or that driver using such models are bogus. They are simply not something the models are well-suited for, and no amount of tinkering will make them suited for such things.

    The emperor of climate models for century-scale prediction has no clothes. Please quit wringing your hands over the need for better “education” for the masses so that they will believe he’s wearing fine robes. The public is not that gullible.

  33. The elephant in the room here remains that missing tropospheric “hot spot”.

    How could any culture have confidence in Climate Models and/or Modelers with that inconvenient fact?

  34. Martin Clauss

    Dr. Curry,
    There is an article in Nature Geoscience by Joyce Penner, Michael Prather, Ivar Isaksen, Jan Fuglestvedt, Zbigniew Klimont & David S. Stevenson, titled “Short-lived Uncertainty.” The article is behind a paywall, though I read a discussion of it from icecap.us (submitted by Doug Hoffman). It discusses short-lived pollutants, some that may contribute to warming, others that contribute to cooling.

    The review by Doug, which cites passages from the Nature Geoscience article, is quite interesting, as a lot of uncertainty about the effects is discussed, including attributing 65% of the warming to factors other than CO2, and arguing that the uncertainty is such that IPCC projections (based on biased assumptions about climate sensitivity) are not trustworthy.
    (NOTE: these points are taken from the article by Doug Hoffman; I am just trying to give proper attribution so I won’t be accused of plagiarism . . . ! :-) )
    The point goes to the confidence we have in climate models. How does this affect where we stand?

  35. For someone above

    Dr Tim Ball: five 10-minute videos of an interview (number 2 very interesting, as are they all)

    http://australianconservative.com/2010/10/michael-coren-with-dr-tim-ball/

  36. Phillip Bratby

    All this talk of V&V is pretty well irrelevant unless the code developers/users have good QA procedures to control it all. I see no mention of QA here. In engineering, good QA procedures (under ISO9001) would control all the work. There would be no concerns about going back and running 10-year-old code and data, because the QA procedures would ensure it was all archived and retrievable. Does any organisation (or anyone) in climate science use QA procedures properly? The evidence from NASA/GISS is that quality is very much irrelevant to their work. The same would seem to apply to NOAA. Comments?

    • Phillip, the old model codes are available, but everyone wants to focus on the new and improved models, and a conclusion about the performance of an old model wouldn’t transfer to the new model. An argument can certainly be made that they need to conduct an ongoing verification (say every 5 years) of all of the model versions, including a sufficient number of ensemble runs for each model version. But this would eat up a lot of computer time that then couldn’t be used for the latest IPCC production runs.

      • Michael Larkin

        Has anyone ever done this:

        1. Group A at time zero (Tz) runs model A and produces a list of predictions for specified key parameters as they will be in, say, Tz+5yr.

        2. A few other groups B, C and D, using models B, C and D, do likewise.

        3. Groups A-D send their predictions to completely independent group E, which is blind to which group has produced what predictions. Group E sits on them until…

        4. At time Tz+5, group E gathers together actual current measured values for the specified parameters and compares them with predictions A-D. Group E does a statistical analysis to evaluate the skill of each of the models as at Tz.

        5. Group E feeds back all results to all groups A-D. In this way, A-D become aware of what the strengths and weaknesses of their models are. Groups A-D engage in discussion, tweak their then-current models and enter another test cycle. (A toy sketch of the blind scoring in steps 3 and 4 is given below.)
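
        The blind-scoring step is easy to write down. The sketch below is hypothetical in every detail (group names, parameters, values) and scores anonymized predictions with a simple root-mean-square error:

          import random

          def rmse(pred, obs):
              # Root-mean-square error between a prediction vector and observations
              return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

          # Predictions submitted at Tz for key parameters at Tz+5yr (hypothetical)
          submissions = {"A": [14.6, 0.32, 3.1], "B": [14.9, 0.35, 2.8],
                         "C": [14.4, 0.30, 3.4], "D": [15.1, 0.38, 2.6]}

          # Group E blinds the submissions: only anonymous IDs are seen during scoring
          groups = list(submissions)
          random.shuffle(groups)
          blinded = {f"model-{i}": submissions[g] for i, g in enumerate(groups)}

          # At Tz+5yr, group E scores the predictions against measured values (hypothetical)
          observed = [14.7, 0.33, 3.0]
          scores = {anon: rmse(pred, observed) for anon, pred in blinded.items()}
          for anon, score in sorted(scores.items(), key=lambda kv: kv[1]):
              print(f"{anon}: RMSE = {score:.3f}")
          # Only after the scores are recorded does E unblind and feed results back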

      • Michael, the problem with this is that there is lower predictability on time scales of 5-10 years, owing to the natural internal modes of the coupled atmosphere/ocean; this is the so-called “decadal predictability” problem, which is an area of active research. For a problem where there is strong integral forcing over a period of, say, half a century (such as greenhouse gas forcing), there is more predictability, but trying to tie down a prediction on the timescale of a decade just doesn’t work owing to the spatio-temporal chaos that is present.

      • For a problem where there is strong integral forcing over a period of, say, half a century (such as greenhouse gas forcing), there is more predictability, but trying to tie down a prediction on the timescale of a decade just doesn’t work owing to the spatio-temporal chaos that is present.

        Funny you picked 10 and 50 years as your example periods; it seems at least some of your colleagues disagree on the predictability problem over various time-scales (Klimazwiebel survey results).

        Dr Pielke Sr mentioned in a previous post that he’d be discussing some of these survey results in a future post; do you think they are worth your comment?

      • I was surprised by the survey results; the spread was remarkably narrow. Perhaps people with a certain perspective were more inclined to answer the survey?

      • Michael Larkin

        Genuine thanks for your reply, Dr. Curry. I find it very valuable, as it clears up something I realise I’ve had a bit muddled.

        What it does for me is to make me support even less any calls for urgent and draconian action on global warming. Who is to say that after the 50 years or whatever we have to wait for any possibility of validation, we won’t find that the predictions are completely wrong?

      • Michael, decision making under deep uncertainty will be a topic for next week’s post.

      • Re: curryja, “production runs”!! What an image. The latest politicized pressure pieces, shortly to be replaced with new ones and never validated.

  37. Surely R Craigen has hit the nail on the head when he says;

    “The question is not how to build public confidence in climate models — the question is how to construct a model worthy of such confidence — and whether it is possible to do so.”

    I think you, Judith, need to demonstrate that the inability of models to cope with such parameters as clouds, their inability to model those aspects of climate about which we are very hazy, and their inability to model those aspects of climate that we don’t even suspect have an effect, is not a major, indeed fatal, problem when looking at the results. Assigning such certainty to them that we are considering changing the world economy on their results is very foolish.

    And don’t forget that the best evidence seems to be that in climate there are cycles within cycles, not all by any means of known regularity, and those can’t be easily replicated by modellers who like linear projections.

    Models have their place in many fields, but one as complex and still as unknown as climate science? I doubt it.

    tonyb

  38. Tomas Milanovic

    An additional comment on Knutti’s:

    Models are based on physical principles such as conservation of energy, mass and angular momentum.

    The right statement should be:
    Models are based on the physical principles of conservation of energy, mass and angular momentum. No other principle is or can be used.

    As I have written above, any model can only numerically solve algebraic equations on a finite grid with very low resolution (typically hundreds of km).
    The conservation laws are the only equations that can be written at this low resolution.
    This easily understandable situation leads immediately to the following question:

    Given that the system’s dynamics is described by a continuous and unique solution to some (unknown) system of partial differential equations, how can we know that the states computed by solving algebraic equations representing a discrete version of the conservation laws converge to the continuous solution, or are even near to it?

    The answer is immediate: we cannot know it.
    The reason is trivial: if I don’t know what I should converge to, I cannot talk about convergence.
    That’s why the defenders of climate models so often use the rhetorical trick of saying that the model results are plausible.
    By this they imply that, despite the fact that convergence can’t be demonstrated, “plausibility” is equivalent to a convergence proof.
    This is a mathematical horror!
    Let’s illustrate by an example:
    I have a system whose dynamics is given by a continuous solution
    F(t) = sqrt(a·t) · sin(t + phi(t)). Observation shows that the system is approximately periodic with a fluctuating amplitude. The numerical model tells me that the system behaves like G(t) = sin(t).
    It is “plausible.”
    Yet not only does the model fail to converge to the continuous solution for large t, but no state described by the model corresponds to a real state of the system.
    Of course G(t) can be fitted to F(t) for a certain period of time in the past, but that is clearly no proof of its relevance for the future.
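
    This is easy to check numerically. A minimal sketch (choosing a = 0.04 and phi(t) = 0.1·t purely for illustration) shows the gap between the “true” F and the “plausible” G growing without bound:

      import math

      a = 0.04                 # illustrative amplitude-growth constant
      phi = lambda t: 0.1 * t  # illustrative slow phase drift

      def F(t):  # the "true" continuous solution in the example above
          return math.sqrt(a * t) * math.sin(t + phi(t))

      def G(t):  # the "plausible" model output
          return math.sin(t)

      for t in (1.0, 10.0, 100.0, 1000.0):
          print(f"t={t:>7.1f}  F={F(t):+9.3f}  G={G(t):+7.3f}  |F-G|={abs(F(t) - G(t)):.3f}")
      # sqrt(a*t) grows without bound while |G| <= 1, so however well G fits a
      # past window, the model states eventually bear no relation to the system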

    There is also a technical reason why one should expect large divergences between a model result and this hypothetical unknown real behaviour.
    Gerald Browning made many posts about it – the unphysical viscous dissipation.
    Indeed energy, mass and momentum can NOT be conserved by the model between t and t+dt.
    Why?
    Because of subgrid parametrization. These are empirical, ad hoc equations that are independent of the conservation laws, but they interact with the parameters that appear in the conservation equations.
    Because of their empirical nature, they add or subtract something to the energy, momentum and mass at each time step of the computation.
    But as energy, momentum and mass must be conserved, the modeller must take an arbitrary decision about what to do with this excess or deficit. He can spread it all over the system or concentrate it “somewhere.”
    G. Browning analysed this issue in depth (posts at CA) and showed that this procedure leads to unphysical viscous dissipation in a class of models; this is just an example, and one could talk about mass as well.
    And one cannot heal this problem, because it is intrinsically built into the concept (the only one possible) of a discrete version of the conservation laws + subgrid parametrization.
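
    The bookkeeping involved can be made concrete with a toy “energy fixer” of the kind just described (a schematic sketch, not any particular GCM’s code):

      import random

      # Toy grid: energy per cell, arbitrary units
      cells = [100.0] * 8
      total_before = sum(cells)

      # Empirical subgrid parametrization: adds or subtracts a little energy per cell
      perturbed = [e + random.uniform(-0.5, 0.5) for e in cells]
      residual = total_before - sum(perturbed)  # conservation violation this step

      # The modeller's arbitrary choice: smear the residual evenly over all cells
      fixed = [e + residual / len(perturbed) for e in perturbed]

      print(f"violation before fix: {residual:+.4f}")
      print(f"total after fix:      {sum(fixed):.4f} (target {total_before:.4f})")
      # The global total is restored, but the local distribution of energy is now
      # partly an artifact of where the modeller chose to put the residual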

    Judith, as you obviously know Navier-Stokes and fluid dynamics well, and in any case better than most modellers, I would like to have your take on what seems to me really the basic problem of the models.

    1) As convergence can’t be demonstrated, can it be supposed that a large number of “plausible” states computed by a large number of runs of a model has the same statistics (PDF) as the real system would exhibit if we could make experiments on it?
    2) If the answer is yes, what is the demonstration? Of course “plausibility” is no demonstration, because that would be circular.
    3) If the answer is no, why then use deterministic, discrete numerical models at all?

    Or perhaps will that be an issue of one of your following posts?

    • Tomas – thank you for another excellent post. I feel the question I asked a few days ago is worth repeating. I used the term “skilful” to embrace the broadest possible range of purposes for which the models might be employed, not merely that with which the general public is most familiar – alarming it into acquiescence.

      I wrote “A question implicit in this thread is “how complex do GCMs need to be[come] if they are to be skilful”? I think there is a case for looking through the other end of the telescope. If we accept that the only truly authentic model we have of the climate is the climate itself, and if we also accept that constraints on computing power and imperfections in our understanding of its drivers will for the foreseeable future – probably forever – prevent us simulating it in its entirety, isn’t it reasonable to ask “Can a model as complex as the climate survive the REMOVAL of ANY complexity and remain skilful?” If not, then attempts (or at least publicly-funded attempts) to construct GCMs for predictive purposes (of any kind) should be discontinued. If it can, then how much complexity can safely be removed, and is the complexity that remains susceptible of skilful modelling? ”

      Your posts incline me to think the answer is no.

      • Tomas Milanovic

        Tom, this question:
        Can a model as complex as the climate survive the REMOVAL of ANY complexity and remain skilful?

        is intimately linked to the chaotic nature of the system. Two examples.
        Take the Lorenz system of three nonlinear ODEs. If you remove the tiniest part of the complexity (here the nonlinear terms), you destroy everything. The “new” system you obtain has nothing to do with the original system (the famous butterfly attractor).
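
        Both points, the sensitivity to initial conditions and the fragility under “simplification,” can be seen by integrating the Lorenz system directly. A minimal Euler sketch with the standard textbook parameters:

          def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                          nonlinear=True):
              # One Euler step; nonlinear=False drops the xz and xy product terms
              dx = sigma * (y - x)
              dy = rho * x - y - (x * z if nonlinear else 0.0)
              dz = (x * y if nonlinear else 0.0) - beta * z
              return x + dt * dx, y + dt * dy, z + dt * dz

          def run(x, y, z, steps, nonlinear=True):
              for _ in range(steps):
                  x, y, z = lorenz_step(x, y, z, nonlinear=nonlinear)
              return x, y, z

          print(run(1.0, 1.0, 1.0, 20000))          # bounded, aperiodic attractor
          print(run(1.0 + 1e-9, 1.0, 1.0, 20000))   # nearby start, very different state
          print(run(1.0, 1.0, 1.0, 2000, nonlinear=False))  # linearized: blows up

        With the nonlinear terms removed, the system is no longer the Lorenz system at all: the linearized equations diverge exponentially instead of settling onto the butterfly attractor.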

        Take the three-body system in gravitational interaction. It was here that Poincaré discovered over 100 years ago not only that it is impossible to solve the problem analytically, but that the three-body orbits were chaotic.
        Here too, the tiniest removal of the complexity that would make the system mathematically tractable completely destroys the real behaviour of the system.

        These kinds of systems are extremely resistant to reductionism. You cannot cut them into small, simpler pieces, analyse the pieces alone and then put the pieces together again. They behave in a way of “either you get all the reality and complexity or you get none of it.”

        The best analogy for the climate is the brain.
        The brain is also a spatio-temporal highly non linear and coupled system. It can also exhibit chaotic behaviour.
        But it has been known for a long time that you will never understand how the brain works by only understanding how its small parts (neurones and groups of neurones) work.
        The latter is (partly) possible, but it gives few clues to the former.
        Everybody agrees that there is no model (computer or otherwise) allowing one to predict brain states or their “averages,” and there probably never will be.
        Now the brain is relatively “simple” compared to the climate, because it is a discrete system.
        There are 100 billion neurones, which is a big number but infinitely smaller than the infinite-dimensional continuous climate system.
        How probable is it that the climate can ever be modeled by using a primitive method of gridding with discretized conservation equations?
        More probable than the brain, or less?

        Last but not least, and I have already said so several times:
        The only way to “get rid” of the complexity, at least indirectly, is to develop a purely stochastic model.
        Forget all those pesky intractable equations and predict only probabilities.
        This may ironically lead to the need to solve other, QM-like equations which are not necessarily easier :)
        It is clear that this strategy doesn’t work for the brain, which is obviously not a random system.
        Can it work for the weather/climate?
        Answer that for yourself, because no scientist is going this way so far.
        But one thing is sure: if this strategy works, you can throw the numerical deterministic models in the garbage bin anyway.

      • Michael Larkin

        Thanks for this post, Tomas. It allowed a non-expert such as myself to gain better insight into the problem.

    • Tomas, answering that question is beyond me.

  39. Surely the emphasis on the plural is wrong.

    Policy makers want a proven and a successful working model for them to be deemed worthy. Politicians want a proven and a successful working model to prove they care. Taxpayers want a proven and a successful working model to lessen the burden. The world’s poor who are told they can’t live like us want a proven and a successful working model to offer up some hope.

    What does climate science offer but a whole range of models dealing with a wider range of what-if scenarios that can be manipulated for political and ideological ends?

    Maybe in the end it is not the plural that is to blame but the modellers, who seem determined to live in a two-dimensional virtual universe where everything and anything is possible, and where all expressed concerns can be addressed with “I have a model for that.”

  40. Judith, I am sure you, like me, have read all the messages on this subject. There is a common message from us skeptics that we don’t trust the output of the AGW models; different people have said the same sort of thing in their own words. But the message is the same.

    Maybe I can summarize this in a question that I posed earlier. How do we know that the models are giving the “correct” answer?

    I put the word “correct” in quotation marks for a specific reason. In my career I came across, on more than one occasion, instances where large institutions knew what the “right” answer was. In this case, for the proponents of CAGW, the “right” answer comes from those models whose output shows that CO2 is evil and that we must curb emissions immediately, or preferably sooner than that.

    What we need to ascertain is what the “correct” answer is. I submit that no-one has established, in a way that is scientific and rigorous, that the computer models used by the proponents of CAGW do, in fact, give the “correct” answer.

    • Jim, the answer isn’t simple, and I tried to address that in my post; it depends on the question you are asking. The model gives the correct answer in terms of generally getting the average distribution of surface temperature (land vs ocean, tropical vs polar, etc.). Are the models capable of giving us a correct answer in a probabilistic sense about the attribution of 20th century climate change, or sensitivity to CO2 doubling? Well, in principle they could be capable of this, if the experiments were designed correctly. But at most, you will get an “answer” with a range of possible values and an assessment of the uncertainty. So “correct” is an ambiguous word when it comes to a complex model.

      • In all fairness, Judith, you have not addressed the question I asked. What I asked was “HOW do we know that the models are giving the “correct” answer?” (New capitals) All you have said is that the models give the correct answer. But you have not explained HOW we know this. For example, what are the tests which have been done on the output of the models to provide the proof that these outputs are correct?

        I do not think that “correct” is an ambiguous term at all. Let me once again illustrate this with the use of radiative transfer models to estimate the change in radiative forcing for a doubling of CO2. The answer comes to 3.7 Wm-2, or whatever. How do we know that this answer is correct? I am not just talking about GCMs. I am talking about ALL the models used in investigating CAGW.

      • Ok, this one is relatively straightforward. There are two separate issues: the correct radiative transfer model, then the correct ambient atmospheric conditions (H2O and other trace gases, temperature profiles, cloud properties, etc.). The ambient atmospheric properties are important since if there is a lot of other stuff absorbing and emitting IR, then doubling CO2 will have less of an effect (i.e. doubling CO2 is most strongly felt at the surface in the Arctic, and at the top of the atmosphere in the tropics). A discussion of the validity of the most sophisticated line-by-line infrared radiative transfer models is beyond the scope of this reply, but scienceofdoom does a good job with this. The line-by-line IR models have been evaluated under clear-sky conditions against FTIR, and they perform very well. IR models used in climate models are compared against the line-by-line calculations, and most of them perform very well (not all).

        So the 3.7 W m-2 calculation for global radiative forcing could perhaps be refined by an improved experimental design (not necessarily by improved radiative transfer models): running RT models at each grid cell over the globe, over the diurnal cycle and the annual cycle for say 30 years, for the two different CO2 concentrations, would refine the 3.7 value. But this is sort of moot, since the same climate models make these calculations all the time; it’s a matter of possibly a more refined diagnosis of this value from the climate model simulations. So this particular point is one where there is high confidence. Scienceofdoom addresses this issue pretty exhaustively.
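
        For what it’s worth, the 3.7 W m-2 figure can be recovered from the widely used simplified expression fitted to detailed radiative transfer results (Myhre et al. 1998); a one-line sketch, not a substitute for the line-by-line codes discussed above:

          import math

          def co2_forcing(c_ppm, c0_ppm=280.0):
              """Approximate radiative forcing (W/m^2) for CO2 at c_ppm relative to
              c0_ppm, using the simplified fit of Myhre et al. (1998)."""
              return 5.35 * math.log(c_ppm / c0_ppm)

          print(f"{co2_forcing(560.0):.2f} W/m^2")   # doubling from 280 ppm: ~3.71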

      • Judith,
        I might not be understanding the issue correctly, but I think your summary overstates the situation relative to the radiative energy transport problem in the real-world atmosphere. Isn’t this a description of the spherical-cow version of the problem?

        Fidelity of the simulation to the actual atmosphere of the Earth is governed by the parameterizations of all the other stuff in the atmosphere and the parameterization of the interactions of that stuff with the RT.

        The resolution used in the simulations has been mentioned in a guest post at Pielke Sr.’s.

      • The problem I have understanding this is that radiative transfer models seem to look only at the transfer of energy through the atmosphere by radiation. The use of Stefan-Boltzmann to estimate how much global temperatures will rise as a result of this also looks only at the transfer of energy by radiation. So we get to 1.2 C for a doubling of CO2, and there is no consideration of conduction, convection and latent heat. I just think this rings like a cracked bell. And the 1.2 C has not actually been measured.

      • Translating the radiative forcing (from a radiative transfer model) to a surface temperature increase is much more complicated than just the Stefan Boltzmann equation. Surface temperature is determined from a surface energy balance equation. Over land, you have a surface energy balance that includes downwelling IR, upwelling IR (Stefan Boltzmann), downwelling solar radiation minus what is reflected back from the surface, latent heat flux and sensible heat flux (these are turbulent fluxes associated with exchange with the atmosphere), and conductive flux from the ground (below the surface). Over the ocean, you have the same surface fluxes, but because of mixing in the ocean the heating is distributed over some meters of depth in the ocean. If the land or ocean is covered with snow/ice, then you have the latent heat of melting to include if the surface is warming. Climate models include all these processes.
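
        A schematic tally of the balance just described (the flux values are illustrative assumptions, not measurements; the function is only a sketch of the bookkeeping, with a positive result meaning net warming of the surface):

          SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

          def surface_net_flux(T_s, lw_down, sw_down, albedo,
                               latent, sensible, ground, emissivity=0.98):
              """Net energy flux into the surface (W/m^2).

              lw_down  : downwelling IR from the atmosphere
              sw_down  : downwelling solar; (1 - albedo)*sw_down is absorbed
              latent   : latent heat flux away from the surface
              sensible : sensible (turbulent) heat flux away from the surface
              ground   : conductive flux from below (positive = into the surface)
              """
              lw_up = emissivity * SIGMA * T_s**4   # Stefan-Boltzmann emission
              return (lw_down - lw_up + (1.0 - albedo) * sw_down
                      - latent - sensible + ground)

          # illustrative midday mid-latitude numbers (assumed, for shape only)
          print(surface_net_flux(T_s=288.0, lw_down=340.0, sw_down=500.0,
                                 albedo=0.2, latent=80.0, sensible=60.0, ground=-5.0))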

      • Jim Cripwell: “The problem I have understanding this is that radiative transfer models seem to look only at the transfer of energy through the atmosphere by radiation. The use of Stefan-Boltzmann to estimate how much global temperatures will rise as a result of this also looks only at the transfer of energy by radiation.”

        I understand your problem Jim, and agree. As Dan says above, this leads to a “spherical cow” model. Or as a good friend of mine says, whom I shall not name here, but who is preeminent in the field of dynamical systems and was a solid contributor to the practice of modelling climate on computers, “the trouble with the IPCC models is that they treat the climate system as if it were a brick.”

        Having said that, let us recognize that there is a certain naive but elegant and attractive approach, which I shall oversimplify by calling “radiative budgeting”, according to which the only things one needs to know are the amounts of energy entering and leaving the system. A simple subtraction, and voilà! One has a measure of increased heat and so can infer all kinds of things about the “climate trends”. This renders “conduction, convection, latent heat” etc. secondary considerations whose behavior is confined within a bubble of possibilities determined by the radiative budget.

        If only it were so simple. I’m sure you’re ahead of me by now and know where I’m going, but I’ll finish briefly:

        The surface of the earth is not a sink with a tap (the sun) and a drain (radiation back to space). For one, there is an enormous energy source other than the sun, the magnitude of whose effect is difficult to determine or even approximate, but which has always been there, namely geothermal heat in its various forms. It is currently suspected, for example, that the recent increase in deep-ocean heat content is driven by geothermal sources rather than atmospheric (which would solve the “paradox” that shallow ocean temperatures, over the same period, have fallen slightly).

        Second, as every high school student knows, not all energy is stored as heat, and energy is continually changing forms. Unquestionably, this happens in the surface systems that include the climate. The most obvious example is photosynthesis, which extracts a good portion of that incoming solar energy and stores it in the form of chemical bonds.

        Such things must be accounted for in our “budget” or we get it wrong. Now, we at least can estimate to within an order of magnitude how much energy is taken up by photosynthesis. But there are many, many other highly significant non-heat reservoirs.

        Gravitational potential energy, for example: the water cycle moves enormous volumes of water from a potential energy well (the ocean) to high energy plateaus (lakes, glaciers). This is, after all, where we obtain hydroelectric power; by scratching off an infinitesimal fraction of this stored potential energy for our use.

        There is kinetic energy, in the form of wind and ocean currents — which, again, we can harvest for our use.

        Some energy reservoirs are much harder to model. For example, the formation of carbonate minerals, which happens on a global scale. The more CO2 is stored in the ocean, the higher the rate of formation. I don’t think this has ever been estimated; all we know is that it is huge.

        Then there are siliciclastic rocks, the mechanical energy consumed in erosion processes, the solar energy used to degrade photosensitive materials, and so on.

        So while radiative budgeting looks quite attractive from a naive perspective, it is fraught with problems and guesswork that, I believe, render it pretty worthless — at least insofar as being a one-variable descriptor of global-scale climate trends.

        I’d be happy to be contradicted by someone who could produce a climate budget that is plausible and places sufficiently tight error bars around every possible energy source and sink to definitively determine whether the environmental heat-energy balance is increasing or decreasing.

      • The overall trick to getting the radiative fluxes in climate models correct is to get the clouds correct. This is very difficult to do when running (integrating) the model forward in time. However, in a static calculation of radiative forcing, you can specify the clouds based on climatology, then run the radiative transfer code for current CO2 and doubled CO2. My statement about calculating the radiative fluxes was in response to a very specific question: the direct radiative forcing from a doubling of CO2.

      • Fair enough. Let us take this one stage further. We use the estimation of radiative forcing for a doubling of CO2 to estimate that, without feedbacks, this change in radiative forcing would result in a rise of global temperatures of 1.2 C. This number cannot be measured, so we have no way of determining whether it is correct or not. Now since this number has been estimated without taking into account conduction, convection or latent heat, do you believe this number is sufficiently well proven so that we can believe that CAGW is real?
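
        For reference, a sketch of the textbook “common rationale” under discussion here (linearizing Stefan-Boltzmann about the effective emission temperature; the exact inputs are conventional assumptions, not anyone’s endorsed method):

          SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
          T_e = 255.0             # effective emission temperature of the Earth, K
          dF = 3.7                # forcing from CO2 doubling, W/m^2

          lambda_0 = 4.0 * SIGMA * T_e**3   # dF/dT from F = sigma*T^4, ~3.76 W m^-2 K^-1
          print(f"no-feedback dT ~ {dF / lambda_0:.2f} K")
          # prints ~0.98 K; quoted values near 1.2 K come from more detailed
          # radiative calculations of the Planck response, not this one-liner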

      • Jim, I don’t like the common rationale for translating 3.4 W m-2 into a surface temperature increase of 1 C. I would go about it in a different way that uses the full surface energy balance equation (but without water vapor feedback, etc.).

      • Fair enough. I think the way the 1.2 C has been calculated is just plain wrong. But I am not sure I understand what you are saying. If you don’t like 1.2 C, what do you believe the number is? And is it big enough to warrant a belief in CAGW? Or is it impossible to estimate it using the “full surface energy balance equation”?

      • Tomas Milanovic

        Judith, then we are at least two.
        My feeling is even stronger – I positively hate the mathematical nonsense of relating a space average with high “configurational entropy” to some really ill-defined term like a global (!) “net forcing”.

        However, even if the way you would go makes more mathematical and physical sense (you surely mean a local energy balance), you also have a problem with it.

        As there is no radiative balance, there is necessarily a variation of internal energy.
        But the internal energy will be distributed among the different degrees of freedom depending on what happens just beside the point you are looking at.
        I am well aware that this may seem trivial, because I am only demonstrating that your “energy balance equations” will be a system of non-linear PDEs, and one that will be a good deal more complicated than Navier-Stokes.

        But isn’t this particular point precisely the problem of all the numerical models, which don’t and can’t solve systems of non-linear PDEs?
        And if you somehow postulate that the points are spatially uncoupled, which considerably “simplifies” the equations even though it is physically wrong, how can you be sure that what you compute numerically stays near the real behaviour for a sufficiently long time?

      • I read ScienceofDoom’s posts and the papers he referenced showing that models can accurately calculate the observed downward long-wavelength radiation from the atmosphere to the ground. If I remember correctly, even under the best circumstances (no clouds, low humidity), the error in those calculations was comparable to the 3.7 W/m^2 forcing expected for 2X CO2. This is also only a 1% error, so the observations do show an excellent fit to theory, even though their error is large compared with anthropogenic forcing.

        ScienceofDoom’s earliest reference was to a paper calculating DLR in ANTARCTICA with no clouds. Has anyone published anything about how well calculated DLR agrees with observation on a cloudy night in the tropics?

        These DLR calculations are done after sending up a radiosonde to measure the current temperature and humidity above the IR detector observing the downward radiation. Climate models need to predict the overhead temperature and humidity, and then calculate the downward radiation. It’s no wonder they aren’t good at predicting next week’s weather.

  41. Watts Up With That now has a link directly promoting “The Hockey Stick Illusion” by A. W. Montford on its front page.

    So maybe a few more scientists will take up Judith Curry’s challenge to read it…

    Hal Lewis: “Anyone who has the faintest doubt that this is so should force himself to read the ClimateGate documents, which lay it bare. (Montford’s book organizes the facts very well.)”

  42. Judith Curry said: Much of the debate between validation/verification versus evaluation seems to me to be semantic (and Oreskes does not use the V&V terms in the practical sense employed by engineers).

    Roache called it “effete philosophizing,” and I think his grasp of the topic is far firmer than someone at one remove, who studies those who study how to solve PDEs correctly. It’s also funny how many of the STS researchers let their political motivations rise to the top. If we’re talking flows in porous media in relation to nuclear waste disposal, then models can’t be validated and we need more study, but when it comes to atmospheric flows then climate models are the gold standard and we’ve got to act based on their output!

    Smith’s point about being wrong in a large dimensional space is spot on. “Validation” on functionals with high “configurational entropy” (like globally averaged anything, an infinite number of configurations can give the same number) should be rather unconvincing (in this case where the empirical knobs of the various parametrizations are balanced in exquisite relative alignment to prevent nonsense output) if the intended use is prediction.

    I’m sure the continuous evaluation flavor of validation is sufficient for scientific uses of the model, but I’m equally convinced that it is insufficient for the decision support tasks currently being foisted on these tools by the mainstream policy advocates. There seems to be some indignation from the model builders right now, “my validation process is good enough for SCIENCE, why isn’t it good enough for you grubby proles?”

    • On the recommendation of Dan Hughes, I bought Roache’s book about two years ago. I haven’t read it cover to cover, but refer to it frequently. The climate modeling enterprise has much to learn from Roache, IMO.

      • The climate modeling enterprise has much to learn from Roache

        Well, it seems to have taken a couple of decades of consistent proselytizing for formal verification and MMS to catch on in the CFD community (and it still isn’t used as widely as it should be), so don’t hold your breath!
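
        For readers unfamiliar with MMS, a toy version of the idea (a sketch on an assumed 1-D diffusion problem, not an excerpt from Roache): manufacture an exact solution, derive the source term it implies, and confirm that the solver converges at its design order.

          import numpy as np

          def solve_poisson(n, f):
              """Second-order finite differences for -u'' = f on (0,1), u(0)=u(1)=0."""
              h = 1.0 / (n + 1)
              x = np.linspace(h, 1.0 - h, n)
              A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                   - np.diag(np.ones(n - 1), -1)) / h**2
              return x, np.linalg.solve(A, f(x))

          u_exact = lambda x: np.sin(np.pi * x)        # the manufactured solution
          f = lambda x: np.pi**2 * np.sin(np.pi * x)   # the source it implies: -u''

          for n in (16, 32, 64):
              x, u = solve_poisson(n, f)
              print(n, np.max(np.abs(u - u_exact(x))))  # error should drop ~4x per doubling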

        Your point about EPA taking up the fight is a good one. In aerospace it was really Defense Modeling and Simulation Office (DMSO) (followed by organizations like AIAA and ASME) and users of model results that pushed for such standards and that drove the community; it wasn’t much of a grass-roots thing from the model building community as far as I can tell. The decision makers (those with the money) demanded it, and the decision support folks (those building models) obliged.

        This dynamic works because those funding the models and making decisions based on the results have “skin in the game”. If the aircraft doesn’t fly, then you’ve failed. If the model results are merely nice marketing (colorful fluid dynamics) used to sell an agenda, then the right organizational dynamics don’t really exist. Either way, the dynamics of climate policy might be more complex than product development, so the lessons that can be fruitfully drawn from analogy are probably limited.

      • Roache’s work is heavily cited in the paper linked below; he seems to be getting some traction in at least some parts of the geophysical flow modeling community.
        Abstract: There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modeling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated. By example, this Integrated Science Exploration Environment is proposed for exploring and managing sources of uncertainty in glacier modeling codes and methods, and for supporting scientific numerical exploration and verification. The details, use, and envisioned functionality of this Environment are described. Such an architecture, that manages scientific numerical experiments and data analysis, would promote the exploration and publishing of error definition and evolution as an integral part of the computational flow physics results. Then, those results could ideally be presented concurrently in the scientific literature, facilitating confidence in modeling results.
        Verification, Validation, and Solution Quality in Computational Physics: CFD Methods applied to Ice Sheet Physics

      • This is a REALLY good paper! I especially like the ISEE idea.

    • Tomas Milanovic

      Roache is excellent, relevant and professional.

      And this:
      “Validation” on functionals with high “configurational entropy” (like globally averaged anything, an infinite number of configurations can give the same number) should be rather unconvincing (in this case where the empirical knobs of the various parametrizations are balanced in exquisite relative alignment to prevent nonsense output) if the intended use is prediction.
      is exquisitely well said and true.

      However, one should not miss the forest for the trees.
      Roache deals with CFD and with the science of error estimation/propagation, which is 100 years old.
      He deals with solutions of real PDEs and their properties.
      He deals with things that must be done because planes must fly, bridges must stand and pumps must pump.
      These are the trees.

      The forest is that with climate models we are not playing in that league AT ALL!
      There are no well-defined PDE systems describing the dynamics in climate science, and there is no “solving” of them.
      There is neither convergence nor error estimation.
      There are just huge grids at a scale which is light-years from any notion of CFD.
      Being on purpose excessively schematic, I would say:
      – climate models need no verification. Solving algebraic equations on a grid is so simple a task that there can’t be any problem with the code. There are no more points than in a standard CFD calculation, and the equations are much simpler.
      – climate models can’t be validated. Indeed, how do you want to validate something that just tries to conserve energy, mass and momentum on a grid with cells hundreds of km in size?
      Sure, the (artificially enforced) conservation laws will have the effect that what is computed doesn’t look absurd, and some very large-scale quasi-stationary features, due to the fact that the Earth is a sphere which rotates around an axis, will be more or less reproduced.
      But everything else is the aptly named “high configurational entropy”, which makes validation impossible.

      • Tomas, the high configurational entropy thing is intriguing; do you have specific references for this idea that provide more explanation? Thanks.

      • the high configurational entropy thing is intriguing

        I think I might be responsible for that abuse of jargon; just meant to convey that the macrostate (globally averaged temperature) has a whole bunch of equi-probable (or as close as makes little difference given how under-constrained by measurement the system is) microstates (spatial distributions of temperature). An amplification, through misappropriation of thermodynamics, of Smith’s point about being wrong in a million dimensions.

        Here’s a nice page on “teaching entropy”, which mentions the idea at the top. For a readable treatment of the connection between entropy and information see Maxwell’s Demon: Entropy, Information, Computing.
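
        A toy numerical version of the macrostate/microstate point (all numbers assumed): grossly different spatial “fields” can be made to share exactly the same global average, which is often the number validation targets.

          import numpy as np

          rng = np.random.default_rng(1)
          target_mean, n_cells = 14.5, 10_000   # assumed global mean and grid size

          for trial in range(3):
              field = rng.normal(0.0, 5.0 + 10.0 * trial, n_cells)  # very different spreads
              field += target_mean - field.mean()                   # enforce the macrostate
              print(f"mean = {field.mean():.6f}  std = {field.std():7.3f}  "
                    f"min = {field.min():8.2f}  max = {field.max():8.2f}")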

      • Sorry for double posting, but this one: Configurational Entropy Revisited is a good short write-up as well; it is geared towards educators in chemistry, and addresses the concept of constraints on the system and the number of possible states accessible by the system as key ideas in understanding entropy.

      • thx for the reference, this looks really interesting.

      • I just found this one, which applies these information theory concepts in the context of climatic prediction (similar to the demo using the Lorenz ’63 model I did here).

        thx for the reference, this looks really interesting.

        You’re welcome, I think this stuff has pretty broad usefulness. The same ideas can also be applied in designing validation experiments, where you choose test points that have low/high entropy in their predictive distribution.
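
        A rough sketch of that information-theoretic idea using the classic Lorenz ’63 parameters (the ensemble setup, crude Euler stepping and binning are assumptions made here for illustration): the Shannon entropy of the forecast distribution of x grows as predictability is lost.

          import numpy as np

          def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
              """One forward-Euler step of Lorenz '63 (crude, but fine for illustration)."""
              x, y, z = state[..., 0], state[..., 1], state[..., 2]
              dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
              return state + dt * np.stack([dx, dy, dz], axis=-1)

          def entropy_bits(samples, bins=40, lo=-25.0, hi=25.0):
              counts, _ = np.histogram(samples, bins=bins, range=(lo, hi))
              p = counts[counts > 0] / counts.sum()
              return -(p * np.log2(p)).sum()   # Shannon entropy in bits

          rng = np.random.default_rng(2)
          ens = np.array([1.0, 1.0, 1.0]) + 1e-4 * rng.standard_normal((5_000, 3))
          for step in range(1, 4001):
              ens = lorenz_step(ens)
              if step % 1000 == 0:
                  print(f"t = {step * 0.005:5.2f}   H(x) ~ {entropy_bits(ens[:, 0]):.2f} bits")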

      • Thx again, your blog is really interesting; I’ve added the climate links to my blog roll
        http://j-stults.blogspot.com/search/label/climate%20change

      • Validation Models of Complex Physical Systems and Associated Uncertainty Models is a very understandable set of slides on a VV&UQ (verification, validation and uncertainty quantification) effort for a fairly complex multi-physics code with lots of uncertain parameters (~300 due to turbulence and chemistry modeling). Slide 10 describes the two “schools” of validation philosophy (Popper and Bayes). Covers the basic uncertainty taxonomy and optimal validation experimental design.

      • very nice presentation, thanks

      • Sure, the (artificially enforced) conservation laws will have the effect that what is computed doesn’t look absurd, and some very large-scale quasi-stationary features, due to the fact that the Earth is a sphere which rotates around an axis, will be more or less reproduced.

        Exactly correct, Tomas, so long as the numerous strictly numerical artifacts such as dispersion, diffusion, dissipation, and temporal aliasing are properly handled. These non-physical limitations can be introduced in the discrete-approximation and numerical-solution domains.

        A rough approximation to the meta-scale dynamics in both the atmosphere and oceans will simply fall out of the approach if these limitations don’t completely annihilate the underlying physics captured by the model equations, and assuming that the equations have been correctly coded.

        It is solely the parameterizations that provide higher fidelity to the real world. If we look at the system from the viewpoint that these meta-scale processes serve to transport mass and energy throughout the system components, then the parameterizations govern the sources of mass, energy and momentum exchanges both within a component part and between components. These sources must be formulated in the discrete and solution domains strictly in accordance with the conservation and balance concepts. Otherwise, mass and energy will not be conserved and momentum balances get screwed up.

        Beyond the very rough approximations to the fundamental equations that produce the meta scales, everything else in the models constructs a process model of the Earth’s systems and their interactions. It is not formulated as a problem in computational physics. It is formulated as a process model. And at the present time it is a research-grade model, not a production-grade model. As I understand the literature, the present process-model formulations are not independent of the temporal and spatial scales used in the application domain. Not a good situation.

        In this regard, I disagree with Knutti’s first characterization on this important aspect; it’s the parameterizations that are critically important. Mass and energy conservation and momentum balance concepts are only the basis of the models. The full, complete fundamental equations are not used. The model approximations do not correspond to the actual physical phenomena and processes that the material experiences. The material experiences the full and complete statements of mass and energy conservation and momentum balance.


      • Tomas Milanovic

        Dan, you don’t need to handle many things “correctly” when you are really at big spatial scales.
        You surely know that the constructal theory allows a very simple model, which doesn’t even need a computer, that correctly finds not only the existence of Hadley cells but also their location.
        By refining the constructal theory model only a little, you can find other very large-scale features of the real system.
        The simple fact that we deal with a rotating sphere which is cold at the rotation axis and hot at the equator imposes some large-scale spatial structures independently of all the details.
        Of course these structures are stationary or quasi-stationary (by definition), so it doesn’t teach us much about the time evolution of the system.

  43. My modeling experience consisted of paper and pencil work in a graduate population genetics class – the most elementary stuff, so I claim no expertise in planetary climate modeling – to say the least. Having said that, I did learn some fundamental principles of modeling, sufficient that I think I’m capable of catching major errors when I see them.

    Regarding multiple models validating each other: if the modelers are all following the same basic approach, it should be no surprise that they get similar outputs from their models. They all took the same classes in school, studied the same previous work, went to the same conferences, etc. In other words, their work is not independent. Thus, there is no validation in their similar results.

    The fallacy of hindcasting: If I could hindcast the market for the period 1900-1970, would you be confident I could forecast the market for 1970-2010? The validity of hindcasting verification rests on the assumption that nothing can happen in the future that has not happened in the past. We are not dealing with a beaker in a laboratory here – unless these modelers can explain all climate variation on the planet over the last several billion years.

    The ‘physical principles’ argument: the number of steps between the conservation of energy and delta degrees C in one hundred years makes the statement true and irrelevant.

    On the agreement of older and newer models – the same people are writing the models, no? I see the ‘logically true but wrong’ problem. Models are consistent with each other? I assume that even fool modelers could be consistent among models. As long as they are consistently wrong, their outputs can be expected to be consistent with each other.

    My personal concern: I was taught that models are only valid when their assumptions hold. One failure of an assumption, and the model cannot be trusted to hold, regardless of the correctness of the math. So what are the assumptions of GCMs? I’ve not only never seen them stated explicitly, I’ve never even seen them referred to. My modeling instructor would be frowning now (bless her picky soul). ;-) Of course, an explicit listing of all the assumptions that go into GCMs would give critics and skeptics almost limitless ammunition to attack these models without ever having to get inside them. If anyone knows a plain-English listing of said assumptions, please share them with me.

    Please assume “it seems to me” in every statement above. I’m always happy to be corrected by my betters. ;-)

  44. Someone asked about economic models and their validation. One of the most interesting exercises was from the Warwick Economic Modelling Bureau, which produced ex post analyses of the forecasts of the main macro-models in the UK. Because they had the models deposited with them (a condition of ESRC funding), the group were able to produce pure model forecasts (in practice most modellers override their models on occasions when they don’t believe the answers – and on average this does improve forecasts…). They could then break down forecasting error into error from getting the exogenous variables wrong (assumptions made about things which the model itself does not determine, but which are not known with certainty when forecasts are prepared – I think these are analogous to forcings in climate models) and pure model error. A short report with links can be found here

    http://www2.warwick.ac.uk/fac/soc/economics/research/centres/esrc/more/
    All kinds of interesting things came to light, including that forecasters do better than pure model forecasts.
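
    A sketch of that decomposition (with made-up numbers, not the Bureau’s actual procedure): rerun the deposited model with the realized exogenous inputs, and the total forecast error splits into an exogenous part and a pure model part.

      def forecast(model, exogenous):
          """Stand-in for any deposited model run; here a trivial linear toy."""
          a, b = model
          return a + b * exogenous

      model = (1.0, 2.0)               # hypothetical fitted coefficients
      x_assumed, x_actual = 3.0, 3.5   # exogenous input: forecast-time guess vs. outcome
      outcome = 9.4                    # realized value of the forecast variable (made up)

      f_assumed = forecast(model, x_assumed)   # the published pure model forecast
      f_actual = forecast(model, x_actual)     # rerun with the true exogenous input

      total_error = outcome - f_assumed
      exogenous_error = f_actual - f_assumed   # error from wrong assumptions
      pure_model_error = outcome - f_actual    # error the model itself owns
      print(total_error, "=", exogenous_error, "+", pure_model_error)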

  45. The topic of the “confidence” that one is justified in placing in a model falls within the purview of logic. Logic is the science of the principles of correct inferences. These principles are called the “principles of reasoning.”

    A model is a procedure for making inferences. That one is justified in having confidence in a specified model implies that each inference made by this model is correct.

    Usually, when an inference is to be made by a model, there are many candidate inferences. Thus, the builder of this model is faced with the necessity of identifying the one correct inference from among the many candidates.

    Few model builders have had the wit to select the one correct inference by the principles of reasoning. Most have selected it by an intuitive rule of thumb that is called a “heuristic.” As a result, confidence in most of today’s models is misplaced. In particular, confidence in the IPCC climate models is misplaced, for these models were built by heuristic methods.

    Among modellers employing heuristics, a commonly made logical error is to presume more information than is possessed about the outcomes of statistical events. A consequence from this error is the automatic falsification of the model if and when it is given sufficient testing.

    In the case of the IPCC climate models, the outcomes that are of significance for policy making are average global surface temperatures. The IPCC models assert the possession of complete information about these outcomes. However the modellers agree that, for reasons which include difficulty in modelling clouds, information is missing. It follows that the IPCC models would surely be falsified, if given sufficient testing.

    I understand that around $27 billion has been spent on research in climatology, yet following this expenditure we obviously lack a scientific basis for policy making on climate change. Can valuable lessons be learned from this fiasco? Yes! One is that it is crucial for the builder of any climate model that is slated for use in policy making to build it under the principles of reasoning.

    • Terry, it’s not just the model building, but how the experiments are designed, which end up as exercises in circular reasoning.

  46. Oops! The last sentence of my previous comment should read: “One is that it is crucial for any climate model that is slated for use in policy making to be built under the principles of reasoning.”

  47. I really appreciate Dr. Curry’s hosting this blog. It is nice to have a place where skeptics can argue with AGW’ers without being moderated into oblivion. It is also nice of Dr. Curry to tolerate the chasing of some rabbits. Thank you.

    After all this talk about climate models, I’m wondering if a good solution might be to get a bunch of countries together to sponsor a really super super-computer and then let climate scientists share time on it like astronomers share telescope time or physicists share accelerator time. That might result in fewer but better models run on a superior, hopefully scalable, machine. Maybe more Monte Carlo simulations could be run to get a better idea of the bounds of the output. Then the freed-up computer time could be used for other science.

  48. To do a rapp, if we had data from the future we would not need models.

    • This is not the issue. The issue is whether models are capable of predicting what will happen in the future. One of the issues I have fought for all my career is whether the best we can do is good enough. Just because the use of models is the best we have to try to determine what will happen to global temperatures as CO2 quantities go on increasing is no reason to believe that the output of climate models is anything other than scientific wild a**e guesses.

  49. Nicola Scafetta has a paper in press (J of Atmospheric and Solar-Terrestrial Physics) describing where he sees this GCM confidence-building

    NOT behind a paywall, here:

    http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf

  50. Judith, we seem to be getting to what I consider to be a very important point. You wrote “Jim, I don’t like the common rationale for translating 3.4 W m-2 into a surface temperature increase of 1 C. I would go about it in a different way that uses the full surface energy balance equation (but without water vapor feedback, etc.).”

    I agree completely. You indicate that you don’t agree with the way the IPCC estimates the surface temperature increase using the change in radiative forcing for a doubling of CO2, as stated in Chapter 6 of the TAR to WG1. However, you have not explained what you think the correct number is. If 1 C has been estimated incorrectly, what is the number if you do the estimation using “the full surface energy balance equation (but without water vapor feedback, etc.)”?

    • Jim, I’m not sure the calculation has ever been done this way, but I think I spotted something relevant at scienceofdoom at one point; I will check.

      • Thank you Judith. However, what I said was “You indicate that you don’t agree with the way the IPCC estimates the surface temperature increase using the change in radiative forcing for a doubling of CO2, as stated in Chapter 6 of the TAR to WG1.” Do you agree that the way the IPCC does the estimation is wrong?

        Surely this issue is absolutely vital. If the IPCC number of 1 C for a doubling of CO2 is wrong, then the estimate of the contribution of feedbacks is wrong by the same factor. For example, if the number is only 0.1 C instead of 1.0 C then by the end of the century there will be little increase in global temperatures, and CAGW is just plain wrong.

      • The way feedbacks are estimated is not very good IMO. But deriving overall sensitivity from climate model simulations that double CO2 is a robust method, with the caveat that the models are imperfect. Does this make sense?

      • Judith writes ” Does this make sense?”

        What I am asking is whether you agree that the way the IPCC estimated the 1 C, in Chapter 6 of the TAR, for a doubling of CO2 is wrong.

      • do you mean the expression
        delta Ts/delta F = lambda?

        No, I am not a fan of that expression for relating a delta Ts to a delta F, although the expression can be useful in a diagnostic sense for other applications.

      • Yes that was what I meant. Thanks.

  51. Hi Judy- This is yet another outstanding post that you have provided us! I have posted on one part of your post on my weblog

    Comments On Judy Curry’s Post “The Culture Of Building Confidence In Climate Models” [http://pielkeclimatesci.wordpress.com/2010/10/14/comments-on-judy-currys-post-the-culture-of-building-confidence-in-climate-models/]

    • Roger, I agree that Knutti’s points aren’t strong. So I’m not sure why climate modelers have such confidence; I suspect it is “comfort” rather than confidence.

  52. Dr. Curry et al.: To ask a naive question, as a member of the public for whom confidence is an issue, to what extent are climate models open and transparent?

    It would seem to me that the data going into a model, the code itself, and documentation explaining the model and its assumptions, plus how to run the model on some hardware/software configuration must exist somewhere — at least in the modelers’ possession — but is this information more generally available?

    Before getting to questions of validation and verification, I’m concerned about the basics of replication. Perhaps this has already been handled and professionals know the score, but it’s not clear to me.

    We are almost a year past Climategate which was about climate data and the dubious practices around that data. There was talk that some of the proprietary restrictions were being lifted, but other than that, nothing really got fixed as far as I know in terms of making the data and processes open and transparent.

    Are models any better?

    • Huxley, the most transparent climate model is NCAR’s CESM, which has been designed as a community climate model. For info, see this link. In the U.S., NASA and GFDL have made their code publicly available, but do not have the documentation and “how to” instructions at the same level as the NCAR model.

      • Dr. Curry: Thanks!

        I was able to register, login and examine source code. It’s been over thirty years since I’ve looked at Fortran, but on cursory inspection the code appeared straightforward, well-organized and professional. Good.

  53. David L. Hagen

    Judith
    How do we build “comfort” on the impact of clouds? I have heard that the cloud feedback is so uncertain that we do not even know its sign!

    A growing number of papers are showing solar/cosmic impact on clouds. e.g.
    Svensmark “When the sun sleeps”

    It turns out that the Sun itself performs what might be called natural experiments. Giant solar eruptions can cause the cosmic ray intensity on earth to dive suddenly over a few days. In the days following an eruption, cloud cover can fall by about 4 per cent. And the amount of liquid water in cloud droplets is reduced by almost 7 per cent. Here is a very large effect – indeed so great that in popular terms the Earth’s clouds originate in space.

    See the NIPCC’s further discussion on
    5.1. Cosmic Rays

    Length of day correlated to cosmic rays and sunspots

    the evolution of cosmic rays and the amplitude of the semi-annual day-length variation are correlated (correlation coefficient of the order of 0.7), and are in phase.

    I have heard that global climate models have very rudimentary cloud models and do not include such effects of solar/cosmic rays nor of the correlation with earth’s Length of Day (as impacted by temperature and wind changes).

    How do you propose changing current culture sufficiently to incorporate such impacts on clouds sufficiently to build “confidence in climate models”?

    • Clouds are the source of massive discomfort and indigestion. With regard to possible solar/cosmic ray effects, this is in the “white” area of the Italian flag; we just don’t know. Many people are actively working on this, and this issue will have its own chapter in the IPCC AR5 report (with a good team of lead authors), so hopefully we will get illumination on the subject, if not a “fix”.

      • Scafetta doesn’t believe he’s in the “white” area of a nationalistic flag. He believes that the correlative (not necessarily causative) evidence he is presenting is being ignored in the GCMs.

        I agree with your other comment on this issue – paraphrased as “we are clueless here”

  54. J. Curry: “Clouds are the source of massive discomfort …this issue will have its own chapter in the IPCC AR5 report (with a good team of lead authors), so hopefully we will get illumination on the subject, if not a “fix”

    This response is a source of massive discomfort for me. Considering Mr. Hagen’s question was about “changing the culture” to “incorporate such impacts”, it does not inspire much confidence in me that your response appears to be (to paraphrase) “I understand that the IPCC is working on fixing (illuminating) the problem with clouds for us, so we’ll just wait on their lead”.

    If modelers really are waiting for the IPCC to provide leadership in either illuminating or fixing one of the most serious problems with their work, then I think I just lost my last shred of respect for these fools. Sorry, giving the IPCC the lead hand in this is a very good way to subvert “the culture” of which you speak and to do the opposite of “building confidence in climate models”. I sincerely hope this is not what you meant.

    • No, I did not say I am waiting for the IPCC to fix the problem, but I would expect a thorough assessment to help understand and illuminate the problem. The problems with clouds in climate models are of two different types: the first is a microphysics/chemistry one, regarding the physics and chemistry of how a population of cloud particles interacts with aerosol particles and evolves with time. The second type is more intractable, in terms of the dynamics of cloud formation and evolution, which is tied up with the overall chaotic fluid dynamics and problems with the fundamental equations in dealing with the interactions of the broad range of scales in this problem. There is much progress on the cloud microphysics front; with regard to the cloud dynamics issue, it seems that stochastic parameterizations are the way to go. But phase boundaries in a chaotic fluid are pandemonium, for lack of a better word (Tomas won’t like this, but I don’t think that anyone investigating spatio-temporal chaos has dealt with the phase boundary problem).
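
      One hedged illustration of what a stochastic parameterization can mean in practice, loosely in the spirit of the perturbed-tendency schemes used in ensemble weather prediction (the specifics below are assumptions, not any model’s actual scheme): multiply the deterministic parameterized tendency by a temporally correlated random factor.

        import numpy as np

        rng = np.random.default_rng(3)

        def stochastic_tendency(det_tendency, r_prev, phi=0.95, sigma=0.1):
            """Perturb a deterministic tendency with multiplicative AR(1) noise.

            r_prev : previous value of the AR(1) perturbation
            phi    : autocorrelation (keeps perturbations coherent in time)
            sigma  : stationary perturbation amplitude
            """
            r = phi * r_prev + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal(det_tendency.shape)
            return det_tendency * (1.0 + r), r

        # toy use: a column of made-up cloud-heating tendencies, perturbed over time
        tend = np.linspace(0.0, 2.0, 20)
        r = np.zeros_like(tend)
        for _ in range(100):
            perturbed, r = stochastic_tendency(tend, r)
        print(perturbed[:5])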

      • Judith, perhaps I wasn’t clear. I was not concerned about whether IPCC would be doing the science — obviously not. I am concerned about the confidence you are displaying in them to summarize the research well and to give a balanced presentation. My confidence in climate modelers is inversely proportional to the confidence I see them show in this body that has proven to be agenda-driven and which obviously has a vested interest in what, if anything, predictive models do with cloud feedback. I have confidence in those studying the fluid dynamics and chemical/physical problem to carry out their work in good faith. I do not have the same confidence in the IPCC process given its record of cherrypicking science to fit a certain narrative and presenting that which cannot be cherrypicked with a slant.

        I have seen you make similar criticisms of the work of the IPCC. Whence comes your present confidence?

      • Well, I certainly want to hear what they have to say on the subject, and they have some very good people lined up for this chapter (people that I personally know don’t have an agenda). What makes it into the summary for policy makers is another story . . .

      • Okay Judith, that’s good enough for me; I’ll cool my guns and withhold judgement on this point until AR5 and its summaries are published.

        Resolving cloud feedbacks would be quite an accomplishment indeed, if done from first principles. I’m not convinced it’s possible. I liked Lindzen’s approach that bypassed the intractable equations when he showed that the early faint sun paradox could be resolved simply by assigning cloud feedback a slight negative value.

      • Tomas Milanovic

        But phase boundaries in a chaotic fluid are pandemonium, for lack of a better word (Tomas won’t like this, but I don’t think that anyone investigating spatio-temporal chaos has dealt with the phase boundary problem).

        It is not really about liking, but I have never encountered this word in the literature; that’s why I (mildly) objected.
        As for chaotic (generally fractal) phase boundaries, there has been work on crystal growth, e.g. snowflakes. I will look up what I have on this subject. However, it is probably easier to consider a cloud as a collection of liquid spheres with varying diameters.
        I have only a superficial knowledge of cloud microphysics (condensation nuclei and such), so I have not much to say on this issue.

    • In the new year, there will be a number of posts on the cloud-climate issue. There is A LOT of research being conducted on this topic; my little research group publishes about 3 papers/yr on this subject.

  55. A LOT

    This warms my grammar-scold heart. ;-)

  56. Judith: You wrote: “Paul Barton Levenson provides a useful summary with journal citations of climate model verification.” We shouldn’t have to rely on bloggers for information about which predictions of change made by climate models have been reliably verified by observations. This is a task that should have been undertaken in AR4 WG1, Chapter 8: “Climate Models and Their Evaluation”. Instead, the authors avoid discussing whether climate change or feedbacks predicted by models have actually been observed.

    Out of curiosity, I checked some of PBL’s references. By coincidence, the first one I investigated was mis-characterized. PBL claims that Yin (2005) provides observations that confirm Trenberth’s (2003) model projection that storm tracks will move poleward. However, Yin (2005) studied the poleward shift of storm tracks PROJECTED to occur in the 21st century and contains no observational data. The paper’s introduction has one sentence on the observational evidence for a poleward shift in storm tracks reported in earlier papers (McCabe et al. 2001; Fyfe 2003). Their analysis in these papers is fairly primitive compared with that being done for hurricanes, and is complicated by the possibility that long-term patterns unrelated to GHGs (AO, NAO, PDO) could bias the trend. The jury is still out IMO. http://www.cgd.ucar.edu/cas/jyin/IPCC_paper_GRL_Jeff_Yin_final.pdf

    Next I checked PBL’s references about “Tropopause and radiating altitude rise” and got bogged down. Neither Thuburn (1997) nor Kushner (2001) uses models to make predictions about this aspect of climate change.
    http://www.math.nyu.edu/caos_teaching/student_seminar/archive/spring_04/jclub_s04/thuburn_craig97.pdf
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.141.820&rep=rep1&type=pdf

    Santer (2003) compares the rise of the tropopause from 1979-1999 from observations (reanalysis) to the rise calculated by a model driven by anthropogenic and non-anthropogenic forcings. Unfortunately, the tropopause rises in response to warming of the troposphere caused by GHGs AND cooling of the stratosphere caused by ozone loss. The models indicated that 30% of tropopause rise was due to greenhouse gases and 50% was due to changes in ozone that cooled the stratosphere. However, the greatest change in tropopause height occurred at the South Pole, where the “Ozone Hole” began to appear in Antarctic spring. Unfortunately, Santer’s paper doesn’t discuss the reliability of the ozone forcing (input or calculated?) used by his model. In a comment, Pielke (2003) noted that NO warming of the troposphere was observed in the NCEP data used by Santer, implying that all of the observed rise in the tropopause could be due to stratospheric cooling. So observations of change in the height of the tropopause turn out to be a lousy place to look for confirmation that models predict the changes expected from GHGs.

    PBL’s second reference on observations of “Tropopause and radiating altitude rise” is Seidel and Randel (2006). In the abstract, they say: “Tropopause heights in the SUBTROPICS exhibit a bimodal distribution, with maxima in occurrence frequency above 15 km (characteristic of the tropical tropopause) and below 13 km (typical of the extratropical tropopause). Both the radiosonde and reanalysis data show that the frequency of occurrence of high tropopause days in the subtropics of both hemispheres has systematically increased during the past few decades”. Seidel and Randel make no mention at all of what computer models predict about the phenomena of the changing bimodal distribution of tropopause height.

    My understanding is that the latest research on what PBL calls “Temperature trend versus UAH results” and I might call the “phantom hot-spot in the upper tropical troposphere” indicates that observations currently invalidate climate models.

    Based on personal investigation of three areas summarized by PBL, I’d say that the web page you cited is mostly alarmist propaganda. Your readers do need a source of reliable information about whether ANY substantial observational evidence confirms the predictions of climate models.

  57. Thanks for the audit of PBL’s site. I referred to it based upon Bart Verheggen’s reference on a previous thread; I haven’t looked at it closely.

    • note, in the new year I will be digging into observational issues on the blog.

    • In a broad, multi-disciplinary problem like this, it’s tough to keep the “error cascade” under control. I’ve noticed that lots of lists like that are for rhetorical points in blog discussions more than anything else. Lots of folks just assume that the guy putting together the list is competent, or won’t tell the big lie, so they just stop talking/thinking/digging. A common tactic seems to be to link to an authoritative-looking list that on superficial examination seems to be related to the discussion, but on closer examination is not.

      I wish there were a slashdot-like way of rating comments; Frank’s would get a big “Informative” from me.

  58. Willis Eschenbach

    curryja | October 10, 2010 at 9:37 pm | Reply

    Jim, the models have been validated in some sense, but improvements are needed, and the documentation of many of the procedures and tests that constitute the validation are not organized in a way that is accessible.

    Judith, I’m sorry, but claiming the models “have been validated in some sense” means nothing. How have they been validated in the slightest? They disagree with each other, they disagree about the present and the future, they are tuned to reproduce the temperature trend by tweaking the parameters, they use greatly varying climate sensitivities … validated?

    Occasionally they get something right, and it’s possible that is what you are calling “validation”. But as my father used to say, “Even a blind hog will find an acorn once in a while.” That is not validation.

    As a scientist, I’d sure like to see you approach this scientifically rather than by waving your hands and making vague claims. Please give us some citations for how the models are “validated in some sense” … what sense would that be? Because as far as I can tell, the models routinely crank out weather that has never happened on this planet. But since the trends of the temperature look right, no one thinks to look at say the first and second derivatives of the temperature … try that on some model results, and you’ll see what I mean. Wild temperature gyrations never seen on earth, huge monthly swings that have no natural counterpart, swift reversals of direction … but they still get the trend right, so they are claimed to be “validated”. Never mind that they are not lifelike in any sense …

    Finally, you talk about how to “build confidence” in the models … this is just like our prior discussion about building confidence in climate science. The AGW supporters think the lack of confidence in climate science is a PR problem, when the problem is bad science. Similarly, the problem is not “building confidence” in the models.

    It is that, just like many climate scientists, the models have not done a single thing to earn our confidence. How about, rather than trying to bolster confidence in the models, we take a look to see why a quarter century of climate models has hardly changed our understanding of climate at all? The range of climate sensitivity has barely changed in a quarter century. Could it be because the models share common problems that prevent them from advancing our understanding, and thus we need to throw them out and start over? You’ll never find out if you are beavering around trying to “build confidence” in the models.

    Instead of that, you might start by considering that we have little confidence in the models because they give us wrong answers, and that they disagree with each other. Start by assuming that the lack of confidence in the models is totally justified, and see where that takes you. I assure you that you’ll end up in a very different place than if you start by trying to build confidence in the models as you are doing.

    • Willis, you are confusing verification and validation. The complexities of this issue for a model of a complex system are substantial. The extreme position you are taking is less justified than the position of the IPCC. So this doesn’t really help.

      • I apologize if this comes across as nit-picking, but I think you’ve confused verification and validation (easy to do, since they are synonyms in the dictionary and our choice of technical definitions is arbitrary; Roache even used them opposite the currently accepted usage in some of his early work :).

        We’ve talked about definitions a lot on Steve Easterbrook’s site: [1] [2] [3]. It’s good to get this low-level stuff clear so we don’t waste our time talking past each other.

      • you are correct! I’m jet lagged and got my V&V’s mixed up

  59. This is interesting; Computational science: …Error. From Nature News, even. Comments allowed over there.

  60. Related to above; Publish your computer code: it is good enough. Comments there, too.

  61. this post from jstults’ blog is excellent and very relevant, pointing out the challenges of validating climate models

    http://j-stults.blogspot.com/2009/12/bayesian-climate-model-averaging.html

  62. Re: Validation
    According to Easterling ( http://prod.sandia.gov/techlib/access-control.cgi/2003/030287.pdf ): “Validation of a computer model means the comparison of computational predictions to physical outcomes of the events being predicted.” He goes on to say that “Conventional validation practice is to use these comparisons to make a pass/fail, valid/not-valid decision about the computer model.”

    So far as I have been able to determine, the events being predicted by the IPCC climate models have never been described. This would make it impossible for one to validate these models. That they are impossible to validate would signify that these models lie outside science, under Karl Popper’s criterion for separating scientific from non-scientific models.

    The spatially and temporally averaged global surface temperature defines a set of outcomes that is unsuitable for the validation of a model as the number of outcomes in the set of all outcomes is unbounded. It follows from the unboundedness that claims regarding the validity of computed outcomes lack statistical significance in almost every case. To cite a fictional example, the claim of validity for the computed temperature of 14.669662738… Celsius lacks statistical significance because no such temperature has ever been recorded.

    For the IPCC, the word “evaluation” seems to reference an idea which differs from the idea that is referenced by the word “validation” by Easterling. “Evaluation” seems to reference comparison of computational predictions to physical outcomes without the ability to use these comparisons to make a pass/fail, valid/not-valid decision about the computer model. Under IPCC “evaluation” it is impossible to validate an IPCC climate model, but for the statistically naive there is the appearance that this model can be validated.

    • According to Easterling ( http://prod.sandia.gov/techlib/access-control.cgi/2003/030287.pdf ): “Validation of a computer model means the comparison of computational predictions to physical outcomes of the events being predicted.” He goes on to say that “Conventional validation practice is to use these comparisons to make a pass/fail, valid/not-valid decision about the computer model.”

      We have to be careful here. The term “valid” is used here in a technical sense meaning “suitable for a particular intended use”. Someone who isn’t familiar with the V&V literature might be forgiven for thinking that “not valid” means “wrong”, but that’s not the technical meaning at all. The whole culture of V&V needs to be understood in the context of providing credible decision support. Another way of saying “valid” might be “demonstrated usefulness for a particular purpose.” Saying a model is “valid” or “not valid” doesn’t mean much without context describing what sorts of decisions it was intended to support. A model could easily be validated for certain uses and not for others. For instance, I think current GCMs are perfectly valid for scientific uses (basic knowledge creation), but have a ways to go before they can provide useful support to many of the currently popular policy prescriptions.

      • jstults (October 19, 2010 at 7:48 am):
        Thank you for affording me the opportunity to clarify my comment of October 18, 2010 at 7:27 PM.

        Regarding the semantic issue, by the word “valid” I mean “logically correct.” This is definition #3 for the word in the Merriam-Webster online dictionary. Under this definition, to “validate” a model is to determine that each of its hypotheses is logically correct.

        In logic, a hypothesis is an example of a proposition. Under the deductive logic, every proposition has a variable that is called its “truth-value.” The truth-value takes on the values of false and true.

        Often, in the construction of a model, one must extend logic from its deductive realm and into its inductive realm. The latter realm differs from the former in the respect that information for a deductive conclusion is missing.

        Logic can be extended in this way by replacement of the notion of a truth-value by the notion of a probability (for details see, for example, http://knowledgetothemax.com ). Following this replacement, a probability value of 0 corresponds to the truth-value of false. A probability value of 1 corresponds to the truth-value of true. When probability values are limited to 0 and 1, information is not missing and the logic reduces to the deductive logic.

        The following remarks make the assumption that, given that the system under study is in condition C, the model predicts that the value of a specified observable will be ‘a’ at each time in a specified set of times in the future. The hypothesis that “the value will be ‘a’ ” is a hypothesis of this model.

        If, in the empirical phase of the associated study, the study’s statistical population is sampled with single replacement, this hypothesis cannot be determined to be true with a sample of finite size; as every realizable sample is of finite size, this hypothesis cannot be determined to be true. However, sampling theory allows the same hypothesis to be found to be highly probable under repeated trials. Thus, that the hypothesis is valid implies this hypothesis is highly probable. In logic, then, “valid” is the equivalent of “highly probable.”

        Now, let it be stipulated that the observable is a temperature. With this stipulation, ‘a’ designates a temperature value.

        Every temperature value is a real number. As this is so, in an interval in temperature that is of finite length the number of temperature values is unbounded. With a sample of finite size, only a vanishingly small proportion of model-predicted temperature values can be associated with samples of sizes exceeding 0. Only in the case that the associated sample size exceeds 0 can the hypothesis that “the value will be ‘a’ ” be validated. Thus, in virtually every case, this hypothesis of the model cannot be validated.

        The IPCC’s computer models are of a different character than the model described immediately above. In each of the IPCC models, the notion of temperature is approximated by a number with a finite but large number of digits; this number is not a real number. Thus, though this number erroneously is labelled a “temperature,” it cannot be a temperature, for its value is not a real number. An infinite number of temperature values lie between every pair of consecutive values of the erroneously labelled “temperature.” Thus, without recourse to observational data it can be stated that it would be extremely unlikely for the hypothesis that “the value will be ‘a’ ” to be validated. It would be all the more unlikely given the dearth of observational data.

        Every “evaluation” of an IPCC computer model of which I am aware features comparison of model-predicted (and erroneously labelled) “temperature” values to temperature values in nature. By the argument made in the previous paragraphs, an evaluation of this type does not establish the logical correctness of the associated model’s hypotheses.

        Many hypotheses of the IPCC models are logically incorrect. For the people of the world, to make public policy under this circumstance has a number of drawbacks. One is that, through the use of one or more logically incorrect hypotheses as a premise, one can reach a logically incorrect conclusion. There is the possibility of basing hundreds of trillions of dollars in CO2 abatement costs upon a logically incorrect conclusion.

        The logical shortcomings that are exposed by this comment would be erased by replacement of a temperature as the independent variable of the IPCC climate models by a set of non-overlapping ranges of this temperature. However, as I’ve partially demonstrated elsewhere in this blog, by itself this move would not erase all of the logical shortcomings of IPCC climatology.

      • Regarding the semantic issue, by the word “valid” I mean “logically correct.” This is definition #3 for the word in the Merriam-Webster online dictionary. Under this definition, to “validate” a model is to determine that each of its hypotheses is logically correct.

        That’s the problem I was trying to point out. The word “validation” in the context of “verification and validation” (V&V) has a specific technical definition. The computational physics guys at Sandia (who you cited, btw) use the term in the sense accepted by AIAA, ASME and DMSO, which is slightly different from the definitions given by the software guys for the same two words (the IEEE-accepted definitions). Dictionary definitions don’t really matter in the context of a technical discussion, since we give words that laymen may use precise technical meanings apart from their common usage.

        In each of the IPCC models, the notion of temperature is approximated by a number with a finite but large number of digits; this number is not a real number. Thus, though this number erroneously is labelled a “temperature,” it cannot be a temperature for its value is not a real number.

        Are you really arguing that the climate model predictions can’t be validated because they use finite precision arithmetic?

      • For the IPCC models, a fatal error arises from the use of language in which a finite precision number (fpn), labeled as a “temperature,” is this model’s independent variable. Any model which asserts that the value of a temperature is the value of an fpn asserts a falsehood, thus being logically invalidated. In this way, the IPCC models are logically invalidated.

        There is a way out of this predicament. This is to rebuild each IPCC model such that a temperature is not this model’s independent variable. Instead, a set of non-overlapping bands of temperature values is a model’s independent variable. In the computer that runs the model, the value of an fpn points to the predicted band. A prediction is wrong if and only if the measured temperature value lies outside the predicted band of values.
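
        A sketch of this decision rule (the band edges and the values are hypothetical):

            def band_index(temp_c, edges):
                """Index of the half-open band [edges[i], edges[i+1]) holding temp_c."""
                for i in range(len(edges) - 1):
                    if edges[i] <= temp_c < edges[i + 1]:
                        return i
                return None  # outside every band

            edges = [13.0, 13.5, 14.0, 14.5, 15.0]  # non-overlapping bands, deg C
            predicted = 2                           # model points at [14.0, 14.5)
            observed = 14.27
            wrong = band_index(observed, edges) != predicted  # False: claim survives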

        Now suppose the computer that runs an IPCC model represents every number by an 8 decimal digit fpn. If the model builder were to employ this number as the pointer to the predicted band of temperature values, there would be 100 million bands. There is a price to be paid for the use of such a highly precise number. This is that the number of observed events that would be required for statistical validation of the model would exceed 100 million. It follows from the facts that a) climatology averages temperatures over periods of 10 or more years, and b) the global temperature and CO2 concentration records extend back in time less than about 150 years, that modern climatology is unable to provide more than about 15 observed events for use in the validation of a model that is pertinent to policy making on CO2 emissions and global temperatures. Fifteen is very much less than 100 million.

        If the model builder were to resist the temptation to employ the computer’s 8 decimal digit fpn as the pointer to the specific band of temperatures and to employ a 1 binary digit fpn instead, the benefit would be to reduce the number of events that would be required for validation of the model by a factor of 50 million. Though this would help, climatology would still be grossly deficient in the stock of observed events that were available for the validation of its models.

        Now, for didactic purposes, suppose a computer were to be invented that could store and operate upon a real number. This computer would support assertions by a climate model of temperatures in nature. However, the size of the sample that would be required for validation of these assertions would be infinite. As samples of infinite size are not realizable, the selection of temperature as the independent variable of the IPCC models can be described as a fundamental error in the construction of these models.

      • There is a price to be paid for the use of such a highly precise number.

        I guess if the IPCC was making claims that the uncertainty was due to round-off error in the least significant digit then you’d have a point.

      • I’d appreciate some details on your assertion. Precisely what claims are made by the IPCC and how can these claims be statistically validated when less than 15 statistically independent observed events are available for this activity?

      • Terry, I really look forward to your input/assessment of my Part III on detection and attribution, which addresses the overall logic of the IPCC’s argument.

  63. Jstults: “We have to be careful here…. A model could easily be validated for certain uses and not for others. For instance, I think current GCMs are perfectly valid for scientific uses (basic knowledge creation), but have a ways to go before they can provide useful support to many of the currently popular policy prescriptions.”

    Though I’m not a modeler myself, I have a friend (who prefers not to be brought into blog discussions, which is why I’m not mentioning his name) who has modelled climate systems for many years, since his postdoc days, and is now well known in both dynamical systems and numerical analysis.

    In my discussions with him about climate modeling he has repeatedly made the same point, jstults: it is not that climate models are bad tools, they are just good tools for something other than what they are popularly used for, by the IPCC etc. He also complains that there is an overly naive tendency among modelers to believe that the climate system can be treated like a hot brick (insofar as determining long-term “average” behavior), and he argues quite effectively that this is nonsense and pretty well guaranteed to lead to wrong answers.

    Judith’s original question might be answered by saying that there would be more confidence in climate models if they were not repeatedly used toward ends for which they are not well suited, such as long-term prediction and selling policy makers on catastrophic climate trends which are more likely to be artifacts of the medium (like the tropical tropospheric hot spot) than realistic aspects of climate.

  64. Bishop Hill has a link to a presentation by Mike Hulme entitled “How do Climate Models Gain and Exercise Authority?”
    http://bishophill.squarespace.com/blog/2010/10/22/mike-hulme-on-climate-models.html

    I haven’t had time to listen to it yet.

    • Judy:

      I viewed the video of Prof. Hulme’s presentation. Hulme’s ideas about how models should be validated are illogical. The illogic leads him to the non-conclusion that there is no answer to the question of whether the models should be trusted.

      Hulme seems to be saying that we don’t know whether there is a trustworthy basis for policy making or there is not such a basis. After the expenditure of $20 billion or so on climatological research, Hulme seems to be telling us that we have no information that bears on this question.

      Roughly speaking, I think Hulme is correct in holding this point of view. However, he exhibits no knowledge of how we might escape from this predicament. The route of escape is to employ logic in the construction and validation of the climate models. Here I’ll address only the question of validation.

      For Hulme, in the validation of a model one compares predicted to measured temperatures. If the predictive data “resemble” the measured data, the model is validated; otherwise it is invalidated. The resemblance is a heuristic which, like all heuristics, has the shortcoming of lying in the eye of the beholder. By this, I mean that person A may perceive resemblance under circumstances in which person B does not. In this way, Hulme’s idea of validation by resemblance violates the law of non-contradiction.

      Non-contradiction is the cardinal principle of logic. The violation of it by Hulme’s method of validation leads to Hulme’s inability to determine whether a model is trustworthy or is not trustworthy. He can’t make this decision about a model because person A may perceive resemblance under circumstances in which person B fails to perceive resemblance.

      Under a logical approach to validation, each prediction that is made by a predictive model is viewed as making a claim about the outcome of a statistical event and this claim is viewed as a logical proposition. If the predicted outcome is not observed, the associated claim is falsified.

      The IPCC models predict outcomes deterministically. In this kind of model, a single false claim is sufficient to invalidate the model.

      In illustrating his ideas on validation, Hulme shows a comparison of temperatures predicted by a model of Hansen to the associated observed temperatures. The predicted temperatures differ from the measured temperatures. Thus, in logic, Hansen’s model is invalidated. For Hulme, however, Hansen’s model is not invalidated.

      A model that predicts temperatures is virtually certain to be invalidated if the criterion for invalidation is logical, for a temperature is an example of a real number; hence in a finite interval there are an infinite number of them, and it follows from this infinite level of precision that a match between the predicted and observed values is extremely unlikely. One of the strategies that is available to a model builder toward avoidance of invalidation is to make the claim that the observed temperature will be one of the temperatures in a logical disjunction of the form T1 OR T2 OR… where T1, T2… are values of a real number and the elements of the set {T1, T2…} lie in a band. If Hansen had employed this strategy, his model would have predicted, in each statistical event, the band of temperatures in which the observed temperature would lie, and not the temperature itself.

      In his presentation of the results from Hulme-style validation of Hansen’s model, Hulme exhibits no grasp of the idea of a statistical event. There would be a finite number of statistical events, each extending over a specified period of time, but Hulme shows us a continuum of predicted and measured values. My guess is that neither Hulme nor Hansen grasps the elementary ideas of statistical reasoning in science. In its reports, the IPCC seems to reveal itself to be similarly clueless.
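
      To make the notion of a statistical event concrete, here is a sketch that forms decadal-mean events from annual data and tallies falsified band claims (all names are hypothetical):

          import numpy as np

          def decadal_events(annual, years=10):
              """Collapse annual values into non-overlapping decadal means."""
              a = np.asarray(annual, dtype=float)
              n = (len(a) // years) * years
              return a[:n].reshape(-1, years).mean(axis=1)

          def misses(events, bands):
              """Count events outside their predicted band; for a deterministic
              model, a single miss invalidates."""
              return sum(1 for e, (lo, hi) in zip(events, bands) if not lo <= e < hi)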

  65. Judith,
    Thanks for the kind words about my post on V&V of climate models. However, you appear to have misunderstood what I was writing about: “Steve Easterbrook has a superb post on what a V&V process might look like for climate models”. I’m not reporting what a V&V process *might* look like; I’m reporting what I’ve observed as actual practice in my detailed anthropological studies of the culture and practices at four major climate modeling centers, in the US and Europe. I agree that these practices aren’t as visible to those outside the modeling community as they ought to be. But your argument that the validation practices need to be improved is quite wrong – they are already better than any V&V process I’ve seen in any part of the software industry (and I’ve studied V&V practices in some of the most demanding safety-critical applications in the world).

    In my opinion, two things need to change:
    1) The actual culture and practice of climate modelers needs to be described to the broader world in much more detail (my studies are, in part, an attempt to do this). And…
    2) People outside the climate modeling community need to stop creating strawman arguments about lack of validation of climate models.

    I think the IPCC process itself has been a blessing and a curse for the modeling community. A blessing because it has brought them together around a common set of benchmark scenarios, and hence has driven progress far faster than might otherwise have happened. But a curse because it over-emphasizes the work on projections of future climate (which is where the models are weakest) and has under-emphasized the efforts to understand the detailed processes that shape current and past climates (which is what the models do best).

    • Steve, thank you for this clarification. It is certainly reassuring to hear your perspective, since you have carefully looked at climate models; others in software and engineering think otherwise. A key element of V&V has to be communication of this to the users, which seems to be a shortcoming. I still think that the validation is inadequate in terms of fitness for many of the tasks to which climate models are applied, and that insufficient attention has been given to exactly what these metrics should be.

    • Steve, I agree that vast improvements in communicating the status of the V&V and SQA activities in the climate science software community are badly needed.

      I disagree that those of us seeking additional information, and there are many as indicated by the number of comments made at various blogs, are simply throwing out strawmen. All of us have direct experience with the standard V&V and SQA procedures and processes used in various other science and engineering areas. The information available to the public from climate science is clearly lacking relative to those standards. There can be no question about this statement. These standard procedures and processes have a track record of proven successes. My experiences have all been associated with models, methods and software used to supply information that will become part of policy affecting the health and safety of the public, which is by far as important as anything else.

      In my own situation, the V&V and SQA cases for which I have been a part have all been conducted for private organizations so that the results are not available to the general public. However, the activities presently underway at some of the national laboratories here in the USA are in the public domain. I have posted specific links to some of the reports, papers, and presentations at various blogs, including yours, and will not repeat those here. Basic searches at osti.gov will easily locate many citations. The physical and software domains for these applications are as complex as those for climate science.

      It has been only recently, in the past five years I think, that papers investigating verification of the dynamical core of some GCMs have appeared in the public domain. That is over forty years after the first papers were published. One of the more troubling aspects of the present state of climate science is that, from the outside, verification, which must always precede validation, appears to be given short shrift. This is especially troubling because all software, without exception, can be verified. All the basic and fundamental components require verification. To skip complete and extensive verification and jump directly to validation is an invalid and unacceptable approach to V&V and SQA.

      Finally, you have visited four organizations and observed the activities associated with GCMs. I think there are about 20 other GCMs in use. Extrapolation of your limited investigations to these other models and codes cannot be justified by any means or methods. Additionally, there is much other software used in climate science and each and every piece requires independent V&V and SQA. Directing attention to experience with a very limited sub-set of climate science software might be considered a strawman.       

  66. Karl Hallowell

    An approach to consider here, one which has been extremely successful in other areas, is betting markets. Most of the value to today’s society of building climate models is in making predictions about the future. A common tool here is a sort of binary contract that pays out a fixed amount in total (say a certain amount of some underlying currency or investment security). One side, which I’ll call “YES”, pays from zero to the full amount depending on how closely some statement is to becoming true (for example, “The global mean temperature in 2050 will be 2C higher than in 2000” or “If 10% or more of the world’s electric power is still produced by coal burning plants in 2100, then global temperature will be 10C higher than in 2000 (as per the XYZ climate model prediction in 2010)”). The other side, which I’ll call “NO”, pays out the remainder. Payout occurs when the conditions for judgment of the contract are satisfied.
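
    In the simplest all-or-nothing case, settlement is trivial to state in code; a sketch (the $100 notional is an arbitrary choice):

        def settle(outcome_true, notional=100.0):
            """At judgment, YES takes the notional if the statement came true,
            NO takes it otherwise."""
            yes = notional if outcome_true else 0.0
            return yes, notional - yes

        # Before judgment, the traded YES price (between 0 and the notional)
        # is the market's running probability estimate for the statement.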

    There are several examples of this in practice. Two I recommend for study are Intrade and the Foresight Exchange, both of which already trade in a small number of climate-change-related bets.

    This has two advantages. First, it allows for the creation of a financial instrument that can directly reward someone for being right, well down the road. Second, it gives us a way to collectively estimate climate effects in the near term.

    Another approach is to create contests where entries attempt to make predictions about fixed future events, and the model with the best collection of predictions (by whatever metric is used by the contest) wins a considerable prize. (This approach incidentally is compatible with the market approach above. Among other things, you could produce contracts about the relative ordering of climate models in the final ranking for the prize and trade the contracts.)

    The point behind these approaches is that they provide a very public way to evaluate proposed models and a generous incentive to build working models. For example, it currently is not worth the while of small groups to attempt to compete with the various big academic players in building a climate model. Unless you have a chance at a government contract or an equivalent private source of funding, there is no possible payout for making the attempt.

    I see that as part of the problem here. The only people really trying to make long term climate predictions all have similar funding sources and come from similar academic environments. There is a serious lack of diversity. Something like a prize or a large market opens the field up to the other smart people in the world. Even if the climatologists end up on top (as they are likely to), they’ll have a validation in the public eye that goes far beyond anything that can be provided by the IPCC or the academic world.

    • Very interesting post, thank you, I am flagging this for further consideration.

      • Karl Hallowell

        For some really good speculation along these lines, check out some of the writings of Robin Hanson. He’s an economics professor at GMU. He also has a blog, Overcoming Bias. I particularly like his papers “Shall We Vote on Values, But Bet on Beliefs?” and “Are Disagreements Honest?” (coauthored with Tyler Cowen). He was the prime intellectual force behind the Foresight Exchange and its less functional predecessor, “Idea Futures”.

        The climatology debate happens to be one of his favorite examples of how markets can resolve issues that are both complex and which contain many conflicting special interests.

      • Very helpful for the post I am working on right now!

  67. Somewhat belatedly, I offer thoughts on issues raised here from the perspective of someone with a relatively superficial knowledge of climate model construction but a reasonable understanding of how the climate system behaves and the physics underlying that behavior. I also do this with the wish that one or more expert climate modelers were undertaking this task instead, because their input would be valuable in response to some specific comments made earlier. On the other hand, the recent comment by Steve Easterbrook reinforces my willingness to proceed with the following tentative set of views. I’ll confine my comments to the predictive value of models, which is the area of performance most subject to challenge.

    Complexity of the climate system and uncertainty surrounding input data preclude extremely accurate predictions, but they are not incompatible with an ability to approximate real world outcomes within a range narrow enough to justify future planning on the basis of reasonable probabilities. In fact, this ability has been demonstrated by numerous examples. Most of these entailed hindcasting, but Hansen’s 1988 model projections have exhibited some skill in a forecasting mode, despite his use of inputs now known to overestimate the most likely value of climate sensitivity. I suppose that if his estimates were corrected to reflect current values, his modeled scenario B would likely correspond closely to what actually transpired, rather than somewhat overestimating it. That is conjecture, because it is illegitimate, once a model is tuned to a starting climate, to retune it later in order to make the results conform to expectations.

    It has been claimed that an even better forecast than Hansen’s could have been made without models, simply by extrapolating past trajectories into the future. That’s true. Using the past to predict the future is a good heuristic. It works – except when it doesn’t. One of the potential virtues of modeling is the ability to predict deviations from past trends based on changes in the factors determining the trends. It is self evident that anthropogenic greenhouse gas emissions, aerosols, and land use changes are candidates for this type of application. It is also likely that models will be more useful when the deviations are modest and/or gradual rather than sudden and extreme in amplitude.

    Models also perform unequally in other regards. They do temperature well, and hurricanes less well. Most conspicuously, they perform well in projecting global outcomes on multidecadal or centennial scales and disappoint in their ability to address regional or short-term outcomes.

    Why is that? A disconcerting feature of much commentary here and in other blogs is the relentless refrain claiming that the “climate system is chaotic.” It isn’t. Rather, elements within the system appear to behave chaotically according to our current understanding. These elements contribute considerably to short term and regional unpredictability, but as the interval is extended and the regional coverage expanded, these variations can be shown to even out, so that unpredictability is increasingly outweighed by non-chaotic forces that operate long term.

    Ultimately, climate, unlike weather, is rather predictable. Living in the Eastern U.S., I can confidently predict that the average temperature in my area next July will probably be in the eighties (Fahrenheit), and will be considerably warmer than the temperature in Fairbanks, Alaska. More importantly, I am equally justified in making that prediction whether today is unseasonably warm or cold, or whether it is raining or the sun is shining. These initial conditions impinge little on what will happen next July here or in Fairbanks.

    Models behave similarly in regard to the types of scenarios where they perform well. Within ensembles, short term outputs are sensitive to initial conditions, but over the longer term, the models tend to converge toward similar outcomes rather than diverge. That is not the case for all scenarios addressed by models, and signifies inadequacies that remain to be addressed.
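
    A toy illustration of this weather/climate distinction, using the Lorenz system rather than anything resembling a real climate model (step size and run length chosen only for demonstration): nearby starting points yield wildly different trajectories, yet their long-run statistics nearly coincide.

        import numpy as np

        def lorenz_x(x0, steps=50000, dt=0.001, s=10.0, r=28.0, beta=8.0 / 3.0):
            """Crude Euler integration of the Lorenz system; returns the x series."""
            x, y, z = x0
            xs = np.empty(steps)
            for i in range(steps):
                x, y, z = (x + dt * s * (y - x),
                           y + dt * (x * (r - z) - y),
                           z + dt * (x * y - beta * z))
                xs[i] = x
            return xs

        run1 = lorenz_x((1.0, 1.0, 1.0))
        run2 = lorenz_x((1.0, 1.0, 1.0 + 1e-6))          # tiny perturbation
        print(run1[40000], run2[40000])                  # "weather": diverged
        print(np.abs(run1).mean(), np.abs(run2).mean())  # "climate": nearly equal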

    I have discussed the predictive power of climate models, but in a larger sense, I’m not sure that models ever make predictions about many of the most contentious phenomena in climate science, or if they do, I suspect it is rare. In these cases, they do not “predict” so much as “quantify”. I suppose I’m exaggerating to make a point, but the point is that what models say will happen is what the basic principles of physics, in concert with observations, say will happen; the models don’t invent the results but rather assign a number to them. We do not need models to anticipate that significant rises in atmospheric CO2 concentrations harbor the potential to raise temperatures significantly (Fourier, 1824, Arrhenius, 1896), nor that the warming will cause more water to evaporate (confirmed by satellite data), nor that the additional water will further warm the climate, nor that this effect will be partially offset by latent heat release in the troposphere (the “lapse-rate feedback”), nor that greenhouse gas increases will warm the troposphere but cool the stratosphere, while increases in solar intensity will warm both – one can go on and on.

    It would not be unreasonable to challenge the physics – for example by claiming that the principles do not adequately account for offsetting factors, but those who make that argument should acknowledge the extent to which their quarrel is with climate physics rather than climate models. When that distinction is acknowledged, I suggest that the models will still be vulnerable to challenge when the quantitation portends meaningful differences in outcomes (e.g., when it determines the sign of cloud feedbacks), but on balance, the models will emerge with rather good credentials, and the consequences of their imperfections will be seen to be less dire than sometimes claimed. In particular, I hope that impugning models as a means of rejecting serious concerns about the future consequences of anthropogenic CO2 emissions will be seen as misguided – based on the false assumption that without models, the edifice of climate prediction will collapse. Even in that unlikely circumstance, the edifice would survive, but the models, with all their imperfections, add further confidence to what the basic principles alone predict. And with the acknowledgment that 100 percent confidence is unattainable, I will end this comment.

  68. The overall problem with models is that they are designed from the start to view GHGs as the central assumption, so a model output that gives an answer related to GHGs is the expected one by definition. They are wholly unable to predict that which is not an input, such as cloud formation via cosmic rays. This factor, as far as we now know, may or may not be a critical factor. If it turns out to be so, almost all model outputs to date are largely invalid.

    As Jerry Pournelle has famously observed, “the map is not the territory”: models can be believed only when one inputs data from a date preceding the MWP and the thing predicts/simulates the MWP. You then supply data from, say, AD 1450 and it correctly predicts the LIA. At that point you can say it works. Not until. This has yet to happen. Thus far models can’t even recreate past events with any reliability/skill. And no, fudging data, massaging it gently, and adjusting parameters until you get the one of (n) “hindcasts” that matches up doesn’t count. Rather, plug in the conditions and let it rip. It either works or it doesn’t.
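
    The test can even be written down as a harness. A sketch under loudly-stated assumptions (run_model, the proxy reconstruction, and the persistence baseline are all hypothetical stand-ins):

        import numpy as np

        def rmse(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.sqrt(np.mean((a - b) ** 2))

        def hindcast_skill(hindcast, reconstruction, baseline):
            """Positive skill means the blind run beats the naive baseline;
            no retuning allowed between initialization and scoring."""
            return 1.0 - rmse(hindcast, reconstruction) / rmse(baseline, reconstruction)

        # e.g. hindcast_skill(run_model(pre_mwp_conditions),
        #                     proxy_reconstruction, persistence_forecast)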

    Dr Curry, it’s probably a simple knee jerk response to dismiss Pournelle as merely being an old guy with no climate qualifications, but in my book when a guy with a 170 IQ and arguably the smartest cookie in the room says something, it’s usually a good idea to listen. If models can’t convince someone with that much mental horsepower, they’re probably inadequate to the task, meaning that confidence ain’t going to happen.

    • I’m not familiar with Pournelle but I have heard the name, can you provide some links?

    • Alston, your very first statement, “The overall problem with models is that they are designed from the start to view GHGs as the central assumption,” is wrong.

      If you can provide a good set of boundary condition data for the pre-MWP time period, I’m sure climate modelers would be happy to do the runs for you. Care to give it a shot?

      • Since you are here, how were clouds modeled?

      • You’re correct, Gavin – I had forgotten about the 3.6 runs. My bad!

      • So guys, what about the following paper on cloud formation:

        http://www.atmos-chem-phys.net/10/10941/2010/acp-10-10941-2010.pdf

        How do the models account for cloud formation this way?

        Is it that hard to understand why so many don’t have high confidence in climate models?

      • What of the paper? See my comment at

        http://judithcurry.com/2010/11/24/engaging-the-public-on-the-climate-change-issue/#comment-15034

        This paper isn’t the death knell for AGW nor for GCMs – heck, they used a GCM as part of their work!

      • Derecho, I don’t see how the fact that a GCM is used in this paper somehow negates Martin’s point. The point is not that GCMs are considered to be valueless, only that they are not considered to be appropriately verified for use in long-range prediction at a level appropriate for informing public policy. The work on clouds only illustrates an underlying problem with GCMs which, so far, has proved intractable, and which argues against any such use.

        One of the strongest opponents I know of this appropriation of climate models is the preeminent expert in numerical analysis and dynamical systems, Chris Essex, who himself worked on climate models for years. Try telling Chris that GCMs are worthless, and you won’t get far — the point is not their lack of worth, it is that there are certain purposes for which they are unreliable and therefore inappropriately used. Their use in the international political forum today is arguably one of those purposes.

      • FYI see my reply further down – I see I didn’t hit the right ‘reply’ button – wanted it in this location . . .

      • As per U. Wisc Prof George E.P. Box, “all models are wrong, but some are useful.” I don’t think anyone is claiming the death knell for that which is essential to understanding. What’s debated is the degree of usefulness.

      • Very interesting, Mr. S. For those of us unfamiliar with the literature, can you answer the most pressing question about this as a reply to Alston’s question: are the paleoclimate runs referred to in this abstract performed by one of the models used for contemporary climate prediction and informing the global political process — i.e., one of those referred to in the IPCC reports? Which one?

        I was unaware that the models referenced by IPCC take into account “orbital, solar, volcanic” influences, which your abstract identifies as “chief” drivers during this period. For those of us not familiar enough with the acronyms in the technical literature and without enough time on our hands to read papers in someone else’s discipline, can you say a few things about what models are telling us about these forcings today?

        In the abstract you say that you can easily define certain effects (in the MWP period) such as GHG influence, while “the reconstructions of solar, volcanic and land use-related forcing are more uncertain”. Really? As I understand it, isotope proxies of cosmic ray penetration are used for pretty good solar activity estimation, and ice core ash is a pretty reliable proxy of volcanic activity. I’ll accept that land-use forcings are harder to get, especially since around that time it is believed the Native Indians in the North American central plateau were burning vast stretches of forested plains (which are wastelands today as a result), which must have had some sort of climate consequences, and who knows for sure what was going on in the far east, Oceania, Africa, South America and Australia?

        Given the poor time resolution of recent (< 2K years) ice core CO2 proxy, how is it that the abstract says with confidence that GHG forcings for this period are "easily defined"? (The standard online databases don't even provide this data for anything under 2K years.) Thinking about the statements in the abstract it sounds a bit like you guys just pulled the boundary conditions for this project out of your … ah, well, you know.

  69. I could have been a bit clearer about “confidence in the climate models”. I don’t doubt climate models are a useful tool in understanding global circulation patterns.

    The issue I have is the claims made (using the climate models) that CO2 increases will raise temperatures sufficiently to cause all the catastrophes (more violent weather, more hurricanes, more drought, sea level rise of 2-3 feet this century).

    If cloud cover can vary as noted in the paper, and just a few percent change in cloud cover can have a large effect on temperatures, then how much confidence can one have in the claim that “CO2 is the greatest driver of increased temperatures”?
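
    The arithmetic behind that worry is simple. A back-of-envelope sketch, assuming a net cloud radiative effect of roughly -20 W/m2 (a commonly cited satellite-era estimate) and the canonical 3.7 W/m2 forcing for doubled CO2:

        net_cloud_effect = 20.0                 # W/m^2, magnitude of net cloud cooling
        perturbation = 0.05 * net_cloud_effect  # a 5% change in the cloud effect
        co2_doubling = 3.7                      # W/m^2, canonical 2xCO2 forcing
        print(perturbation, perturbation / co2_doubling)  # 1.0 W/m^2, ~27% of 2xCO2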

  70. “As climate models become increasingly relevant to policy makers, they are being criticized for not undergoing a formal verification and validation (V&V) process analogous to that used in engineering and regulatory applications. Further, claims are being made that climate models have been falsified by failing to predict specific future events.”

    They are also falsified by their failure to include the cloud variation we know has a large effect but can’t quantify.

    End of story.