NASA Earth Science Advisory Subcommittee

by Judith Curry

This week I am attending a meeting of the Earth Science Subcommittee of the NASA Advisory Council.  As described in the Public Notice for the meeting, the topic of this meeting is evaluation of NASA’s Earth Science Modeling and Activities.

The ppt presentations from the meeting are stored temporarily in a Dropbox; they will remain at this site for 14 days, so download anything you would like a permanent record of.

Overview from Mike Freilich

Michael Freilich is Director of NASA’s Earth Science Division.  A few points of interest from his overview presentation (ESDUpdateFreilichESSMay11):

As per a verbal statement, NASA is the source of 48% of the funds for the USGCRP (U.S. Global Change Research Program).

The launch of the Glory satellite failed for the same reason as that of the OCO (carbon dioxide) satellite.  The issue is the Taurus-XL launch vehicle, which has crashed on three of its last four launches.  The problem is that there isn’t any other certified U.S. launch vehicle for this range of payload weights.

The Glory satellite was to carry a TIM (Total Irradiance Monitor) and an APS (Aerosol Polarimetry Sensor).  Unlike the OCO mission that failed, a copycat version of Glory will not be developed.  Re the TIM, the SORCE and ACRIMSAT missions will continue until at least 2016, so solar radiation measurements are covered.  Re the APS, this sensor is being reassessed by two separate review panels.

An interesting tidbit: Glory was classed as a “C” priority mission (the lowest priority that actually flies) with a class 2 launch priority (3 is the top priority, with the best launch vehicles).  If all satellites were class 3, priority A, the program would be unaffordable.  Whether the APS or some version of the polarimeter survives remains to be seen.

A “launch vehicle crisis” was discussed: “ESD/SMD/NASA is losing reliable, predictable, access to space via affordable, proven launch vehicles. After 2 consecutive failures of the Taurus-XL LV, there is no certified U.S. LV with capacity between the Pegasus (440kg to LEO) and the Atlas-V (9750-29,240 kg to LEO).  LV availability and reliability problems are causing launch delays and cost increases now, and will have greater impacts on the Earth and Space science programs in the coming decade.”  The issues are the high cost of private sector launch vehicles, the requirement to use U.S. launch vehicles (to support the development of a viable private sector launch vehicle capability in the U.S.), and a mismatch between private sector practice (continual improvement of vehicles) and NASA’s certification requirements.

GISS Model

The GISS climate model was presented by R. Miller, pdf link here.  A few comments:

The GISS model is gradually coming to more closely resemble the other U.S. climate models.  For CMIP5 (AR5), they have increased the horizontal resolution to about 2 degrees.  They are using the NCAR/DOE sea ice model.  Their aerosol-cloud interaction (aerosol indirect effect) is far more advanced than the (nonexistent) aerosol indirect effect in the NCAR climate model.  The GISS group is conducting the complete suite of CMIP5 simulations.

One of the committee members asked what kind of outside advisory group provides advice and evaluation of the GISS model.  The response was that there is a very large number of people on the model development team, and that they write a new proposal every 5 years for renewed funding, with peer and programmatic review.  Note that the NCAR climate model has an extensive advisory committee structure.  The NOAA GFDL model received a comprehensive review a few years ago; I can’t find the details of this, but when I was a member of the NOAA Climate Working Group, there was a ~2-day on-site review that the CWG participated in, as well as a designated review panel that wrote an extensive report.

GEOS5

Michelle Reinecker’s presentation is ESS May2011 Reinecker.pdf in the Dropbox.  NASA’s other global model is GEOS5, which is a completely separate development from the GISS model.  It focuses on higher-resolution, shorter-time-scale simulations, with a particular emphasis on the assimilation of satellite data.  Their new atmospheric reanalysis, MERRA, is advertised as being comparable to the ECMWF Interim Reanalysis, with a particular focus on the atmospheric hydrological cycle.  I use the ECMWF product, but this definitely motivated me to take a look at MERRA.

Discussion with Edward Weiler

Edward Weiler is Associate Administrator for the Science Mission Directorate.  We had an hour of discussion.  He provided his perspective on what is going on on “the Hill.”  The importance of international coordination of space missions was emphasized, with budgets shrinking not only in the U.S. but also in Japan.  The critical issue of the polar orbiters was discussed.

Several committee members (myself included) mentioned the importance of an increased emphasis on the applied science program to demonstrate the societal value of the satellite data.  Quick availability of the data and real-time data handling are critical if the satellite data are to be used for applications.

Evaluating CMIP5 Simulations

Duane Waliser gave an excellent presentation; see Waliser ESS/NAC PPT in the Dropbox.  The presentation is about making better use of satellite data for climate model evaluation.  In particular, a new effort is underway to bridge the gap between the satellite data sets and the people evaluating climate models, the bridge consisting of identifying, formatting, archiving, and delivering observations in a form useful for model analysis (which requires model, observation, and IT expertise).  The basic tenets of the activity are:

• To provide the community of researchers that will access and evaluate the CMIP5 model results with analogous sets of satellite data (in terms of periods, variables, temporal/spatial frequency, and dissemination).
• To be carried out in close coordination with the corresponding CMIP5 modeling entities and activities – in this case PCMDI and WGCM.
• To directly engage the observational (e.g. mission and instrument) science teams to facilitate production of the corresponding data sets and documentation.

The current problem is that there are numerous instrument-specific data sets for each measured variable, in inconvenient formats.  This project should be a huge boon for climate model validation.
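
To make the formatting step concrete, here is a minimal sketch of the kind of reformatting involved, assuming Python with the xarray library; the file names, variable names, and attributes below are hypothetical, not taken from the presentation:

```python
# Minimal sketch: convert an instrument-specific satellite file into a
# CF-style NetCDF that model-evaluation tools can read alongside CMIP5
# output.  File names, variable names, and attributes are hypothetical.
import xarray as xr

ds = xr.open_dataset("instrument_native.nc")  # hypothetical instrument file

# Rename to standard (CF/CMIP5-style) variable and coordinate names.
std = ds.rename({"sst_retrieval": "tos", "Latitude": "lat", "Longitude": "lon"})

# Average to monthly means so the temporal frequency matches model output.
std = std.resample(time="1MS").mean()

# Attach the metadata that downstream comparison tools key on.
std["tos"].attrs.update(standard_name="sea_surface_temperature", units="K")
std.attrs.update(source="hypothetical instrument", frequency="mon")

std.to_netcdf("tos_obs_monthly.nc")  # standardized, model-comparable product
```

The point is not the particular library; it is that once every data set carries the same variable names, units, calendar, and frequency, model-versus-observation comparison becomes a routine operation instead of a per-instrument project.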

Other presentations

Additional presentations can be downloaded from the Dropbox:

Model V&V

Subcommittee member Robert Schutz asked about model verification and validation (V&V), which was the topic of a previous Climate Etc. post.  The response was that there is extensive model verification, but the climate modelers seemed unfamiliar with verification in the context of V&V.

NASA is no stranger to V&V; in fact they have an IV&V Facility, with the following mission: 

Welcome to the NASA IV&V Facility, home of the NASA IV&V Program. The NASA IV&V Program has embarked on a process to establish an increased value-added/needed presence within the NASA community. The process centers around the NASA IV&V Program’s main purpose of offering needed software services, including IV&V of critical software under development, systems engineering support, and software assurance research.

Vision

The NASA IV&V Program provides confidence and integrity in software that cannot be found elsewhere.

Mission

The NASA IV&V Program will reduce the inherent risk in the Agency’s ability to procure, develop, deploy and operate software within desired cost, schedule and performance goals by

  • Performing IV&V on safety and mission critical software
  • Providing software expertise to the Agency’s SMA activities
  • Conducting research that improves IV&V and software assurance methods, practices and tools
  • Performing Science, Technology, Engineering, and Mathematics (STEM) outreach
  • Performing management and institutional services with excellence

Well, needless to say, NASA’s IV&V does not occur in the Science Mission Directorate, but rather in the context of mission development.  But as a result of this IV&V heritage, NASA may be more open than other modeling centers to considering some sort of formal V&V for its climate models.

As per Jack Kaye, NASA’s lead person on the USGCRP, further motivation for this is coming from the USGCRP’s need to consider the Information Quality Act in its forthcoming assessment reports.  Per the discussion, the USGCRP was planning on relying heavily on peer review in this regard, but I don’t think that is going to satisfy the lawyers and OMB.  The Information Quality Act, in combination with the EPA endangerment finding and the USGCRP’s assessment process, may provide the impetus for a more formal and thorough climate model V&V.

This topic was discussed for almost an hour.  I am not sure if or how NASA will respond, but the issue is on their table.

124 responses to “NASA Earth Science Advisory Subcommittee”

  1. Dear Professor Curry,

    Congratulations on your selection and thanks for sharing the information.

    I hope that you will be able to convince NASA leaders of the merits of divorcing itself from politicians and from the unhealthy influence of members of the National Academy of Sciences.

    The problem: Those seem to come to the agency with government funds.

    With kind regards,
    Oliver K. Manuel
    Former NASA Principal
    Investigator for Apollo

    • NASA is to survive with Government Funds. Climate funds are easy money. Politicians and NAS members influence easy money. NASA survives on easy money.

      • I agree, Sam.

        As Professor Curry reports above, Michael Freilich, Director of NASA’s Earth Science Division, admits that NASA is the source of 48% of the funds for the USGCRP (U.S. Global Change Research Program).

        How crazy is that?

        We sacrificed the world’s very best space science program because someone thought we could stop a natural process, climate change!

        Now the USA is losing its position of leadership in the world and our political leaders are still trying to sacrifice our economic strength to prevent climate change – the very process that has driven and will continue to drive the evolution of life long after we’re gone.

        Al Gore – the champion of “space sciences” who “invented” the internet – apparently found a way to use the popularity of NASA’s space science program to fund his crazy, unscientific plan (with UN leaders) to stop climate change without getting Congressional approval.

        Al went on to make a Nobel-Prize winning discovery, “An Inconvenient Truth” (A Convenient Untruth?), and almost became President of the World before e-mail leaks exposed the fraudulent base of his program.

        What a sad state of affairs.

      • Oliver, the reason NASA gets such a large % share is that it includes the satellites and launches (including the ones that just failed), which is most of the hardware side of the climate research program. Most USGCRP agencies just get research money. This has been true since the USGCRP was founded by law in 1990. You can read the latest budget details here (in the back): http://downloads.globalchange.gov/ocp/ocp2011/ocp2011.pdf

        I doubt that the space science program was “sacrificed” for the climate research program, which is pretty small.

      • NASA human spaceflight and NASA science, as far as I understand things, are entirely separate funding pools. Human Spaceflight is by far the bigger piece of the pie, despite the nothing-beyond-earth-orbit-in-40-years syndrome it suffers from.

      • “entirely separate funding pools” coming out of the same USA government budget is another false illusion!

        The USA budget has been foolishly squandered.

        Government funds intended for space-NASA, energy-DOE, environment-EPA, defense-DOD, etc. were diverted to support false propaganda by the UN’s IPCC, Al Gore and an army of government-paid climatologists.

        Nothing, absolutely nothing – including Social Security – is now safe from budget cuts.

        Why ?

        I do not know the answer.

        Is it a coincidence that the AGW campaign started about the time the Berlin wall came down (1989)?

        Was world communism defeated in 1989?

        Or did Nikita Khrushchev correctly describe how the communist movement would respond?

        “We will bury you!” [1956, 1960]

        1956 Time Magazine:

        http://www.time.com/time/magazine/article/0,9171,867329,00.html

        1960 Address to UN:

        http://www.youtube.com/watch?v=8Xv7z5h7yBQ

      • Sam, I do not understand what you are saying, not a single sentence of it, which is impressive in a way. NASA is a government agency, so government funds are all it gets. It is part of the government. Nor is there anything easy about the federal budget process, but then I have no idea what you mean by “easy money” here. Unless you think that taxes are easy money.

      • I suspect that David is unfamiliar with the workings of our government, inside the beltway.

      • On the contrary Oliver, the workings of the government inside the beltway are my special province. I work closely with policy makers, staffers, analysts and lobbyists, and have done so for 40 years. How about you? Can you explain what Sam is saying?

      • Thanks, David, for admitting here that “the workings of the government inside the beltway are my special province.”

        “I work closely with policy makers, staffers, analysts and lobbyists, and have done so for 40 years.”

        I am not surprised that government agencies have people watching blogs like this to try to discredit anyone who might let the public know how our government really operates inside the beltway of Washington, DC.

        It won’t work. The search for truth – by practicing the basic principles of science and/or spirituality – is sacred:

        “Truth is victorious, never untruth.”
        Mundaka Upanishad 3.1.6; Qur’an 17.85
        Numerous verses from other scriptures

        Additional information was posted at 6:22 pm, May 12, 2011.

      • Oliver, no one is paying me to blog here and I am as big a skeptic as you are. The problem is that a lot of what you say about how the government funds climate science just is not true. How science operates is one of my fields of study.

        Funding of climate science is an elaborate process requiring many reviews and approvals. The USGCRP budget document, created by law in 1990, provides a relatively detailed accounting on an agency by agency level. Each general item must generally be approved by four different Congressional committees, two House and two Senate. Within each agency every grant and contract award is publicly available. This is the stuff I study. No one is sneaking anything through. (Nor is there any “soft” money, whatever that means here, as the term is usually used in conjunction with campaign finance.)

        Instead of attacking me personally you should try debating the facts.

      • Sam is saying that, compared to producing a product that pays for the research, applying for grants and getting approved is easy money. He’s correct, assuming I am correct in interpreting his words. NASA does survive on easy money. If it were a commercial enterprise, it would long since have gone out of business, even if you presume it was allowed to operate at a loss. It would have gone out of business because its culture of management decisions largely driving launch failures is unacceptable. NASA started off as a select group of highly motivated smart guys with the blessing of the American people to explore space. Presidential politics turned them into just another federal agency, although strangely NASA has to justify its budget each year, unlike, say, the Department of Commerce.

        It is no surprise to me, therefore, that in order to fund NASA science programs, they’ve pursued climate change doggedly. A fear was created; NASA has a good science program, so they went for funding by pursuing the political winds. There’s nothing surprising about this. Whenever humanity gets excited about superconductivity again, a similar rush for grant money will appear to fund the next breakthrough. Science works in bubbles of funding that inflate on excitement and deflate on reality. It has been this way for a while now. Note that I’m not saying it’s the best way to do things, but it has worked this way without major issue for a while.

        What is abhorrent is how the more money you give scientists to research climate change, the more one-sided the presentation seems to get. The press never announces, “All is well, according to new NASA climate change study.” You never see, “Warming climate might not be as bad as thought.” You’ll never read, “Adaptation to climate change more cost-effective than mitigation.” Instead the public is shovel-fed more fear about looming catastrophe, which feeds into more political movement to fund research, which then backfeeds the same scientists spreading fear. These people, instead of appropriately biotch-slapping the media for presenting non-nuanced primal fear of doom, simply continue to bask in the fame (if not also the fortune) of being champions of saving the world. NASA, for all the good it does, cannot extricate itself from being a major player in this corruption of the process; it will have to face the music like any scientific organization when the unwarranted fear subsides and the public sobers up.

        We’re all happy that NASA gets some excellent sensor packages into orbit out of the deal. We’re not so happy that the public has been sold fear to help justify continued funding. Selling fear is easy funding. Selling a worthwhile product is much harder. If NASA had to live and die by the accuracy of its climate projections (akin to weather forecasters), it wouldn’t survive.

      • “Easy money” is government money, disguised so that it will be easy for politicians to vote for it. For example,

        Funds for space-NASA, energy-DOE, environment-EPA, defense-DOD, etc. that were diverted to support the illogical, unscientific plans by the UN’s IPCC, Al Gore and an army of government-paid climatologists to stop the natural process called climate change.

        Oliver K. Manuel

    • 1. NASA is the source of 48% of the funds for the USGCRP (U.S. Global Change Research Program) – Michael Freilich, Director of NASA’s Earth Science Division.

      2. China is increasingly able to destroy or jam our space satellites – Gregory Schulte, Assistant Secretary of Defense in charge of Space Defense [1]

      3. China plans to hike its defense budget 12.7 percent in 2011, to 601.1 billion yuan ($91.7 billion) – Gregory Schulte, Assistant Secretary of Defense in charge of Space Defense [1]

      Can you help us and the American public find out how much and why government funds – intended for other purposes (space-NASA, energy-DOE, environment-EPA, defense-DOD, etc., etc.) – were diverted to support efforts by a power-hungry group of scientific illiterates to stop the natural process called climate change?

      1. “Worried on China, US seeks rules in space,” PhysOrg.com (11 May 2011)

      http://www.physorg.com/news/2011-05-china-space.html

    • Professor Curry,

      Your readers may also enjoy comments recently published on the American Establishment by Walter Mead:

      http://blogs.the-american-interest.com/wrm/2011/05/12/establishment-blues/

    • A flea has as much chance of dragging an elephant around by its tail as the government-paid army of climatologists has of stopping climate change, . . .

      The natural process, recorded in Earth’s geologic record, that helped drive the continuous evolution of life.

      The motivation for the AGW scare is starting to be revealed in discussions elsewhere about the decline of the USA’s formerly dominant position in the world’s economy, defense, and space.

      http://www.physorg.com/news/2011-05-china-space.html

  2. V&V is poison for AGW, and climatologists on the gravy train know it.

  3. “The presentation is about making better use of satellite data for climate model evaluation”

    So data collection is slave to the model, which is slave to the IPCC deadline. Not sure how much actual science can be extracted out of that.

    • steven mosher

      Did you even read the presentation? Anyone who has worked with sat data sets and climate data sets understands the problem they are talking about. One issue is making use of the data already collected. So a big part of the work is putting the data into easily used formats with standardized outputs.
      Sheesh. Spend more time reading, less time typing.

      • Steven – if you look at the totality of NASA climate-related satellites, they have all been designed NOT to collect multidecadal climate data, but to collect WHATEVER might be useful “for climate model evaluation”.

        Operational lifetimes have always been <7 years (despite interplanetary probes having survived much longer); the chosen 700-km orbits cannot, even in theory, be sustained for more than 30 years; and there is no plan to send one satellite after another of a similar type to achieve what the Meteosats have done for the weather.

        That’s why that sentence struck a chord with me: instead of studying the climate, they are trying to be useful to the climate modelling community (which in turn is trying to be useful to the IPCC, i.e. the policymakers). We are wasting time doing some kind of “pretend science” instead of observing the Earth.

      • steven mosher

        So you didn’t read the presentation, and you still misunderstand.
        The presentation is about more than the NASA ‘climate science’ sats; if you read it you would have seen the DOD platform, among others.

        http://trmm.gsfc.nasa.gov/
        http://cloudsat.atmos.colostate.edu/
        http://podaac.jpl.nasa.gov/OceanWind/SSMI
        http://aqua.nasa.gov/about/instrument_airs.php

        SECOND, you don’t understand how sat data is used in validating models. Nice try with the decade nonsense, but it shows you didn’t read the presentation or any of the material it referenced. I’ll also suggest you read some of Palmer’s work on data assimilation and GCMs.

        So, instead of flying off to hackneyed talking points from one sentence, do some reading.

      • I agree. This sounds like yet another example of the mistaken idea that the models are somehow the core of the science. In reality these monster models are just the repository for processes we think we understand fairly well, well enough to write detailed equations for anyway. This means that the models are about as far away from the scientific frontier as one can get.

        In times of shrinking budgets we should not be turning satellites into model input sensors, given that the primary role of these models seems to be to support the IPCC, which is not a scientific role. How about formatting the data for research purposes instead, which may be highly specialized?

        In my view the models have become parasites in the scientific system, something to be cut not fed.

      • steven mosher

        Did you read the presentation? Simple yes or no.

        I am not talking about the presentation, Steve. I am talking about the role of the models in climate science. Do you see the difference? For several years I did staff work on the US Interagency Working Group on Digital Data (IWGDD). I have a very good idea what this is all about.

      • steven mosher

        “In times of shrinking budgets we should not be turning satellites into model input sensors, given that the primary role of these models seems to be to support the IPCC, which is not a scientific role. How about formatting the data for research purposes instead, which may be highly specialized? ”

        That is certainly not being done. In fact, if you read the presentation you would see that much of what they talk about is data that has already been collected. Simple things like standardized file formats. The data is ALREADY formatted for scientific purposes. The question is whether that work can be repurposed to do what many here scream about: model validation and improvement. Take the sea surface salinity problem, a known issue in GCMs. It seems abundantly clear that the only way that aspect of the models will be improved is by using the data that will become available from space platforms. Similarly with GRACE.
        If you have ever tried to use any of the digital products produced by various agencies, then you should understand the importance of adhering to and developing standards and tools.

      • Standardized formats sound nice but they reduce creativity and flexibility. They are only justifiable when there is a major higher purpose and feeding GCMs certainly does not qualify. I am not interested in “improving” these AGW models. They are the problem. What part of “parasitic” don’t you grasp? I want these models defunded so I do not want to burden the sat people with reformatting their data to feed them.

      • moshe –
        You’ve obviously never had to deal with the instrument PIs. Their attitude?
        “Standards??? We don’t need no stinkin’ standards.”

        Let’s just take the UARS program, for example –
        There were 10 instruments on board, ALL of them constrained to the same spacecraft bus interface standards, all of them constrained to the same data output format. I know that because I wrote the Interface documents for 3 of them – and we all worked to the set of standards (which were generated to match the spacecraft harness assembly and the comm/data storage system constraints).

        Some months later, the instruments started to arrive for I&T. There were 10 instruments, only one of which met the bus interface standard, and none of which met the data format standard. Lots of excuses/reasons/lies. And the PIs refused to change anything.

        What standards?

        It was even worse on Terra because there were bandwidth standards that were overrun by nearly all the instruments. And the PIs refused to cut their bandwidth usage to their agreed allotment, so data recovery was a nightmare that had to be handled by the ops crew. But I was gone by the time that one launched. And happily so.

        What standards?

        Now – we can talk about data formats if you like, but that story is no better – and certainly no more standardized.

        The problem isn’t developing standards and tools – it’s enforcing those that have been in place for the last 20 or more years.

      • David,
        In many fields experiments are totally worthless without the help of models. Without models, every number given by the experiment is meaningless and nobody can use it for any purpose. Everybody who has worked with experimental data on complex systems obtained by sophisticated equipment knows that, not only in atmospheric sciences but in all fields of research.

        These presentations discuss this point in many places. Satellite data is even more dependent on models than most other complex experimental data. Its sheer volume means that it must be processed by models. Satellite data is also collected from the outer edge of the volume observed, which means that it must be subjected to the methods of inverse problems. That means that models must be applied, and most of those models must describe the atmosphere; i.e., the use of atmospheric models specifically is mandatory.
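
        To illustrate what “methods of inverse problems” means here: the satellite measures radiances y, a forward model K maps the atmospheric state x to radiances, and the retrieval inverts that mapping. Below is a toy sketch in Python; the Jacobian, the noise level, and the regularization are all made up for illustration, and this is not any operational retrieval algorithm:

        ```python
        # Toy linear retrieval: radiances y = K x + noise; recover the state x.
        # The Jacobian K, the noise level, and the regularization are made up.
        import numpy as np

        rng = np.random.default_rng(0)
        K = rng.normal(size=(8, 3))           # forward model: 8 channels, 3 state variables
        x_true = np.array([1.0, -0.5, 2.0])   # "true" atmospheric state
        y = K @ x_true + 0.05 * rng.normal(size=8)   # noisy satellite radiances

        # Regularized least squares: minimize ||K x - y||^2 + alpha ||x||^2
        alpha = 0.01
        x_hat = np.linalg.solve(K.T @ K + alpha * np.eye(3), K.T @ y)
        print(x_hat)   # retrieved state; close to x_true when the noise is small
        ```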

        I repeat from another of my messages: All these models are climate models, or more generally atmospheric and Earth system models. They are not AGW models. The only AGW models that I can imagine are small toy models that have nothing to do with the detailed analysis of satellite data.

      • Pekka, you are confusing two different things, namely models in general and climate GCMs. You are correct that models are used throughout science and they are very valuable. I myself have built several models, mostly of important aspects of scientific communication and of science education.

        But the GCMs are a very special kind of model, one which implies a degree of global understanding that we do not have. They are built with a very limited basket of mechanisms, with human forcing being dominant. They are being used primarily to further a political agenda. They must be stopped.

        I know of no parallel to the climate GCMs in any other part of science, not where a handful of universal models dominate the research and are used primarily to make politicized policy prescriptions. Climate science has become completely politicized and the big models are the core of that evil.

        I am not being anti-science; I am trying to restore the science to its rightful place, exploring climate. At this point we need a lot of little models, exploring the many uncertainties. The big, universal models do not work and cannot work, because we simply do not understand climate well enough. They are a classic “big science” boondoggle: the computer program that explains everything. But we cannot build that at this time, and tweaking these monsters is useless. Except they are worse than useless, because they are being used for political purposes.

      • David,
        My view is that you are confusing two different things. I mean that you are confusing the way some people use the model results with the significance of these same models for climate science.

        Many things can be studied by small models, but as an example the long-term oscillations are an issue that can hardly be studied without large GCM-type models. Some small models may have similar oscillations, but as I wrote earlier, that doesn’t prove anything. Studying them with large models is also very difficult, and we may have to wait a long time for success, but the most likely way of reaching such success is through the development of large Earth system models.

        Regarding much of the argumentation that you have had in this chain with Steven Mosher, my view is that you refuse to even read valid presentations on the research being done, or at least you don’t want to give serious thought to those issues. As if you were afraid of realizing that you have erred.

      • A bit of a side point, but there are reasons that orbital satellite lifetimes are shorter than those of many interplanetary probes.

        1) Orbital satellites deal with rather extreme thermal cycling from going into and out of the earth’s shadow. Thermal cycling ages a spacecraft MUCH faster than constant sunlight. First to go are usually the batteries, which hate thermal cycling. Even if you can create perfect thermal control on the batteries to mitigate this, the electronics themselves will (usually) eventually fail from internal coefficient-of-thermal-expansion mismatches. It’s just a question of when, and which material interface will go.

        2) Many orbital satellite programs, in an effort to cut costs, will use as many commercial off-the-shelf parts as they can. This hurts even more. Military standards exist for a reason, and that reason is reliability.

        3) Interplanetary probes are generally built bigger and more robustly. They have to be, as they’re going to be experiencing greater launch stress for longer (bigger rockets to launch them), and they’ll be doing their own burns while slinging around the solar system.

        4) Interplanetary probes are often higher-profile launches, with more funding. Orbital science is almost taken for granted by comparison.

  4. This from Wikipedia:

    Verification is a Quality control process that is used to evaluate whether or not a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.

    Validation is a Quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.

    It is sometimes said that validation can be expressed by the query “Are you building the right thing?”[1] and verification by “Are you building it right?”[2] “Building the right thing” refers back to the user’s needs, while “building it right” checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both as well as formal procedures or protocols for determining compliance.

    That definition seems fairly clear and correct. The climate modelers seem to have the verification stuff pretty well taken care of. The software appears to do what its design calls for, and the code is well maintained. It is the validation that is a big point of confusion. Most of us outside the climate modeling field consider validation to include the question of whether or not the output of the model is a VALID representation of the real climate. When there is a significant mismatch between model output and reality, we would consider the model not to be validated.

    In the past, the response to the complaint about the model and reality not matching was to simply say that it does not matter. There are better models now. The old model output should be ignored. That response does not play well outside the climate modeling community. After all, the horror stories about possible harmful anthropogenic climate changes, and subsequent demands for suppressing CO2 production were based upon those older models.

    Climate models should be validated against what happens in the real world. Claiming that the time frame is not long enough to do that is simply an admission that it is not possible to validate the algorithms and software.
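
    For concreteness, the skeleton of such a check is simple; the substantive part, and the part that seems to be missing, is agreeing on the acceptance criteria in advance. A minimal sketch in Python, with every number hypothetical:

    ```python
    # Sketch of a pre-registered validation check: compare a model forecast of
    # global-mean temperature anomaly to subsequent observations and test it
    # against acceptance criteria fixed in advance.  All numbers hypothetical.
    import numpy as np

    model_anom = np.array([0.10, 0.18, 0.25, 0.31, 0.40])  # deg C, hypothetical
    obs_anom   = np.array([0.12, 0.15, 0.28, 0.27, 0.35])  # deg C, hypothetical

    bias = np.mean(model_anom - obs_anom)
    rmse = np.sqrt(np.mean((model_anom - obs_anom) ** 2))

    # Acceptance criteria: placeholders here; fixing them before the fact is
    # what would make this validation rather than post hoc tuning.
    MAX_BIAS, MAX_RMSE = 0.05, 0.10
    validated = abs(bias) <= MAX_BIAS and rmse <= MAX_RMSE
    print(f"bias={bias:+.3f}  rmse={rmse:.3f}  validated={validated}")
    ```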

    What is the climate model validation procedure as currently practiced?

    • Gary, I’d like to know this too. I have spent an (unfortunately) long time of my life performing IQ, OQ, and validation. It is a long, drawn-out, and incredibly tedious process, but incalculably valuable (which is why we do it).

      Dr Curry, if you could somehow ‘grab’ a detailed description of the validation process (or a promise of a post ON the details) it would be exceptionally useful.

      For example: what are the acceptance criteria, how are they defined, what are the run limits (how are they decided), how many runs are completed, what is the spread, etc. (I’m sure you’re familiar with the overall process of validation.)

      This is one of the large sticking points for me on cAGW (model verification, accuracy, and validation); cross this off and I become a lot happier on the matter.

      • Easterbrook and Johns (2008) discusses the processes used by the Met Office’s Unified Model group. It’s pretty high-level, so probably not the detail you’d like (or me, for that matter), but it does give an overview.

      • Gene,
        As I read the paper you referenced, I see a logical disconnect between the traditional definition of Validation and what the authors of the paper expected. Essentially all of what is described in that paper would normally fall into the Verification category in most projects. The model is run using known data to calibrate and tune it. Quite a lot of experimenting can be done to tune model parameters and algorithms to try to make a good match. That is still part of the construction phase of the software product.

        Validation methods expect that, given a set of initial conditions and input processes, a predicted output pattern or value is generated. Validation is confirmed when multiple predictions prove adequate for the needs of the end user.

        That paper seems to be taking the tack that the end user is Climate Modelers and the model output values are of only academic interest. Unfortunately, that is not the case. Those very ‘academic interest only’ outputs are used as justification for major regulatory actions. That, of course, means that the true end user is the general population of the planet, and the expectation on their part is that the climate models have been validated in the more classic sense. That is that they have produced future predictions that have been validated against real world data at the end of the prediction time frame.

        “That paper seems to be taking the tack that the end user is Climate Modelers and the model output values are of only academic interest.”

        Exactly, and that’s a major area of concern in my opinion. The processes and techniques documented in that paper are commendable, but insufficient for a “production” system. It should be noted that the model described actually is a true production system, in that it’s used by the Met Office for both climate and weather work. In that respect, they’re more constrained than pure climate models.

    • Gary W

      As you point out, verification and validation are two sides of the same coin.

      It appears to me that validation is only meaningful if it is done relative to real-time physical observations and not just model-to-model.

      Is this a basic weakness of the GCMs today?

      (Not being in this business, I’m afraid I cannot judge.)

      Max

      • Maybe I am completely out of my depth on this issue, but I fail to understand how it is possible to validate any sort of model, if the output of that model is not compared with hard measured data.

        This implies that climate models need to predict what will happen within a reasonably short time frame, so we have the measured data with which to compare the climate model output. The only time a climate model has tried to do this, so far as I have been able to find out, is Smith et al., Science, August 2007, as we discussed on a previous thread. My estimate is that there seems to be a 75% probability that this attempt has failed to predict what actually happened. We will know more by 2014.

        If anyone knows of another attempt to compare the output of a climate model against hard measured data, I would be very grateful if they would give me a reference.

      • Evaluation of climate models against data is routine (using both in situ measurements and satellite data). Given the millions of degrees of freedom, the issue is on what time and space scales you evaluate the model, which variables, and then which data sets. The “which data sets” issue is what was addressed by Duane Waliser’s presentation. For example, if you want to evaluate the modeled sea ice, there are more than a dozen data sets to choose from, using different data and analysis methods.
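
        To illustrate the “scales” point: even a simple global-mean comparison involves choices, e.g. weighting grid cells by area before reducing the field. A sketch (the grid and field here are hypothetical, not any particular data set):

        ```python
        # Sketch: area-weighted global mean on a regular lat-lon grid, one of
        # the reductions chosen before comparing a model field with an observed
        # one.  Grid shape and field values are hypothetical.
        import numpy as np

        lats = np.linspace(-89.0, 89.0, 90)        # cell-center latitudes, degrees
        field = np.random.default_rng(1).normal(size=(90, 180))  # some 2-D field

        weights = np.cos(np.deg2rad(lats))         # cell area scales as cos(latitude)
        w2d = np.broadcast_to(weights[:, None], field.shape)

        global_mean = (field * w2d).sum() / w2d.sum()
        print(global_mean)
        ```

        Change the weighting, the averaging period, or the regridding, and the comparison changes; that is part of why the choice of scales and data sets matters.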

      • Thanks for clearing this up, Judith.

        Guess I was more concerned about the actual physical data to support the model-based climate sensitivity estimates rather than sea ice data sets.

        AR4 WG1 Ch. 8 (plus Ch. 3) refers frequently to “experiments with GCMs,” but there is very little reference to actual real-time physical observations other than the surface and satellite temperature records and some rather sketchy radiosonde data on atmospheric humidity, which are largely discounted.

        It appears to me that, while peripheral phenomena (like Arctic sea ice) are measured fairly accurately (since satellite measurements started) and we have a temperature record (with all its warts and blemishes), the basis for the model-derived climate sensitivity (i.e. the basis for the whole premise that AGW could become a serious problem) is not robustly founded on empirical data derived from actual physical observations.

        I would hope that efforts are made to resolve this basic dilemma as part of the “V&V” process.

        Max.

      • Judith you write “evaluation of climate models against data is routine”

        This is the problem with the words that bedevil this subject. I am not talking about EVALUATION. I am talking about VALIDATION. I will repeat what I said. The only study I am aware of that has made a specific prediction into the future (near enough that we can check whether the prediction is correct) of what a climate model claims will happen is Smith et al., Science, August 2007. If there is another paper that uses a climate model to predict what will happen sometime in the future, soon enough that we can compare model predictions with actual results, could you give me the reference? TIA

      • Jim,

        Trying to validate predictions rather than hindcasts may actually be more error prone due to the fact that some aspects of the future (volcanic activity being a big one) can’t be controlled for. Given a set of known data, a valid model should be able to reproduce a historical period. When dealing with future periods, you’d have to negotiate away the effects of unrealized assumptions.

      • Rob Starkey

        Gene – and at the end of the day, the only thing about GCMs that matters is their ability to accurately forecast future rainfall and temperature at specific locations around the globe. Until they can demonstrate the ability to do that, they are “works in progress,” semi-interesting for studying the process of development, but worthless for anything else.

      • Rob,

        I agree that finer resolution is a worthy goal, but I sincerely doubt that models will ever be predictive out to longer time frames, for the reason I noted above. I think they could have use as a what-if tool (assuming they get to the point of reproducing reality), but I don’t see them having any hope of creating any sort of long-term “forecast.”

        But when the data is known in advance, finally matching it after years of effort is hardly a validation. (Not that the models have matched it very well, as no two agree.) There is a reason why prediction is at the core of the scientific method.

        Yes and no. If the essence of the code is “if year = x, output y,” then I agree, that tells you nothing. On the other hand, assuming the logic is coded honestly (which the lack of agreement among the various models and reality would seem to indicate), then hindcasts that matched reality would be strong evidence of validity. Validating against prediction would require accounting for differences between expected and realized volcanic activity, GHG and aerosol emissions, solar activity, etc. At best, that would be a much more subjective process.

        I agree up to a point. (Science is not simple; in fact we do not understand it very well at all.) If one observes a phenomenon (hence has data), and one can think of a mechanism that explains it (matches the data), then that makes it a plausible hypothesis. Thus AGW is a plausible hypothesis.

        If there are no other hypotheses then one can even accept the explanation, and we do this all the time for simple cases. But this is hardly the case with AGW, so matching historical data does no more than make AGW a viable hypothesis, no matter how elaborate the models may be.

        But then this is where warmers and skeptics diverge, isn’t it? Over whether viable alternatives are in play. The modelers claim that because we cannot write equations for natural variability that explain what we see, therefore it does not exist and AGW is the only viable hypothesis. Skeptics deny this, pointing toward large-scale natural variability that we cannot presently explain, and so cannot write equations for. It is all about uncertainty and how one sees it.

        “so matching historical data does no more than make AGW a viable hypothesis”

        I don’t argue that it will. My position is that matching reality will make the model valid. If the theory underlying the model prevents it from validating, then that says something about the theory.

        “The modelers claim that because we cannot write equations for natural variability that explain what we see, therefore it does not exist and AGW is the only viable hypothesis.”

        I certainly would not want to try to defend the statement that an unexplainable observation doesn’t actually exist.

        I don’t think we’re that far apart. My position, in a nutshell, is that a valid model is one that matches reality. As I noted above, if your theory prevents you from matching reality, that’s a pretty strong indication that the theory needs work.

      • Gene,
        Sure, predictions may be impacted by some uncontrolled and unpredictable events. However, the prediction from before the event still stands. The cause of the difference between the original prediction and the real data must be explained in both magnitude and timing. That is completely reasonable. It can even lead to an understanding of how a similar uncontrolled event might be compensated for when it happens in the future. Of course, any claim of validation based upon that analysis must be considered provisional at best, depending upon the magnitude of the error.

        The point, of course, is that a claim that traditional validation is not possible is not justification for claiming that a model is validated adequately for use in justifying new laws.

      • Gary,

        I’m not suggesting that the models are adequately validated. I’m merely pointing out that validating against a hindcast would likely be a less error prone and less subjective exercise than trying to validate a forecast.

      • Gene,

        “I’m not suggesting that the models are adequately validated. I’m merely pointing out that validating against a hindcast would likely be a less error prone and less subjective exercise than trying to validate a forecast.”

        That is a point that I don’t believe. In validating against hindcasts, the modelers validate against something of which they unavoidably have some knowledge when they are building the model. Therefore an unknown amount of fitting to that data is done. This is a very well known problem in validating any model against historical data, and there is a lot of scientific literature on the seriousness of this problem in validation.

        In addition, comparison with the past contains all the same subjective factors that you listed as problems for comparing forecasts with real observations, once all of that is history.

      • Pekka,

        Agreed that it’s possible to “cheat” when working with a known data set. However, the longer the period you’re trying to match, the harder it is to cheat well without being caught. I’d argue that it would be equally hard to claim that the bias introduced was accidental.

        To come up with a model that faithfully replicated the twentieth century one would either have to get it right or cheat to the extent that it would be extremely hard to hide.

        I’m not thinking about conscious cheating, but rather about the fact that modelers avoid at various points choices that they know (or guess) will lead to conflict with past observations. This is a mechanism whose importance is very difficult to estimate, and it is common in many fields that it has been much more important than people thought.

        “I’m not thinking about conscious cheating, but rather about the fact that modelers avoid at various points choices that they know (or guess) will lead to conflict with past observations.”

        That’s a concept I’m very familiar with (“never test your own code because you avoid doing the obviously wrong things that end users do”). I wonder, however, how easily it would come into play over a century’s worth of data. I’m not sure you could reproduce the last hundred years via unconscious bias.

        My purpose is not to claim that hindcasting would be of no value, and most results of the detailed comparison are likely to be valid. The problem concerns mainly the main trends and some other particularly strong effects. When the same data is used to test consecutive models or model versions from the same developers, the problems of hindcasting grow severe.

        The other point, and my original statement was that hindcasting is in no obvious way better than forecasting – except for the essential point that it can be done immediately.

        “The other point, and my original statement was that hindcasting is in no obvious way better than forecasting – except for the essential point that it can be done immediately.”

        The value I see is that with the historical data, there’s an element of “it is what it is.” You have the conditions that serve as inputs and the model output that you then compare to historical observations. With a prediction, you first have to reconcile your assumed conditions with the observed conditions and determine their effects prior to comparing the model output with the observations. This is the part of the process that I see as more error-prone and subjective.

        Hope you’re not getting tired of the discussion, I’m finding your insights on this valuable.

        As far as I can imagine, the same problem exists with historical data. You must figure out how various case-specific circumstances have affected the data. Furthermore, for historical data you have poorer quality and less supporting information than you are likely to have in the future for future data.

        For most of the historical data, all kinds of assumptions are made, and the data is often interpreted through uncertain models, either explicitly or by assuming that no corrections are needed, while the only justification for that is that we don’t have the supporting information to make those corrections.

      • I agree with you, Jim.

        Be gentle.

        It is always difficult – if not totally impossible – for any scientist to see a basic fallacy in their own field of study.

        I know from experience.

        From trying to tell astronomers and solar physicists that the Sun and other ordinary stars generate and discard hydrogen by neutron decay (n => H + 0.782 MeV) and that neutron repulsion powers the Sun, the cosmos, and sustains our very lives on the skin of this tiny ball of dirt orbiting the Sun.

        Sometimes nature herself intervenes, like today’s news reports of explosions in the remains of the Crab Nebula, the supernova that exploded in ~1054 AD.

        http://www.physorg.com/news/2011-05-fermi-telescope-superflares-crab-nebula.html

        http://www.bbc.co.uk/news/science-environment-13362958

    • Verification and validation start with knowing and documenting ALL of the differential equations, boundary conditions, and initial conditions associated with your model, and then clearly documenting your numerical algorithms. It also includes clear coding with numerous, informative comments.

      GISS hasn’t figured that out yet (and never will bother with proper documentation, apparently).
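
      As a toy illustration of that documentation standard, here is a textbook zero-dimensional energy balance model written in the style being asked for; it is a sketch of the documentation practice, not code from GISS or any other modeling center:

      ```python
      # Zero-dimensional energy balance model, documented up front.
      #
      # Governing ODE:   C dT/dt = S(1 - a)/4 - eps * sigma * T^4
      #   T     global-mean surface temperature [K]
      #   C     effective heat capacity [J m^-2 K^-1]
      #   S     solar constant [W m^-2];  a    planetary albedo [-]
      #   eps   effective emissivity [-]; sigma Stefan-Boltzmann constant
      # Initial condition: T(0) = 280 K.  No boundary conditions (0-D model).
      # Numerical scheme: forward Euler with a fixed one-day time step.

      SIGMA = 5.670e-8                 # Stefan-Boltzmann constant [W m^-2 K^-4]
      C, S, A, EPS = 4.0e8, 1361.0, 0.30, 0.61
      T, dt = 280.0, 86400.0           # initial temperature [K], time step [s]

      for step in range(365 * 50):     # integrate 50 years
          dTdt = (S * (1.0 - A) / 4.0 - EPS * SIGMA * T**4) / C
          T += dt * dTdt               # forward Euler update

      print(f"T after 50 years: {T:.1f} K")   # settles near 288 K
      ```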

      I was also struck by the following comment:

      “The GISS model is gradually starting to more closely resemble the other U.S. climate models.”

      This raises the question: WHY have dozens of models, each nurtured by competing groups, which essentially do THE SAME THING? It’s a giant waste of money and effort. Why not standardize on a single code stream to which all modelers can contribute? I would defer to the modelers to decide which code becomes the standard, but I know which one I wouldn’t use…

  5. Judy –
    A “launch vehicle crisis” was discussed: “ESD/SMD/NASA is losing reliable, predictable, access to space via affordable, proven launch vehicles. After 2 consecutive failures of the Taurus-XL LV, there is no certified U.S. LV with capacity between the Pegasus (440kg to LEO) and the Atlas-V (9750-29,240 kg to LEO). LV availability and reliability problems are causing launch delays and cost increases now, and will have greater impacts on the Earth and Space science programs in the coming decade.”

    This is a rerun of 20 years ago when we had spacecraft stacked like cordwood waiting for launch. Then we had the Russians volunteering to launch our birds, but the President decided that National “honor” was at stake and refused.

    It’s also a rerun of the launch vehicle debacle – and by the same company. Orbital had the same (lack of) success rate with the Pegasus vehicle back then. Gotta wonder why they’re still in business.

  6. Will this be an actual V&V of models, or will it be a whitewash?
    How constrained are members of the subcommittee?

    • steven mosher

      IV&V. I’ll take it you’ve never done IV&V.

      • IV is often needed indeed.

      • lol.

      • steven,
        Thanks for your kind help.
        I was referring to this from Dr. Curry:
        “Model V&V

        Subcommittee member Robert Schutz asked about model verification and validation (V&V), which was the topic of a previous Climate Etc. post. The response was that there is extensive model verification, but the climate modelers seemed unfamiliar with verification in the context of V&V.”
        If I want to receive content-free snark for a pretty clear question, I can go to RC or Joe Romm’s place anytime.
        Why you have gotten ticked off lately is beyond the scope of this thread, probably, but as someone who enjoyed your book and has watched you deconstruct a lot of stuff, I am a bit disappointed that you are acting this way.

      • steven mosher

        If you have followed the issue of IV&V with models, an issue that Dan Hughes and I have been railing about since 2007 (in my case), then you might understand. And we have been talking about the DQA since then as well. Read what Judith wrote all the way to the end.
        If the V&V is expected to meet the requirements of the Data Quality Act, it will have to be IV&V. So read the whole thing she wrote. Follow all the links, read everything. THEN comment.
        But here is what you do, it seems. You read and look for a quick shot.
        Committee = whitewash.
        But if you read the whole thing and all the links and did a little reading at Climate Audit (go read the whole blog from start to finish), you would see that a handful of us have been talking about IV&V for a while, and wondering how the Data Quality Act could be brought to bear on the problem.
        When we see positive steps in this direction, that is a good thing. It’s not the time for snide comments about whitewashes.

        get it?

      • steven,
        Thank you for expanding and explaining what you meant.
        My question stands, nor was it meant as a cheap shot.
        We need critical thinking, such as what I have seen you do on many occasions, finally brought to bear on this issue.
        The labels are much less important to me than the substance of the work that will be done under the label.
        I have participated in project reviews of large engineering projects (~$1b range) for contract compliance, engineering quality, ISO compliance, etc., and I have been involved in due diligence in the financial markets of products and companies.
        Have I done rocket science? No.
        But the idea of verification and validation is one that has a lot of meaning to me personally and, from what you are saying as well as what our hostess is saying, a lot of potential.
        It will depend, however, on whether the full light of the process is allowed to play out, and may the chips fall where they may.

  7. Thanks for the post, Dr Curry. I’ll be following this one with some interest.

  8. Rose Dlhopolsky

    FYI. I am having problems reading the PPT files with OpenOffice.

  9. Judith,
    thanks a lot – this is really interesting stuff (although I too can’t open all the files).

    Was there any discussion of other aspects of model intercomparison at all, beyond verification and validation?

    http://mitigatingapathy.blogspot.com/

• Paul, which files can’t you open? I could open them all; I can post them somewhere as a PDF if you let me know which files you would like to look at.

  10. Judith Curry

    You probably caught this one, but if not…
    http://www.columbia.edu/~jeh1/mailings/2011/20110505_CaseForYoungPeople.pdf

    James E. Hansen is getting teenagers involved in his battle against carbon.

    Max

    • The extremists always go after the youth.

• Ah yes, in the good old days we had the “HJ” and “Junge Maedels” waving the flag.

• Everywhere, the young are easy to brainwash and to use as fodder in ideological and religious struggles.

11. Gene writes “On the other hand, assuming the logic is coded honestly (which the lack of agreement among the various models and reality would seem to indicate), then hindcasts that matched reality would be strong evidence of validity.”

    This goes to the core of the problem. The issue is not whether the computer code has been written properly. The question is, is the logic that has been used to address the problem, valid? That is, do the equations represent reality? Hindcasting data can NEVER address this problem.

    I believe that the climate is so chaotic, that NO equations in climate models represent reality. The only way anyone is going to convince me otherwise is to have the models predict what will happen in the future, and then have this happen.

    Your point about other things interfering with this process of prediction is a red herring. If the signal from adding CO2 to the atmosphere is strong, then it will show up against any other effects. If, as I suspect, the signal from CO2 is so weak that it cannot be measured, then it will not show up properly when the predictions are made.

You cannot have it both ways. Either the CO2 signal is so strong that it is undeniable, in which case validation is possible by correctly forecasting the future; or the signal from CO2 is so weak that the models can never be validated.

    • Jim,

      I absolutely agree that the validity of the logic is the overarching question here. I don’t believe that logic is currently validated, but have no opinion on whether it could be in the future (I design systems for a living, the science is way out of my expertise). You may well be correct that the system is so chaotic that a deterministic approach to modeling it is futile. It seems we just disagree on how the validation process could be constructed.

I’d like to hear more as to why you believe a hindcast cannot provide validation? Regardless of whether we’re looking at a forecast or a hindcast, we will be looking at model output vs observations. In the case of a hindcast, we should have the inputs that the theory says produce a particular output (and for the sake of argument, let’s say we’ve verified the model against the theory). The only question then is whether the output matches the observations. In the case of a forecast, we have the inputs we assumed and the output based on them. We also have the actual inputs and the observations. This adds additional complexity to the reconciliation, making it more subjective and more error-prone.

      My understanding (flawed as it is) is that the factors that determine temperature are varied and that there isn’t a strong signal. The fact that models have difficulty with the early 20th century tells me that something’s missing in the logic. When they can model the entire century, then I will have a bit more confidence that the various components are accounted for.

      • Gene writes “I’d like to hear more as to why you believe a hindcast cannot provide validation?”

Let me try. If you have exact equations, you can CALCULATE what the answer is. Models are never exact equations. They are some form of approximation. As such, they simply have to have what we call “fudge factors”. These are bits of logic put in to bridge the gap between what the equations being used lack and what is reality. To paraphrase John von Neumann: “Give me four fudge factors, and I will create an elephant. Give me five, and I will make it wiggle its trunk.”

What hindcasting is, to put it rather crudely, is adjusting the fudge factors so that the model output matches what has happened. This cannot, and never will, validate the model. A better word is “calibrate”.

Reductio ad absurdum: I could create a mathematical formula, similar to the way Hastings did in his classic “Approximations for Digital Computers”, or some form of Fourier analysis. By adjusting the various factors, I can get an equation that almost exactly matches what has occurred in the past – equally well as, or maybe even better than, the climate models can. I would NEVER suggest that this exercise would result in a “model” which predicts the future.
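A minimal numerical sketch of that reductio, for anyone who wants to see it happen. Everything below is synthetic and hypothetical; it is generic curve-fitting, with no connection to any actual climate model:

```python
# Synthetic demonstration of over-fitting: enough "fudge factors" match the
# past almost exactly but say nothing about the future. All numbers invented.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
t = np.arange(30.0)                        # 30 synthetic "years"
obs = 0.1 * t + rng.normal(0.0, 0.5, 30)   # a weak trend buried in noise

# "Calibrate": fit a 16-term polynomial to the first 20 years only.
p = Polynomial.fit(t[:20], obs[:20], deg=15)

hindcast_err = np.max(np.abs(p(t[:20]) - obs[:20]))   # within the fitted period
forecast_err = np.max(np.abs(p(t[20:]) - obs[20:]))   # outside it
print(f"max hindcast error: {hindcast_err:.3f}")      # small by construction
print(f"max forecast error: {forecast_err:.3g}")      # blows up on extrapolation
```

The hindcast error is small because the extra parameters soaked up the noise; the forecast error explodes for exactly the same reason. Calibration is not validation.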

      • Jim,

        It is absolutely possible to fiddle with the code to force agreement with historical observations. However, doing so without getting caught requires a high degree of opacity. The longer the period you have to diddle, the more likely it is that you fall down. Lucia’s comparison certainly doesn’t show evidence of that amount of “fudge”.

        I would NEVER suggest that this exercise would result in a “model” which predicts the future.

        To be honest, I sincerely doubt we’ll get to the point of having good predictions. I think it is possible, once the processes are better understood, to have useful what-if tools. The difference being that it’s highly unlikely that all the assumptions going into a projection will pan out.

      • Gene writes “To be honest, I sincerely doubt we’ll get to the point of having good predictions.”

        Now we come full circle. The estimates of climate sensitivity have no observed data to support them. We have no idea what the climate sensitivity is for a doubling of CO2. Now we must conclude that the climate models cannot predict what will happen in the future. The doom and gloom predictions of the proponents of CAGW are sheer fantasy.

        Why on earth should we conclude anything else about what happens when we add CO2 to the atmosphere from current levels, other than

WE JUST DON’T KNOW.

The IPCC and all the other proponents of CAGW are completely misusing what I know as “science” to push an agenda, which is almost certain to cause the world to commit economic suicide. Why on earth does anyone want us to “decarbonize” society? There is absolutely ZERO evidence that adding CO2 to the atmosphere does any harm; and there is lots of evidence that it does good. Such little evidence as we have strongly suggests that adding CO2 to our atmosphere has no effect on climate whatsoever.

For heaven’s sake, let us forget all this nonsense of trying to limit the emission of CO2.

• The models are climate models, not AGW models. Their validation is validation of climate models, which means that all climate data is relevant. There is most certainly a lot of validating data, but also many discrepancies, as discussed in the opening post on the NCAR model. All research done using models is to some extent part of the validation process. Formalized validation is largely impossible, as the models are too complex and predict too many things, making a unique definition of such a process almost meaningless.

Validating some aspects of the model has varying relevance for the expected validity of the model’s other forecasts and for the expected validity of the model under significantly different conditions.

      • I think they are AGW models, in the straightforward sense that they are based on the AGW paradigm. A large anthropogenic forcing is assumed and very few natural forcings are used. The basic approach has been to start with AGW and then to factor in other mechanisms only to the extent needed to explain what AGW cannot. One can easily imagine other, non-AGW approaches that would look very different, but these are not being built.

• Anthropogenic forcing is just a minor part of all forcings in the models. It is not a separate part of the models, and the model structure has no obvious connection to it. It’s an unavoidable part of the models, because the models must calculate the radiative heat transfer to get any reasonable results, and the GHG concentrations enter that calculation. The models do not “start with AGW”, and I really cannot understand what you mean by “non-AGW approaches”.

It’s certainly true that the models are incomplete and that the incompleteness is apparent in the inability to describe many oscillations and climate shifts, but the weaknesses in this area are hardly a consequence of an “AGW approach”; rather, they are an indication of the fact that too little is known about these phenomena and that those processes are very difficult to model.

There are also other factors that may cause bias. It is, for example, obvious that model developers search harder for alternative methodological choices when they are not satisfied with the results than when the results follow their expectations. How strongly factors of this type influence the outcome is very difficult to estimate. Even a systematic attempt to avoid these biases is always deficient, and one can ask how systematic the attempts made by all developers have been, when there are so many other issues to ponder.

• First, it is far from true that anthro forcing is a “minor” part. Prior to the TAR it was all of the 20th century warming, and it is still most of it. It is all of the dangerous future warming. The model is built around AGW. It is built to demo AGW.

        As to alternatives there are many obvious choices. For example, the standard way to explore chaotic phenomena is to see how much chaos can be generated, to look for the chaotic regime. Where is the chaotic climate model that does this kind of research? Where is the ocean oscillation model? Where are the indirect solar forcing models? And so on. The present situation is a single minded AGW exercise.

      • David,
You didn’t read what I wrote. I was discussing how AGW enters the model structure and how the model structure is developed. The large share of AGW is a result. It’s not built into the basic model structure, but it may be a consequence of how the models are applied.

Chaos is an interesting topic, and chaotic phenomena are important in the atmosphere, but proceeding from this observation to develop models that are capable of providing quantitative results is certainly far beyond the capabilities of any scientist. As discussed in full threads here, the potential chaos is not of the simple kind seen in artificial chaos models of a few discrete variables, but rather spatio-temporal chaos, or at least chaos of very many interconnected variables. (I wrote “potential”, because I’m not sure of the importance of chaos for the averaged climate variables, when the periods are decades.)

The present set of models represents reasonably well what is possible based on existing knowledge and modeling capabilities. Their basic choices are only mildly influenced by the fact that the principal outside expectation is to get knowledge on the risk of serious AGW. The influence is so mild because the main limiting factors are the capabilities of the modelers and the observational data available. Building models as good as these limitations allow largely determines the outcome.

I’m rather confident that I do not err severely in the above description, although I have not been involved with climate models and have had almost no direct contact with climate modelers. Those who are involved may wish to correct me where my description is not on the mark.

• Pekka, chaos is just one example among many, but your response seems to make my point, namely that it is not included in the models because we do not understand it well enough. You say “The present set of models represents reasonably well what is possible based on existing knowledge and modeling capabilities.”

        My point is that science should be focused on increasing existing knowledge and modeling capability in this area, and several others. Instead we are excluding what we do not understand, then making false claims of understanding.

        By the way, computer modeling is standard practice in chaos science, so I question your claim that we can’t do it for climate.

• Of course computer modeling is used in chaos science, but we are discussing climate science. For that, it’s not relevant what is done in chaos science until somebody can show that they really learn something important about climate by the methods of chaos science. Unfortunately most, and in particular the most mature, parts of chaos science can hardly tell much about climate. Climate is too complicated for applying any specific results of chaos science.

        I fully agree that science must focus on learning about climate and develop models having that goal in mind, but I also believe that much of the present climate science is doing just that.

I have not seen any good arguments to support your claim: “Instead we are excluding what we do not understand”, as a statement about climate science in general. Certainly the applied part of climate research is using those tools that are available at the time of use and is forced to leave out things that cannot be taken into account. That adds uncertainty to the results. I tend to agree that uncertainties are larger than often indicated, but that doesn’t mean that the methods used could not be the best available.

      • Pekka, I have specifically and repeatedly pointed out the kind of research that is not being done, to which you have never replied that I know of. So have Spencer, Curry and many others.

As for chaos, you seem to be claiming that it is both too complex to be understood and not important. You can’t know both. Moreover, I think you are confusing the unpredictability of chaos with difficulty in understanding it. My conjecture is that Simla chaos is easily demonstrated on the dec-cen scale, and that will be all we need to know. The point is that no one is trying; they are all trying to make the big models work.

• Sorry, I meant simple chaos. (We are having a lightning storm, so I posted too quickly.) In fact, here is an interesting puzzle. When the TAR first came out, it included a set of diagrams at the back of one chapter – I think #7, but it was a decade ago. It was rather mysterious, because I could find no discussion of these diagrams in the chapter text. But each showed a schematic of the nonlinear feedbacks in one subsystem of the climate. The captions made the point that each of these subsystems was capable of oscillation simply due to these feedbacks (that is, chaotic behavior under constant forcing).

This is precisely what is not being explored, along with several other basic natural mechanisms. The thing about chaos is that if the simple dynamics create it, no added complexity can make it go away. So we should at least be actively exploring the simple chaotic dynamics, as an alternative hypothesis to explain the observed warming.

Moreover, exploring this hypothesis does not mean trying to factor it into the GCMs in some minimalistic way, which is what is presently being done. It requires a new model, probably rather simple at that, but designed for the job. The same is true for the rest of the uncertainties.

The idea seems to be that because the GCMs are universal and global, they must already encompass all the important science. This is the fundamental fallacy. The reality is that most of the fundamental questions are not being looked into at all, precisely because they do not fit into the GCM framework. The community has boxed itself into a super-model corner, and it is time to break out.

• I forgot to explain the mystery, which is that these very important diagrams seem to have disappeared from the Web version of the TAR, not long after they appeared.

• About chaos, I have tried to say that one cannot gain much knowledge from chaos theory. This by itself doesn’t say anything for or against the importance of chaotic processes; it just means that the possible role of chaos adds to the uncertainties.

Accepting that means that we must try to estimate the importance of chaotic processes from the use of models without chaos and from comparisons of them with empirical data.

It’s always difficult to use models to estimate their own range of validity, and it may become totally impossible if it turns out that we are too close to the onset of chaos that would dominate the overall behavior of the model. But there is also the other possibility: that a properly formulated model describes the issues it’s supposed to model without approaching the onset of chaos or other modeling issues that would make the results of little value.

There are chaotic processes in the Earth system, but we don’t know whether they are so essential for key climate variables that they make the results of little value. One way of finding out, and perhaps the only one, is to continue with the development of large Earth system models.

• David, chaos is exemplified by the ensemble of GCM members that comprise the IPCC runs. They each have their own decadal variations, and their spread shows the size of the chaos in relation to the climate signal. What aspect of chaos do you think is missing, if it isn’t decadal variations? Taking the ensemble average is designed to average out chaos that is not large compared to the long-term climate behavior exhibited by the ensemble.
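As a toy illustration of how that averaging works (all numbers below are invented and have no connection to actual GCM output):

```python
# Toy illustration: independent "internal variability" in each ensemble
# member averages out, leaving the shared forced signal. Invented numbers.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(100)
forced = 0.02 * years                     # shared forced trend (made up)

def member():
    """One synthetic run: the forced trend plus AR(1) decadal-style noise."""
    noise = np.zeros(years.size)
    for i in range(1, years.size):
        noise[i] = 0.9 * noise[i - 1] + rng.normal(0.0, 0.1)
    return forced + noise

runs = np.array([member() for _ in range(23)])   # a 23-member ensemble
ens_mean = runs.mean(axis=0)

print("spread across members at year 50:", round(float(runs[:, 50].std()), 3))
print("ensemble-mean error at year 50:  ", round(float(abs(ens_mean[50] - forced[50])), 3))
```

Each member wanders on its own decadal path, but the wandering is uncorrelated across members, so the ensemble mean converges on the forced signal roughly as 1/sqrt(N).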

      • al in kansas

“The large share of AGW is a result. It’s not built into the basic model structure, …”
Yes, it is: you make assumptions about CO2 sensitivity, about cloud feedback vs. forcing, and about a lot of other input parameters and internal relationships that we have a poor understanding of. For an example, see Roy Spencer’s discussion of cloud forcing/feedback. Learn some electronics and play with some Spice simulators; it will give you some sense of model limitations. When the climate model cannot simulate the last 10,000 years with some accuracy, come back and talk to us. I won’t be holding my breath.

• This is not “basic model structure”. I used that formulation to emphasize the difference between the very tedious work done in building the models, on the one hand, and model runs, which are influenced by choices made in parameterizations of deficiently known mechanisms, on the other.

      • al in kansas

        Oops, should be: “When the climate models can simulate the last 10,000 years with some accuracy, come back and talk to us.”
        Also, Roy Spencer has a good discussion of OHC and sensitivity that makes all AGW moot.

• You also ask: where are the ocean oscillation models and the indirect solar forcing models?

What would a realistic ocean oscillation model be? A model similar to the present ocean models, but at a level that gives realistic oscillations as an outcome. We have also read in the postings on this site that the recent models have a more realistic ENSO than earlier models. That is a small step in the right direction, but we may still be far from good models.

It would probably be rather easy to build toy models with ocean oscillations, but they would not necessarily tell us anything worthwhile, as they would not be even close to realistic in other respects, and as they might appear realistic for totally wrong reasons. The realism of oscillations is significant only when it’s the result of a physics-based model, not when it’s created artificially in a toy model.

Another possibility is to force oscillations on the ocean component of the full Earth system model, but that might also lead to a seriously wrong model. Furthermore, we don’t really know in sufficient detail what the oscillations are.

Concerning indirect solar forcing, I don’t believe that there are any significant problems in including it in the present models, once the scientists studying such phenomena can say what should be included. Vague speculations and unsubstantiated claims cannot be included.

• Pekka, the point, and I will keep repeating it, is that this research is not being done. Nor do I accept your wording that any model that is less than a GCM is a “toy”. Indeed, it is the very size of the GCMs that makes them useless. The science should be exploring what we do not understand, using relatively simple models at first, not building ever more precise (and therefore more misleading) giant models of the little we do know. The GCMs are a scientific dead end, like many of the “mainframe monster” models built in the 1970s and ’80s to explain everything about some system or other. They did not work, because we do not understand that much. Same for climate change.

12. The link to Miller’s presentation seems to be wrong; it points to Potter’s presentation. This is the correct link, and it can also be found in the dropbox.

  13. Willis Eschenbach

The link for the GISS model is wrong; it should end in 3165, viz:

    http://esx.hq.nasa.gov/cabinet3/files/3165.PDF

Regarding the GISS climate model, I, along with others, have shown that its global temperature output is functionally equivalent to a simple one-line transformation of the input forcings. See the discussion here and the graphic here. In other words, despite all its complexity, the GISS model is simply outputting a lagged linear transformation of the input forcings.

    I’m sorry, but … that don’t impress me much. That’s not like nature at all. Does anyone really believe that there is a linear relationship between input and output in a highly complex and interconnected system like the climate? Because I sure don’t … but the GISS model does.

    People need to think about what that means. I see nothing in his 10 megabytes about that at all.

    On the matter of satellites, I have to say that I love the irony. NASA is unable to do the job I wish they wouldn’t do (blathering about the climate and misinterpreting satellite results) because they are unable to do the job that is their PRIMARY MISSION—providing reliable launch vehicles.

    NASA should get out of the climate business entirely, and focus on how their core business is in the dumper. It’s not clear to me why they are in the climate game at all, nor why they are putting one penny towards it when their launch vehicles are falling out of the sky.

    But hey, that’s just me … all I can do is laugh and cry. Hearing that the GLORY satellite had gone down was one of those mixed feelings deals, like seeing your mother-in-law go off a cliff at the wheel of your new Lamborghini sports car …

    w.

    • Roger Andrews

      Talking of linear model responses, over at KNMI
      http://climexp.knmi.nl/selectfield_co2.cgi?someone@somewhere
      you can download an ensemble of 23 PCMDI models that hindcast or project temperatures for the IPCC A1B scenario between 1900 and 2099. You can replicate these temperatures almost exactly (R=0.999 for annual global means) simply by converting the A1B ppm CO2 values to forcings using 5.35 ln(C1/C0), multiplying the forcings by 0.727 and adding 13.1.
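That recipe can be written out as a short sketch. The coefficients 5.35, 0.727 and 13.1 come from the comment above; the baseline C0 and the sample concentrations are assumptions added here purely for illustration:

```python
# Sketch of the one-line emulation described above. The coefficients are from
# the comment; the baseline C0 and the sample CO2 values are assumed.
import math

C0 = 280.0  # assumed baseline CO2 (ppm); the comment does not specify C0

def emulated_temp(c_ppm):
    forcing = 5.35 * math.log(c_ppm / C0)  # W/m^2, the usual simplified CO2 formula
    return 0.727 * forcing + 13.1          # deg C (absolute global mean)

for c in (280, 390, 560, 700):             # illustrative concentrations only
    print(f"{c:4d} ppm -> {emulated_temp(c):5.2f} C")
```

If a 23-model ensemble mean can be reproduced to R=0.999 by two fitted constants and a logarithm, then for global temperature, at least, the ensemble mean is behaving as a simple function of forcing, which is exactly Willis’s point.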

    • Yes, “NASA should get out of the climate business entirely, and focus on how their core business is in the dumper.”

      If Professor Curry will allow me to do so, I will explain why:

      DOE, EPA, and many other research agencies have been used by the likes of Al Gore to slip tax funds secretly past the public and Congressional reviews that are required by law, the easy money inside the beltway that insiders pretend to know nothing about.

On the same day that Professor Curry publishes a statement by Michael Freilich (Director of NASA’s Earth Science Division) that NASA is the source of 48% of the funds for the USGCRP (U.S. Global Change Research Program):

1. A senior US official in charge of space defense is concerned about China’s increasing ability to destroy or jam US satellites:

      http://www.physorg.com/news/2011-05-china-space.html

      2. Space scientists are surprised at the explosive ‘superflares’ coming from the supposedly dead embers of a supernova that exploded in 1054 AD to produce the Crab Nebula.

      http://www.bbc.co.uk/news/science-environment-13362958

      http://www.physorg.com/news/2011-05-fermi-telescope-superflares-crab-nebula.html

3. NASA scientists are surprised by solar wind data – data confirming findings from 3-4 decades ago that NASA tried to hide and avoid – data showing that Earth’s heat source is not the giant, stable H-fusion reactor that Al Gore and his army of climatologists assumed:

      http://www.physorg.com/news/2011-05-scientists-solar-genesis-mission.html

So why are NASA, DOE, EPA, etc., not doing the jobs assigned to them and instead siphoning off public funds to support such irrational, illogical and unscientific projects as the USGCRP (U.S. Global Change Research Program)?

      World leaders (Al Gore & Associates), government research agencies, the US National Academy of Sciences, the UK’s Royal Society, the UN’s IPCC, leading research journals (Nature, Science, Proceedings of the US National Academy and the UK’s Royal Society) and the news media (BBC, PBS, CBS, The New York Times, etc) joined forces to convince the public:

      a.) They understand NATURE,

      b.) They can ignore parts of NATURE, e.g.,

      http://www.omatumr.com/Data/2000Data.htm

      c.) They can control NATURE !

      All these statements are false and unscientific.

Earth’s climate is controlled by repulsive forces between neutrons in the core of the Sun [1] – the erratic, explosive violence that sustains our stormy Sun [2,3], generates solar flares and eruptions [4] and ‘superflares’ from the supposedly dead embers of other supernovae – which explodes like the wrath of a woman scorned and is well illustrated in the image of the Hindu Sun goddess, Kali [5].

      1. “Neutron Repulsion,” The APEIRON Journal, in press, 19 pages (2011):
      http://arxiv.org/pdf/1102.1499v1

      2. Curt Suplee, “The Sun – Living with the Stormy Star,” National Geographic Magazine, lead story (July 2004): http://ngm.nationalgeographic.com/ngm/0407/feature1/index.html

      3. Stuart Clark, “The Sun Kings: The Unexpected Tragedy of Richard Carrington . . .” (Princeton University Press, 2007) 211 pages
      http://www.amazon.com/Sun-Kings-Unexpected-Carrington-Astronomy/dp/0691126607

      4. “Super-fluidity in the solar interior: Implications for solar eruptions and climate”, Journal of Fusion Energy 21, 193-198 (2002).
      http://arxiv.org/pdf/astro-ph/0501441

      5. “Video on Neutron Repulsion”:
      http://www.youtube.com/watch?v=sXNyLYSiPO0

      With kind regards,
      Oliver K. Manuel
Former NASA Principal Investigator for Apollo

    • The discussion link seems to be broken.

    • Willis

      I love the next-to-last slide in the glitzy NASA slide presentation by Gavin Schmidt (“assessing multiple impacts”).

      Here we have global warming by 2050 directly linked to annual global crop losses, annual premature human mortalities and mortality damages in trillions of $.

      Talk about GIGO. This has got to be the biggest bunch of totally unsubstantiated rubbish ever put on a Powerpoint slide!

      And US taxpayers are paying for this garbage.

      I would hope that the evaluation of “NASA’s Earth Science Modeling and Activities” would raise some serious questions regarding this sort of tax-payer funded rubbish.

But, instead, it will probably give them a pat on the back and more taxpayer money to waste on such idiocy. According to the report below, the NASA Earth Science program budget is supposed to increase to $1.65 billion by 2013.
      http://www.newscientist.com/article/dn17097-earth-science-gets-boost-in-nasa-budget.html

      That’s a lot of money to pay for this sort of trash.

      Max

  14. Very dumb question for you guys…

Regarding climate models, about which I know nothing at all, can’t they test them using predictions about the past? So, for example, can they go back to, say, 1920, gather all the data that would have been available at that point, and, applying their models, predict climate conditions 20 years hence, that is, 1940? Wouldn’t this be the easiest way to test their validity?

• Nice question. Is there a model that can accurately predict the past 10 years and the next 10 years? Apparently not. Climate models are jokes.

    • Pokerguy,
Effectively, that is what the climate modelers are attempting. A couple of problems crop up, though. The available data are of too low an accuracy, especially since most of the data from before about 1900 are not measured values but estimates based upon proxies. The temperature trends we are dealing with are small fractions of a degree per decade. Averaging a bunch of proxies of vague accuracy and provenance can give you a set of precise numbers, but they are still no more accurate than their original values.

Next, the claimed accuracy of the model predictions is supposed to be poor for periods less than several decades in length. They are supposed to be good at predicting climate a century from now but not a decade from now.

Until a prediction is made of some future climate state and that prediction proves accurate, any claims of validation are weak.

• the claimed accuracy of the model predictions is supposed to be poor for periods less than several decades in length. They are supposed to be good at predicting climate a century from now but not a decade from now

        Huh?

        Doesn’t make sense.

        Max

• Internal variability, which dominates at short ranges, is less predictable than the climate change due to significant long-term forcing changes (as with CO2 doubling), which only exceed internal variability in magnitude at longer ranges.
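A back-of-envelope illustration of that point, with invented numbers for both the trend and the variability:

```python
# Invented numbers: a steady forced trend vs constant-amplitude internal
# variability. The forced signal only dominates at longer horizons.
trend = 0.2         # assumed forced warming, deg C per decade
internal_sd = 0.15  # assumed std dev of decadal-mean internal variability, deg C

for decades in (1, 2, 3, 5, 10):
    signal = trend * decades
    print(f"{decades:2d} decade(s): signal {signal:.1f} C, "
          f"signal-to-noise {signal / internal_sd:.1f}")
```

At one decade the signal is comparable to the noise; at ten decades it is an order of magnitude larger. That is the sense in which a century-scale projection can be more robust than a decadal one.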

  15. Rapping climate scientists. Off topic, but…well…see for yourself…
(off-color language alert – but would it be rap without it?)

    http://www.youtube.com/watch?feature=player_embedded&v=LiYZxOlCN10

    Not sure this will help the cause.

  16. Pooh, Dixie

Dr. Curry, validation cannot be done without documented requirements. I am unable to find a reference to a requirements discussion or document in 3174.pdf or 3155.pdf (which appears to concern Net Primary Production of CO2).

    I would expect a requirements document to be much longer than a presentation; that would explain its absence from the list of presentations in 3174.pdf. However, if the subcommittee is to sign on to “building the right thing”, a requirements document has to be available.

    If it exists, could you please point us to it? Thanks!

17. My comment yesterday, from Fiji, is missing.
BTW: I am in agreement with Willis Eschenbach that NASA should get out of the Climate Change business and back into the satellite launching business. One Giant Step for Mankind.

  18. RiH008,

“BTW: I am in agreement with Willis Eschenbach that NASA should get out of the Climate Change business and back into the satellite launching business. One Giant Step for Mankind.”

Yes, but there are limited funds available in the satellite launching business and easy funds available in the Climate Change business. NASA has to survive on the Climate Change business. James Hansen would have been laid off a long time ago without the easy Climate Change funds.

19. Pekka – what if the best model is still not good enough? What if the IPCC is waiting for the model results? Who’s going to say there are too many unknowns? And so we end up discussing uncertainties whilst the IPCC proclaims almost dead certainty. It’s a game of broken telephone, with no one having the courage to admit it.

• You obviously have no knowledge of what IPCC reports contain. They definitely do not “proclaim almost dead certainty”.

There is certainly good criticism of the uncertainty estimates presented by the IPCC, but it’s totally wrong to claim that the IPCC claims the uncertainties are small. If they claimed that, how could their uncertainty band for the climate sensitivity be so wide (from the AR4/WG1 Summary for Policymakers):

      It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values.

Uncertainties are admitted and discussed all around the reports. The presentation is not always logical, formulations vary from place to place, and the guidelines given to the authors are not always followed. The IAC report was pretty critical of these faults, but the existence of uncertainties is certainly acknowledged.

      • Rob Starkey

Pekka – would you agree that the range estimates of the IPCC regarding what a doubling of CO2 would do were skewed towards the perception of additional CO2 having a greater impact? It appears that this was done to try to build a more urgent “case for action” and was not really based on solid science.

      • Pekka

it’s totally wrong to claim that the IPCC claims the uncertainties are small. If they claimed that, how could their uncertainty band for the climate sensitivity be so wide (i.e. in the range 2°C to 4.5°C)?

Based on more recent observations (Spencer & Braswell 2006, Lindzen & Choi 2009 with addendum 2011, Spencer 2011), it looks like the uncertainty band is even greater than the IPCC thought back when they published AR4: say, “in the range of 0.6°C to 4.5°C”.

        Max

  20. I don’t agree with the way you formulate the point, but I do agree on something in that direction:

Some of the scientists have overemphasized the value of their results. This is a common (and humanly natural) fault of scientists in all fields, but here it has probably been done almost always in the direction of strong AGW. Thus I see a bias here, but I don’t know how strong it is.

Many of the estimates were made based on data up to around 2000. What we have seen after 2000 indicates that the same methods now lead to lower estimates and give support for a significant role of decadal oscillations (or something similar) in the warming up to 2000. Not all climate scientists appear willing to fully accept the value of this new evidence.

There is certainly some pressure within the climate science community to stay in line. Some well-known people exercise such pressure amazingly openly, as we have been able to witness. This is very counterproductive for the progress of knowledge. As far as those people are right, it acts against them, as it strengthens the skeptic arguments. When they are not right, it makes proper scientific discussion more difficult. Thus the consequences are negative whether they are right or wrong on the science.

    I could formulate my view also as follows:

I’m not sure how much truth there is in the skeptics’ criticism, but the way many AGW supporters have behaved justifies the existence of skeptics.

  21. Rose Dlhopolsky

FYI, for those who use OpenOffice: I managed to read the PPT files with OpenOffice.org 3.2.1. This was not necessary for all of the PPT files. The trick was to say “no” to the repair request and then wait for it to … well … repair itself.

  22. Eric Ollivet

    Model V&V

Of course NASA is familiar with models and their verification & validation. NASA has already issued a standard on this topic:
    NASA-STD-7009

In any field of science, models are used, and their V&V is a key issue, as it provides confidence in the models’ ability to produce reliable forecasts for decision making. Everywhere, the baseline procedure for validating a model and certifying its ability to reproduce the “real world” is to confront the model’s forecasts with experimental data, for different test cases.
See the following examples:
1. Charles M. Macal’s presentation here
2. A good paper from Los Alamos Nat. Lab

For climate science and models, the experimental data are (past) observations.
And so far, the main outcome of this cross verification & validation is that NONE OF THE CLIMATE MODELS IS ABLE TO CORRECTLY REPRODUCE PAST CLIMATE TRENDS.
– They all fail to reproduce the 60-year oscillations caused by the PDO (and AMO)
– They fail to reproduce the cooling periods observed over [1880–1910], then [1940–1970], and since roughly 2000
– They fail to reproduce the equivalence of the warming rate (+0.16°C per decade) observed over the [1910–1940] and [1970–2000] periods: the calculated warming over [1910–1940] is 2.5 times lower than observed… (a sketch of this kind of trend check follows below)

This inability to reproduce the “real world” formally invalidates ALL these models and forbids any decision making based on these models’ predictions.

Therefore, climate model V&V should be one of the main issues for climate science for AR5 and, more generally, for the coming decade.
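The trend checks listed above are straightforward to state in code. Below is a sketch using synthetic placeholder series; in practice, real observed and modeled series would be loaded from, e.g., the KNMI Climate Explorer mentioned earlier in the thread:

```python
# Sketch of the hindcast trend check described above. The two series here are
# synthetic placeholders; only trend_per_decade() is the substantive part.
import numpy as np

def trend_per_decade(years, temps, start, end):
    """Least-squares linear trend, in deg C per decade, over [start, end]."""
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], temps[mask], 1)[0]
    return 10.0 * slope_per_year

rng = np.random.default_rng(2)
years = np.arange(1880, 2011)
obs = 0.006 * (years - 1880) + rng.normal(0.0, 0.1, years.size)    # fake "observations"
model = 0.006 * (years - 1880) + rng.normal(0.0, 0.1, years.size)  # fake "hindcast"

for lo, hi in ((1910, 1940), (1970, 2000)):
    print(f"[{lo}-{hi}]  obs {trend_per_decade(years, obs, lo, hi):+.2f} C/decade,"
          f"  model {trend_per_decade(years, model, lo, hi):+.2f} C/decade")
```

If a model’s [1910–1940] trend comes out at a fraction of the observed one, as Eric says it does for the hindcasts, that is a concrete, quantitative validation failure rather than a matter of opinion.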

  23. As if on cue…here comes another NASA “climate” mission ready to collect 3 years of data that mathemagically “will eventually be used to improve the accuracy of climate forecast models“. Yawn.