What can we learn from climate models? Part II

by Judith Curry

In my original essay on this topic in October 2010, my short answer to this question was “I’m not sure.”  My current thinking reframes the question in terms of the fitness for purpose of climate models across a range of uses and applications.

This post will eventually get to a talk that I gave a few weeks ago, but first some context and background for the venue and audience.

DOE BERAC

I gave a talk on this topic several weeks ago at the DOE BERAC meeting.  The agenda and presentations of the meeting are [here].

Some of the presentations by the DOE Office of Science administrators should be of general interest.

I have to say that I came away from this meeting very impressed with DOE science.

JC’s presentation

As a new member of BERAC, I was invited to give a one-hour science presentation; my talk, “What can we learn from climate models?”, can be found at [BERAC curry talk].

I’ve taken the text from my slides and put some narrative around it, and embedded some links to references and my previous posts for background.

What can we learn from climate models? 

The Strategic plan for the DOE Climate Change Research Program (2009) provides the following overarching statements:

  • “Priorities for climate change research have moved beyond determining if Earth’s climate is changing and if there is a human cause. The focus is now on understanding how quickly the climate is changing, where the key changes will occur, and what their impacts might be. Climate models are the best available tool for projecting likely climate changes, but they still contain some significant weaknesses.”
  • “Projections of future climate change . . are the basis of national and international policies concerning greenhouse gas emission reductions.”

In the 2008 Report CCSP SAP 3.1 Climate Models: An Assessment of Strengths & Weaknesses, the path of climate modeling in the U.S. is articulated as:

The science of climate modeling has matured through finer spatial resolution, the inclusion of a greater number of physical processes, and comparison to a rapidly expanding array of observations.

With increasing computer power and observational understanding, future models will include both higher resolution and more processes.

Strategic planning regarding climate modeling is currently underway in the context of a NRC Study in Progress on A National Strategy for Advancing Climate Modeling.   From what I can glean from meeting agendas and committee membership and other information provided about the project, the following topics seem to be the main focus:

  • Increasing resolution and adding complexity
  • Fully interactive earth system models (chemical, biogeochemical, land, cryosphere); interface with human systems models
  • Seamless prediction across timescales; data assimilation and initialization
  • Downscaling for regional applications
  • Infrastructure
  • Communication of climate model results (including uncertainty, credibility); engagement with stakeholders:  usefulness for decision making

So the U.S. is continuing on a path that builds upon the current climate modeling paradigm by adding more complex physical processes (e.g. carbon cycle) and increasing the model horizontal resolution.

I think there are some deep and important issues that aren’t receiving sufficient discussion and investigation.  Some issues of concern:

  • There is a misguided confidence in, and “comfort” with, the current GCMs and their projected developments that is not consonant with the understanding and best practices of other fields (e.g. nonlinear dynamics, engineering, regulatory science, computer science).
  • GCMs may not be the most useful option to support many decision-making applications related to climate change.
  • Is the power and authority that is accumulating around GCMs, and the expended resources, deserved?  Is it possibly detrimental, both to scientific progress and policy applications?

Why do climate scientists have confidence in climate models?

With regard to the question in the last bullet, Heymann (2010) poses the questions surrounding this issue in the following way:

The authority with which climate simulation and other fields of the atmospheric science towards the close of the twentieth century was furnished has raised new questions.

  • How did it come about that extensive disputes about uncertainties did not compromise the authority and cultural impact of climate simulation?
  • How did scientists manage to reach conceptual consensus in spite of persisting scientific gaps, imminent uncertainties and limited means of model validation?
  • Why, to put the question differently, did scientists develop trust in their delicate model constructions?

I wrote a previous essay entitled The culture of building confidence in climate models.   There are two basic approaches for building confidence in models:

  • Formal approach:  verification & validation; explicit analysis of model errors, including a detailed analysis of model interactions
  • Informal approach:  GCM modelers’ personal judgment as to the complexity and adequacy of the models

For GCMs, the informal approach has dominated, because model complexity limits the extent to which model processes, interactions and uncertainties can be understood and evaluated.

So why do climate modelers have confidence in their climate models?

Confidence derives from the theoretical physical basis of the models, and the ability of the models to reproduce the observed mean state and some elements of variability.  Climate models inherit some measure of confidence from successes of numerical weather prediction.

‘Comfort’ relates to the sense that the model developers themselves have about their model, which includes the history of model development and the individuals that contributed, the reputations of the various modeling groups, and the consistency of the simulated responses among different modeling groups and different model versions.  This kind of comfort is arguably model truthiness.

Knutti (2007) states:  “So the best we can hope for is to demonstrate that the model does not violate our theoretical understanding of the system and that it is consistent with the available data within the observational uncertainty.”

Based upon my interactions with modelers from other fields, mostly in the blogosphere (here and also at ClimateAudit), there is ‘rising discomfort’ about climate models.  The following concerns have been raised:

  • Predictions can’t be rigorously evaluated for on the order of a century
  • Insufficient exploration of model & simulation uncertainty
  • Impenetrability of the model and formulation process; extremely large number of modeler degrees of freedom
  • Lack of formal model verification & validation, which is the norm for engineering and regulatory science
  • Circularity in arguments validating climate models against observations, owing to tuning & prescribed boundary conditions
  • Concerns about fundamental lack of predictability in a complex nonlinear system characterized by spatio-temporal chaos with changing boundary conditions
  • Concerns about the epistemology of models of open, complex systems

Verification and Validation

Verification and validation (V&V) is the process of checking and documenting that a model is built correctly and meets its specifications (verification), and that it is an accurate representation of the real world from the perspective of the model’s intended uses (validation). Previous posts on climate model V&V are [here and here].
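
To make the ‘verification’ half of this concrete, here is a minimal sketch (purely illustrative; the toy advection scheme, grid size and tolerances are my own assumptions, not anything from an actual modeling center’s test suite) of the kind of automated check that verification implies: a first-order upwind advection step is tested for exact conservation of the transported quantity and for the absence of new extrema.

```python
import numpy as np

def upwind_step(q, courant):
    """One first-order upwind step for 1-D advection on a periodic grid.
    Stable and monotone for 0 <= courant <= 1."""
    return q - courant * (q - np.roll(q, 1))

def test_conservation_and_monotonicity():
    rng = np.random.default_rng(0)
    q = rng.random(200)                      # arbitrary initial tracer field
    total0, qmin, qmax = q.sum(), q.min(), q.max()
    for _ in range(500):
        q = upwind_step(q, courant=0.5)
    # Verification checks: properties the discrete scheme must satisfy
    assert abs(q.sum() - total0) < 1e-10 * abs(total0)          # tracer mass conserved
    assert qmin - 1e-12 <= q.min() and q.max() <= qmax + 1e-12  # no new extrema

if __name__ == "__main__":
    test_conservation_and_monotonicity()
    print("verification checks passed")
```

Validation, by contrast, compares the full model against observations of the real system, which is where the harder questions discussed below arise.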

Arguments for climate model V&V:

  • V&V promotes and documents model robustness, which is important for both scientific and political reasons.
  • Lack of V&V is a major source of discomfort for engineers

Arguments against climate model V&V:

  • V&V is overkill for a research tool; inhibits agile software development
  • A tension exists between spending time and resources on V&V, versus improving the model.

V&V for climate models is gaining traction (more on this next week), mainly because climate models are now so complex that no one person can really wrap their head around the code, and much more documentation is needed for the users of climate models, especially community models such as NCAR’s Community Earth System Model.

So, what kind of V&V makes sense for climate models?  The same V&V used for a NASA space launch isn’t necessarily appropriate for climate models.  Several papers have been written on V&V for scientific models with recommended approaches:

From Roy and Oberkampf (2011) A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing:

“The framework is comprehensive in the sense that it treats both types of uncertainty (aleatory and epistemic), incorporates uncertainty due to the mathematical form of the model, and it provides a procedure for including estimates of numerical error in the predictive uncertainty.”
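
As a rough illustration of the aleatory/epistemic distinction in such a framework (the toy relation, parameter values and sample sizes below are my own assumptions, not Roy and Oberkampf’s example): aleatory uncertainty is propagated by sampling a known distribution, while epistemic uncertainty is swept over an interval, yielding a family of output distributions rather than a single one.

```python
import numpy as np

# Toy relation: equilibrium warming dT = F / lambda (forcing over feedback parameter)
def toy_model(forcing, lam):
    return forcing / lam

rng = np.random.default_rng(42)

# Aleatory uncertainty: forcing treated as a random variable with a known distribution
forcing_samples = rng.normal(loc=3.7, scale=0.4, size=10_000)   # W m-2

# Epistemic uncertainty: feedback parameter only known to lie in an interval
for lam in np.linspace(0.8, 2.0, 7):                            # W m-2 K-1
    dT = toy_model(forcing_samples, lam)
    print(f"lambda = {lam:4.2f}:  5-95% warming = "
          f"{np.percentile(dT, 5):4.2f} .. {np.percentile(dT, 95):4.2f} K")
```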

From Sargent (1998) Verification and validation of simulation models:

“The different approaches to deciding model validity are presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.”

From Pope and Davies (2011) Testing and Evaluating Climate Models:

Range of techniques for validating atmosphere models given that the atmosphere is chaotic and incompletely observed:

  • Simplified tests against analytical or reference solutions (see the convergence sketch after this list)
  • Single column tests for physics components
  • Dynamical core tests e.g. numerical convergence, aquaplanet simulations
  • Realistic climate regimes e.g. compare observations, multiple models
  • Double call tests to assess the impact of model changes
  • Spin-up tendencies to evaluate model biases
  • Tests of climate models operating as numerical weather prediction models
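
As an illustration of the first item in this list (a toy example, not an actual dynamical core test; the scheme, resolutions and Courant number are assumed for illustration), here is a convergence check of a 1-D advection scheme against the exact analytical solution:

```python
import numpy as np

def advect(n, courant=0.5, revolutions=1.0):
    """Advect a sine wave around a periodic 1-D domain with first-order upwind
    and return the max-norm error against the exact (analytical) solution."""
    x = np.arange(n) / n
    q = np.sin(2 * np.pi * x)
    steps = int(round(revolutions * n / courant))
    for _ in range(steps):
        q = q - courant * (q - np.roll(q, 1))
    exact = np.sin(2 * np.pi * x)   # after whole revolutions the exact solution equals the initial state
    return np.abs(q - exact).max()

e1, e2 = advect(100), advect(200)
print(f"error at n=100: {e1:.3e}, at n=200: {e2:.3e}, "
      f"observed order ~ {np.log2(e1 / e2):.2f}")   # ~1 for a first-order scheme
```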

From the WGNE Climate Model Metrics Panel, Gleckler et al. (2008):

Identify a limited set of basic climate model performance metrics based on observations
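
In the spirit of such metrics (this is only a schematic; the field, the toy ‘observations’ and the two placeholder models are assumptions, not the actual Gleckler et al. metric suite), a basic performance metric is simply an area-weighted RMS error of a simulated climatology against an observed one, tabulated across models:

```python
import numpy as np

def area_weighted_rmse(model, obs, lat):
    """Area-weighted (cos-latitude) RMS error of a lat-lon climatology."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    return np.sqrt(np.sum(w * (model - obs) ** 2) / np.sum(w))

# Placeholder fields standing in for, e.g., an annual-mean surface temperature climatology
lat = np.linspace(-89, 89, 90)
nlon = 180
obs = 288.0 - 30.0 * np.sin(np.deg2rad(lat))[:, None] ** 2 * np.ones((1, nlon))
models = {"model_A": obs + np.random.default_rng(1).normal(0.0, 1.0, obs.shape),
          "model_B": obs + np.random.default_rng(2).normal(0.0, 2.5, obs.shape)}

for name, field in models.items():
    print(f"{name}: RMSE = {area_weighted_rmse(field, obs, lat):.2f} K")
```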

Climate Models: Fit for (what?) purpose  

Validation of a model occurs in the context of the intended purpose of the model.  Consider the following different applications of climate models:

  • Numerical experiments to understand how the climate system works; sensitivity studies
  • Simulation of present and past states to understand planetary energetics and other complex interactions
  • Attribution of past climate variability and change
  • Simulation of future states, from decades to centuries
  • Prediction and attribution of extreme weather events
  • Projections of future regional climate variation for use in model-based decision support systems
  • Guidance for emissions reduction policies
  • Projections of future risks of black swans & dragon kings

These applications are by no means exhaustive, but they cover a range of territory from purely scientific applications to purely policy applications.  Let’s assess how well climate models that are General Circulation Models (GCMs) are suited to each of these applications.

• Explore scientific understanding of the climate system

GCMs are certainly used to explore scientific understanding of the climate system.  However, a major challenge is that nearly all of the resources ($$ and personnel) are being spent on IPCC production runs, with little time and $$ left over for innovation and scientific studies.  It seems that the main beneficiaries of the IPCC production runs are the impacts community and surrounding sciences (e.g. ecosystems).

What is needed to further our scientific understanding of the climate system is a plurality of models with different structural forms and different  levels of complexity in physical processes.  This is difficult if $$ and personnel are focused on GCMs and IPCC production runs.

•  Attribution of past climate variability and change

•  Simulation of plausible future states

•  Support for emissions reduction policies

These applications are at the science-policy interface and are the main applications for the IPCC.  The challenges of using GCMs for these applications include:

  • require adequate simulation of natural internal variability on multi-decadal to century time scales, which has not been achieved
  • solar forcing: better understanding of historical 20th century forcing; solar sensitivity studies conducted as part of attribution assessments; investigation of solar indirect effects; development of scenarios of 21st century solar forcing

For these applications, GCMs may be less effective than intermediate models (with lower resolution and a much greater ensemble size) in developing an understanding of climate sensitivity and attribution.

•  Projections of future regional climate variation for use in model-based decision support systems

GCMs currently have little skill in simulating regional climate variations; it is unclear how much increased resolution will help.  Dynamical & statistical downscaling adds little value, beyond model output statistics to account for local effects on surface variables.

GCMs are probably not the best approach for supporting regional decision making.  Of equal or greater usefulness for such applications would be to:

  • improve understanding of historical/paleo regional climate dynamics and black swan events
  • include a broader range of future scenarios of natural forcing changes (e.g. solar, volcanoes) and natural internal variability
  • develop creative, regional approaches to scenario development, including population and land use changes and alternative policy scenarios

•  Prediction and attribution of extreme weather events

Challenges for GCMs:

  • Climate models do not currently explicitly predict many types of extreme weather events (e.g. hurricanes, flash floods, tornadoes)
  • Much higher resolution climate models with much larger ensemble sizes are necessary (but probably not sufficient)

Other approaches:

  • Greatest short term contribution would come from regional historical and paleo analyses of extreme events
  • Climate dynamics approach, interpreting past extreme events in context of teleconnection and climate regimes, blocking patterns

• Projections of future risks of black swans & dragon kings

The possibility of future black swans and dragon kings is not only the most interesting issue from a scientific perspective, but arguably the most important from the policy perspective.  GCMs are currently incapable of predicting emergent phenomena, e.g. abrupt climate change.  It is not at all clear that GCMs will be able to generate counterintuitive, unexpected surprises.  The current GCMs have become ‘too stiff.’

Other possible approaches include synchronization in spatio-temporal chaos and other theoretical developments from nonlinear dynamics and network theory.

Usefulness for policy making

A conclusion that I draw from this analysis is that a completely general, all encompassing climate model that is accepted by all scientists and is fit for all purposes seems to be an idealistic fantasy.  Instead, we need a plurality of climate models that are developed and utilized in different ways for different purposes.  For decision support, the GCM centric approach may not be the best approach. Given the compromises made for multiple purposes, GCMs may not be the optimal solution for any of these purposes.

A very provocative paper by Shackley et al. addresses the following issues:

“In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations. The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement. Additionally, a rather technocratic policy orientation to climate change may be supported by such science, even though it involves political choices which deserve to be more widely debated.”

So, are GCMs especially policy useful?

Main advantages:

  • potential for providing regional climate change scenarios (this potential is currently unrealized)
  • perception that complexity = scientific credibility; sheer complexity and impenetrability, so not easily challenged by critics

Disadvantages:

  • demands massive computing and personnel resources; creates dependency on a few centers and their experts
  • slow to incorporate new scientific insights or understanding
  • precludes conducting extensive sensitivity and uncertainty analyses
  • precludes rapid exploration of different model assumptions and policy scenarios
  • not user friendly for advisory scientists or policy makers

So let’s consider the key policy issues, and assess how useful GCMs are.

CO2 mitigation policies:

  • GCMs have a role to play, but large ensembles from lower-order models with an interactive carbon cycle may be the best solution for determining sensitivity (a toy sketch follows)
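
As a toy sketch of what ‘large ensembles from lower-order models’ can look like (the one-box energy balance model, parameter ranges and forcing ramp below are illustrative assumptions, not a recommendation), thousands of members can be run in seconds, giving a full distribution of the transient response in a way a GCM ensemble cannot:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, years = 5000, 140

# One-box energy balance model:  C dT/dt = F(t) - lambda * T
C   = rng.uniform(6.0, 10.0, n_members)                      # W yr m-2 K-1, heat capacity
lam = np.clip(rng.normal(1.3, 0.35, n_members), 0.4, None)   # W m-2 K-1, feedback

F = 3.7 * np.log2(np.linspace(1.0, 2.0, years))   # forcing ramp to doubled CO2, W m-2

T = np.zeros(n_members)
for t in range(years):
    T += (F[t] - lam * T) / C                     # one-year forward Euler step, K

print(f"warming at CO2 doubling, median      : {np.median(T):.2f} K")
print(f"warming at CO2 doubling, 5-95% range : "
      f"{np.percentile(T, 5):.2f} .. {np.percentile(T, 95):.2f} K")
```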

Regional climate change and extreme events:

  •  natural climate variability is at least as important as AGW, particularly on decadal time scales
  • much to be learned from the climate dynamics of past and paleo regional climates and extreme events
  • regional impact models can be forced by a wide range of creatively produced scenarios

Understanding and representing uncertainty

The challenges:

  • Uncertainty and ignorance assessment is a critical element for decision making strategies
  • Parameter and parameterization uncertainty is inadequately assessed for individual models or multi model ensembles
  • Ensemble size for initial condition uncertainty is far too small
  • Uncertainty associated with model structural form is rarely assessed

Other approaches for assessing uncertainty:

  • Stochastic models; stochastic parameterizations
  • Monte Carlo techniques and sensitivity analysis (see the sketch after this list)
  • Uncertainty management approaches such as NUSAP
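
A minimal sketch of the Monte Carlo sensitivity-analysis idea in the second bullet (the cheap surrogate model and the parameter ranges are assumptions chosen for illustration): sample the uncertain parameters jointly, run the inexpensive model many times, and rank the parameters by how strongly they are associated with the output of interest.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Uncertain inputs (illustrative ranges only)
lam   = rng.uniform(0.8, 2.0, n)     # feedback parameter, W m-2 K-1
capac = rng.uniform(5.0, 12.0, n)    # effective heat capacity, W yr m-2 K-1
aero  = rng.uniform(-1.5, -0.3, n)   # present-day aerosol forcing, W m-2

# Cheap surrogate model: 100-year warming under linear GHG and aerosol forcing ramps
ghg = np.linspace(0.0, 3.0, 100)     # W m-2
T = np.zeros(n)
for t in range(100):
    T += (ghg[t] + aero * (t / 99.0) - lam * T) / capac   # one-year Euler step

# Rank parameters by correlation with the output (a crude but cheap screening)
for name, p in [("lambda", lam), ("heat capacity", capac), ("aerosol forcing", aero)]:
    print(f"{name:16s} correlation with warming: {np.corrcoef(p, T)[0, 1]:+.2f}")
```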

In the climate community, ‘uncertainty’ is regarded as something that needs to be communicated to decision makers.  However, uncertainty quantification, assessment, and management are central to scientific understanding and to the overall accountability and usefulness of the models for decision making.

Summary

  • A completely general, all encompassing climate model that is accepted by all scientists and is fit for all purposes seems to be an idealistic fantasy
  • Increasing complexity (adding additional sub models) is less important for many applications than ensemble size
  • We need a plurality of climate models that are developed and utilized in different ways for different purposes.
  • For many issues of decision support, the GCM centric approach may not be the best approach
  • Given the compromises made for multiple purposes, current GCMs may not be the optimal solution for any of these purposes

JC comment:  I realize that this thread is pretty technical, including jargon, etc., and may not be easily understood by denizens who aren’t modelers or who haven’t been following along on the earlier climate modeling threads.  Tamsin Edwards’ new blog should help in this regard.

282 responses to “What can we learn from climate models? Part II”

  1. I enjoyed the post but I was a little bit flummoxed by your suggestion that one of the advantages of GCM’s for policy purposes was –

    perception that complexity = scientific credibility; sheer complexity and impenetrability, so not easily challenged by critics

    Forgive me if I’ve misunderstood this, but isn’t that a bit cynical? Maybe that was the intention, but if so, the only way – according to your post – in which GCM’s are policy useful is the potential ability to make regional predictions.

    That would seem a scant reward for the billions being invested..

  2. John Kannarr

    Arguments against climate model V&V:

    ■V&V is overkill for a research tool; inhibits agile software development
    ■A tension exists between spending time and resources on V&V, versus improving the model.

    A research tool that is used to promote the ideas that will drive major policy decisions and potentially cause economic havoc all over the planet? This is a lot more than a research tool, and very frightening to think it would not be subject to the most rigorous V&V possible. Where is the precautionary principle when we really need it?

    How can we speak of improving the model if we don’t know how well it even works, i.e., V&V?

    • k scott denison

      If we don’t want to spend the time and resources doing V&V on the models, then we should not allow them to be used in informing policy, period.

    • John Carpenter

      In order to improve the model, one needs to do some V&V. They are intertwined with one another.

    • Steve Milesworthy

      False dichotomy. V&V is done on climate models. But the V&V you do on a research tool does not necessarily have to look the same as the V&V you do on a software package that will be directly used by thousands of users, or that will cost billions if it crashes with a divide by zero error just as your expensive spacecraft is about to enter orbit.

      • Latimer Alder

        It ain’t going to cost billions if we rely on it for policy decisions. It’ll be in the mega trillions.

        I want the frigging models tested and verified and validated and torn apart and put back together again several times by the nastiest ugliest dudes on the planet who throw all the sh*t and slime and gunge that they can possibly think of at them…and then some.

        And once the models have successfully predicted actual climate changes for 20 or 50 real years, then maybe, just maybe, it might be time to start to use them to help in policy matters.

        And it that is an inconvenience, or possibly even a hindrance or a setback to ‘the modelling community’, that is just the price they have to pay for a publicly funded career. Tough.

      • In no other arena of public policy, none what so ever, in which models, methods, software, applications and users play a critically important role are research-grade ( playing in the sandbox grade ) tools used for decision support. None.

      • Steve Milesworthy

        Latimer, your second paragraph is what is done (with all due respect to the handsome Gavin). Your third paragraph is not a valid requirement – you have no reason for believing that two identical earths would produce the same answer after 20 or 50 years.

      • Latimer Alder

        @steve m

        Please describe the recruitment process and reward structures for the nastiest meanest dudes on the planet that you assert are asked to kick the sh*t out of the models.

        Because if they are not completely independent from, and motivated by completely different things from ‘the modelling community’, then I fear you are underestimating the task required.

        One technique used in commercial software development (like for serious grown-up stuff that makes the internet work, not just desktops) is to have entirely separate QA and development teams. Separate management, separate objectives, separate career structure. Separate employers if needed.

        The QA team is paid for finding the bugs..the development team for not writing them in the first place. There is therefore a ‘creative tension’ (i.e they f…g collectively hate each other – though maybe not individually) between the two that leads to far better quality code and a far lower in-service bug rate than would otherwise happen. The QA team gets nastier and craftier at finding bugs and the development team at writing good code. It is, if you will, an arms race. And they both learn from in-service bug uncovering.

        I use the term ‘bug’ advisedly. Of course it not only covers actual program errors like divide by zero (ouch, who tested that one then!), but design flaws and all the other V&V stuff discussed above.

        Having seen and read about the way the supposed quality control of ‘peer-review’ has become (in climatology at least) no better than a cursory ‘pal review’, I need a lot of convincing that modelling QA is anything like as rigorous as it is in commercial code.

        Happy to be proved wrong.. So please outline the processes that are actually gone through.

      • Latimer Alder

        @steve m

        Sorry to be so dense. The implications of this remark took a long time to get through my pre-teatime brain.

        ‘you have no reason for believing that two identical earths would produce the same answer after 20 or 50 years’

        So if I make predictions for 50 years ahead using one of the models, then you seem to be saying that you have, as a matter of principle, no means of knowing whether the result will bear any relationship to the reality on this earth. Not just that it needs better modelling or more powerful hardware or a finer density. But that it actually is no good at all. The results ‘might’ be true, or they might not.

        Please clarify in case I have misunderstood.


      • A model could accidentally match data for the wrong reasons and appear to be correct.

      • Steve Milesworthy

        Latimer, different bits of models are written by different groups. For at least some of the models there is a verification process to detect and remove unintended consequences (code reviews, regression testing and so forth) and a validation process (e.g. checking a range of climatological fields in a pre-agreed way to demonstrate the model is as good as or better than it was).

        The whole model is then used as a tool by a wide variety of different users for different reasons (climate, nwp, seasonal, academic research, IPCC runs, ensembles etc. etc.) So there are a lot of users of the models with vested interest in the models being fit for their purpose. Probably the majority of them don’t give a stuff about the model’s sensitivity – they care more that it does a good forecast, has a plausible ENSO, models cooling due to volcanoes etc. etc.

        Your second point – your 50 year test is just not a good enough or useful enough test. Simple as that. There are better tests for models, and tests that can be done more reasonably, like: are the winds the right sort of strength in the right direction, does it demonstrate a reasonable monsoon process for the right reasons, etc. You appear to be picking a 50 year test because it is a test that requires delaying action.

      • andrew adams

        Latimer,

        You might find these links interesting

        http://scienceofdoom.com/2010/02/27/models-on-and-off-the-catwalk-part-one/

        http://scienceofdoom.com/2010/03/23/models-on-and-off-the-catwalk-part-two/

        I think they give some good examples of the strengths and weaknesses of models, which we can see without having to wait 50 years.

      • Latimer Alder

        @steve m

        You are making matters worse, not better.

        If I am to understand you correctly, the models are in fact a real dog’s breakfast. Bits are produced at different times and by different groups. To different quality standards (if any). By self-taught, self-selected FORTRAN coders of different standards of knowledge and even with totally different objectives or technical understanding. There is no overall systems architecture or design. I’m guessing that even basic stuff like a data dictionary (to ensure that different coders are talking about the same things in the same way) is weakly enforced (if at all). No means of coming to common understandings, and (I’m guessing here again) variable standards of documentation..starting with the ever popular ‘none at all’. Nobody to oversee the various endeavours.

        In IT we use the word ‘architecture’ a lot. It conveys nicely the idea that a building (and a computer system) is designed as a whole as well as being composed of individual bricks and fixtures and fittings and roofing tiles and windows. It is a building..and it can be fit for purpose dependent on its architecture, not just its components. They need to fit together to make a harmonious whole. You seem to have described an ‘architecture-free’ approach, which does not bode well at all. Aeroplanes with undersized wings or asymmetric engines don’t work very well either. You need an architect function and an architecture – even if, as in open source, they are collective not individual efforts.

        Then these scraps of code are somehow shoehorned together (seems like a few miracles would be appropriate to make them fit), and then let loose, untested, on the ‘climate community’.

        And I don’t even dare to begin to think about how you go about fixing it when something goes wrong or things change.

        Results are produced (misleadingly called ‘experiments’), papers written and published. They are then (assuming they meet with the approval of the great gods of the field and aren’t heretical or against Phil Jones’ famous ‘instinct’) incorporated into the IPCC reports which then go to influence government and international policy around the world.

        Wow oh wow.

        I am struggling to convey exactly how dismayed – and terrified – I am by the prospect that we rely on this sort of system for anything beyond recreational and casual game-playing. As Dan Hughes so rightly said upthread.

        ‘In no other arena of public policy, none what so ever, in which models, methods, software, applications and users play a critically important role are research-grade ( playing in the sandbox grade ) tools used for decision support. None’

        But what you have described doesn’t even get into the sandbox. It isn’t even old enough to have nappies (diapers).

        Again, please correct me if I have misunderstood.

      • Latimer Alder

        @steve m

        First, I wasn’t applying a 50 year test because it needed delaying action. I chose it because you had referred to the possibility of getting different results in 20 or 50 years.

        Matters are even worse if you take 20 years…it seems that the models are completely f…useless even on that short timescale. I hope to be still alive in 20 years from now, but you are (I think) saying that there’s no reason to believe that it is even theoretically possible to see that far ahead.

        Second, you casually say

        ‘There are better tests for models, and tests that can more reasonably done like are the winds the right sort of strength in the right direction, does it demonstrate a reasonable monsoon process for the right reasons etc’

        which is entirely unconvincing. It may be that these are easier tests to pass. But if we use a model to forecast the temperature – and hence all the other stuff that people get excited about – 20 years out, the key thing it has to get right is the temperature. Not just the winds or the monsoon or the thunderstorms or the snow …nice though these might be, they are not the thing that you are trying to find.

        To use a football analogy, if Exeter City were to set their heart on winning the European Cup, it would be little consolation to remark that they scored a very nice goal against Wycombe Wanderers in the League 1 relegation battle. It may well be true, but it is irrelevant. Apollo 11 would have got few brownie points if it had missed the Moon completely but Neil Armstrong commented on how good the paintwork was looking after re-entry as a consolation.

        So I disagree entirely that these are ‘better tests’. They are more convenient tests perhaps. If you are the programmer they are probably great. You set yourself a low bar and you jump it, then pat yourself on the back and tell the world what a clever chap you are. But they are of very limited practical value when what we want to know is the frigging temperature. The models should not be constructed (nor tested) with the programmers’ self-esteem as a prime concern, but with the public’s interest in getting the predictions right.

      • WisconsinitesForGlobalWarming

        @Steve M

        So far it appears you have laid out why the models are totally unsuitable for any purpose, let alone the purpose of informing policy decisions that will cost many trillions of real dollars.

        Perhaps you can tell us exactly what the models are built to forecast and how that applies to informing policy.

        Otherwise, I believe you have made the case as to why investment in models should be stopped immediately.

      • Steve Milesworthy

        You got all that from my little reply? Of course you didn’t, so yes you have understood.

        Try taking my comment as a whole instead of picking one part and pretending that it is the only relevant bit. So 1) *relatively* independent groups working on different aspects of the problem, 2) over-arching system standard and testing to prevent unintended consequences getting through (ie. bog-standard coding errors or badly written unreadable code), 3) seriously hard scientific analysis of the outcome starting with the developers’ colleagues and managers, and then moving on to the people producing your forecast and trying to sell the results to a wide group of tight-fisted customers, the collaborator groups doing the same and the academic community with its members each looking to make a name for themselves and of course scientists like Lindzen and Spencer trying to find things wrong with the models.

        Without the first two, the product probably would struggle to meet the third task long term as improvements would be stultified by a growing set of horrible errors, unexpected behaviours and difficulties in maintaining code base. For example, will the model behave in a 1km forecast as well as a 250km low resolution climate configuration without substantial and inexplicable changes (fudge factors) being made to its formulation (use of parametrizations for sub-gridscale processes don’t count because they are explicable).

      • Steve Milesworthy

        WisconsinitesForGlobalWarming, where have I laid out why they are unsuitable?

        If you believe failure of modellers to pray before your particular god of V&V means the models are no good, then that’s a religious choice not a rational choice. Different procedures apply to different sorts of software.

        If you know exactly what you want to do then you follow a waterfall process; if you aren’t sure, you follow a different procedure.

        If you are paying someone to write software you want to sell then you have very clear specifications because they will write as little as they need to to get the money. If the future of your job and your long term respect from your colleagues depends on you and them using the software you are writing for many years, then you enforce standards upon yourself (alongside those forced upon you) because otherwise they won’t use your software and you won’t have a career because your scientific output will be nil.

        Perhaps you can tell us exactly what the models are built to forecast and how that applies to informing policy.

        I’m not a climate scientist, but I understand they are built to provide the best simulation of climatology (average climate) and weather (for nwp) based on how well they model a range of variables whose choice is made using scientific judgement. The application to policy is dependent on the uses the model gets put to – for example IPCC scenario runs feed into mitigation, decadal and downscaling studies feed into adaptation. But this is *after* the model (the core science bit) is built.

      • Latimer Alder

        @steve m

        No time now to read and consider your reply in detail, but just one immediate thought

        How many of the ‘tight-fisted customers’ are a) spending their own (not publicly funded) money and b) have a genuine choice about whether to use the/your models or not?

        Further comments later.

      • Latimer Alder

        @steve m

        ‘If you are paying someone to write software you want to sell then you have very clear specifications because they will write as little as they need to to get the money. If the future of your job and your long term respect from your colleagues depends on you and them using the software you are writing for many years, then you enforce standards upon yourself (alongside those forced upon you) because otherwise they won’t use your software and you won’t have a career because your scientific output will be nil.’

        What an incredible statement. Effectively you are saying that employing professional techniques and professionals to write code to merchantable quality is *necessarily* inferior to a bunch of amateurs because the amateur has a longer term interest in the success of the project.

        Bollocks.

        Lets take it one by one:

        ‘They will write as little as they need to get the money’. The implication is that people doing things for money (i.e. professionals) are inferior to amateurs who do it for ‘love’. That doesn’t seem to apply to any other field, so unless you can come up with very good reasons why, for example, Microsoft’s or IBM’s or Google’s programmers are inferior to those of GISS, I think this remark is garbage.

        Continuing on that line, the three companies – and hundreds and thousands of others – have invested hundreds of thousands of man-years in finding out how to design and write good code, how to organise software projects and how to maintain and support them. It has taken 60 years since the early days of computing and there is still a way to go. But their products underpin some of the most critical systems we know today…air traffic control, the internet, medical systems, on-line shopping, the banking transaction system…..there are hardly any everyday tasks that you undertake that do not use some or all of these products somewhere along the way.

        And they are not perfect…we can all produce the occasional examples of where they have failed. But it is in their commercial interest to produce better code (higher price, less support needed) more quickly (more product to sell), and to do so they have worked out better and better ways of doing so. These come from standards and tools and methodologies..which are pretty much understood – at least in outline – by now.

        To contrast all that with the idea that ‘the respect of your colleagues means that you impose standards on yourself’, and that those standards are therefore somehow ‘better’ than those used by a bunch of (lazy) commercial programmers, is ludicrous.

        It may be that the individual programmer is a nice guy and loves Gaia and is good to widows and orphans and stray cats. He may try extremely hard and get up early in the morning to do so. He may be a thoroughly good chap in every sense. But unless he is using the correct methods and techniques the quality and productivity of his work will be far less than that produced by the professionals.

        One might as well argue that consulting a witch doctor is a better idea than a GP since the latter only does it for the money, while the witch doctor does it for ‘love’. And learnt it from their direct interactions with nature. Meanwhile the GP was corrupted by having to go to medical school.

        It is abundantly clear from your descriptions that the ways in which the models are developed are a very, very, very long way from currently understood best practice for big software projects. And that they are being developed by those who (it seems) actively avoid and are actively opposed to any such practices. Their interest is in getting their ‘academic output’. Which seems to mean writing papers, not doing QA on their models.

        With conflicting objectives (papers vs quality) and limited resources it is clear that quality will inevitably take a back seat.

        Steve, you may be content to fly your family on holiday in a shaky old DC3 which has never had an airworthiness certificate, is maintained by a spare-time guy with no knowledge of aircraft design who is mostly occupied with his first love of angling, and crewed by a partially sighted pilot with no maps.

        But me, I prefer, when doing something where the cost of failure is so high, to go with the professionals who have a reliable track record of getting there each time and who know how to bloody do it.

        I wouldn’t touch a climate model with a barge pole.

      • Latimer

        I think you’ve summarized very well what “V&V” should be for any GCMs, which are used to set wide-ranging “policy”.

        Until you have this rigor of V&V, the GCM outputs should remain within the climate science community as interesting suggestions of what might be going on based on a series of fed-in assumptions, but should not be confused with scientific evidence or fact – and, most of all, should be kept out of any policy deliberations.

        Arguments against “wasting time” for extensive V&V are missing the point.

        The entire GCM model work is “wasted time” (for policy purposes) without it.

        Max


      • Steve Milesworthy

        Latimer, as usual you have read far too much of your own prejudices into what I say. I think you fail to understand why V&V for an air-traffic control system would need to identify different things from a climate or weather model.

        V&V on most software focuses a lot on individual components, and good architecture seeks to reduce the amount of coupling. Climate and weather models, though, are coupling problems, and furthermore, there is no clear right result, so much more validation needs to be done on the full (highly coupled) system in proportion to the amount of validation done on individual components.

        That models are exploring a highly coupled problem is one of the big reasons as to why the more common V&V techniques are not *sufficient* in most respects and not suitable in others.

        So a scientist developing an improvement to, say, a convection model can test her changes within the existing framework, or in a single-column framework, in order to identify coding issues, unexpected behaviours and so forth. And the system owners can carry out a full range of regression tests and code reviews to spot unexpected behaviours. But such testing is nowhere near sufficient as it is the full model testing that is more important.

        For example, if an air traffic control system crashes or produces a wild result one day a year, then lives are at risk. You can reduce the chance of crashing by testing individual components. You can reduce the wild results by running the full system with example data. But the range of example data is probably quite limited.

        For climate and weather models, running large numbers of trials on different sets of input data should exercise all the code paths and many of the possibilities. But there is masses of example data and you could conceivably run it for 30 model years before a failure was triggered.

        If a climate model crashes or produces a wild result the weather forecaster or climate researcher analysing the output will identify the wild result (a catastrophic tornado in Sussex or glaciers in the Atlas mountains, for example) and look to resolve the problem. Some of the models will have been developed and tested for many years (after their original basic formulation has been agreed).

        Finally, the AMIP and CMIP model intercomparison exercises lay bare the outputs of all the models, and thereby the credibility of the scientists and the institutions involved. The models are then available for criticism by all and sundry. See for example the debates that happened following Roy Spencer’s Remote Sensing paper.

      • Steve Milesworthy

        manacker. V&V is important, it is done and it is not “wasting time”. I haven’t, though, seen someone recommend a different V&V technique than those that are done, and explain why it would work better. In absence of that there is little to discuss.

    • JC is reporting the arguments advanced here, not advancing them herself! It is clear that she considers these to be excuses and motivations for evading validation.

  3. Why not start with one regional climate model that is capable of modeling climate in the USA?

    Winters are cooling at -4.2F per decade.

    The west coast doesn’t appear to be warming at all.

    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425003517650&data_set=14&num_neighbors=1

    etc etc.

    Get the USA right. Take 10 or 20 years. Then try and model all of NA. After 10 or 20 more years try another continent.

    • Peter Davies

      Bruce, your suggestion that sub-grid models could be developed progressively and added to would have superficial merit, but the complexities of a GCM would still apply to Regional Climate Models (RCMs), with the added detail pertaining to regional areas needing to be factored in. At least in the case of GCMs the regional factors could be averaged out.

      • The data is being tortured to claim there is some sort of global trend. There are regional and local trends that get ignored because they are all mashed together trying to prove climate is moving in one direction.

        In fact it is moving in all directions. Some station months cool and others warm all the time.

      • “Averaged out” is the circular argument in a nutshell. It assumes a global process independent of the regional climates.

  4. The text comprises too many word “shells”….after hearing the text for 3 minutes, it is hard to tell what the sense is all about….
    I am proposing that GCMs, which now begin in 1850 or 1750, be extended further into the past to include the LIA, thus back to 1650, 1550 and beyond. Then we would be able to verify GCMs against observations from history …
    thereby increasing their accuracy….
    JS

  5. Encyclopedic, and damning.
    ===========

    • AGREE! This is JC’s best post yet. My respect for her intellectual honesty and rigour has leapt several notches.

  6. Latimer Alder

    ‘Predictions can’t be rigorously evaluated for on the order of a century’

    So what do we do in 2111 if we discover that we’ve bet the farm on expensive policies from our 2011 models. And that they were all absolute crap? That in fact we’ve been seduced into doing exactly the wrong thing because we’ve just assumed/convinced ourselves that the models were right.

  7. David Young

    Judith, earlier today I left a lengthy (and hopefully balanced) post on the Lindzen seminar thread about modeling issues. I will copy it here since it’s pretty relevant, I think.

    This business of modeling complex systems (in fluid dynamics, we do chemistry, multi-phase fluids, thermodynamics and forcings too) is still in its infancy. The issue that I think is underappreciated by climate scientists is how sensitive their results may be to modeling “choices.” Trust me on this, there are thousands of choices. Climate science is probably no worse than others in this area, but it does seem to be rare to systematically look at the sensitivity of results to these thousands of choices. Some of the simpler ones are easy to do, but it gets harder as the models get more complex. Believe it or not, there is a rigorous theory for calculating these sensitivities for systems of partial differential equations in a fast and systematic way. It is becoming more widely used in simple applications like aerodynamics or structural analysis, but even here the field is still dominated by codes that are too numerically sloppy for it to be applied in a meaningful way.

    Once you start to apply this rigorous theory, and there is a big investment in code rewriting required to get to that point, you see all kinds of interesting and informative information. For example, you can actually use sophisticated optimization to determine parameters based on data. This is done for example all the time in geology, where seismic data is used to infer underground properties.

    There is no evidence that I’ve seen that climate scientists are aware of this theory. That’s understandable since they have so many pressures to just make more runs and add more “physics” to their models.

    In any case, I do think Fred would benefit by looking into Reynolds’ averaging if he has the mathematical training to understand it to get a better feeling for how subgrid models are constructed and tuned and how more terms are added over the years and the immense problems of validation and verification are handled (often not very well). It is fine to just repeat the words of others, but real understanding can enable you to go much further.

    I still think that the focus on discrediting Lindzen is strange. Like any scientist, he is clearly wrong about some things. What is strange about it is that, I think, he has a perspective that could be very valuable to the field.

    Whether aerosol models and forcings are “tuned” to match trends is a rather narrow issue without much relevance to the larger issue of model tuning and looking at sensitivity to these choices. By the way, Fred seems to have given us no insight into the aerosol interaction subgrid model itself, which surely must be complex and have lots of parameters. Tuning this model can have the same effect as tuning the aerosol forcings. So Schmidt’s comment may be technically true, but of no real significance. At least that is my suspicion, but I could be wrong on this.

    The problem here is that the understanding of complex models is very difficult to acquire. I am constantly learning new things myself. The issue of the models is not well suited to the “communication of science” mode of operation. The communicator inevitably is rather ignorant of a lot of details in other parts of the models. However, the idea of sensitivity of results to inputs or choices is more easy to understand. Then you can present a range of results that conveys uncertainty more effectively. It’s a constant problem, modeling is constantly used in industry and government. Those doing the modeling have a vested interest in certain outcomes and there is an incentive to present the results as more certain than they are. This is also true in medicine even though there are more controls in place there and a wider recognition of the conflicts of interests.

    The bottom line here is that whether Schmidt’s or Lindzen’s statements are narrowly true, or not, or maybe half true is a very minor issue except to those involved in the climate war as combatants. My suspicion is that both Schmidt and Lindzen have a contribution to make. The larger point is that in fact there are serious problems with the way complex models are built, run, and their results conveyed. This explains the narrow focus on this largely irrelevant issue in this thread; it’s something we can argue about superficially rather than getting to the real issue, which requires a more serious and rigorous learning experience.

    • David, your statement sums it up very nicely:

      “The larger point is that in fact there are serious problems with the way complex models are built, run, and their results conveyed.”

      • If we could only go back in time maybe…

        And up to today, what has been the total return on our past global investments into the AGW sciences?…

        A new model might suggest to sell all & go short.
        ‘O’

    • Peter Davies

      +1

    • David,

      You mention a “rigorous theory” for determining the sensitivities to choices, but you don’t mention what this theory is called or where it is described. I’d love to read more about this.

      • To Brian Blais: The rigorous theory is on the market, see:
        ISBN 978-3-86805-604-4 on German Amazon.de….
        In German, but easy to follow, no simulations, transparently
        calculated, a scary book for all Warmists, therefore they
        keep their mouths shut about its content…..but, you will see,
        soon to no avail….
        JS

      • brian, You can go to google scholar and search for “large scale PDE constrained optimization”. You will find a book I think published circa 2007 by Springer that is a Sandia conference proceedings. This book and the references therein are good. Unfortunately, it is pretty technical. If you have a good math background, it will be no problem. Basically, its the application of an old idea, optimal control, to partial differential equations (which all climate models use). The idea is that you can tell how the inputs and parameters influence the output of the PDE rigorously by means of something called the adjoint operator. Hope this helps.

      • brian, Sorry, I just checked Amazon and the book is available. Google scholar doesn’t seem to find it. The publication date is 2003 and the editors are L. T. Biegler, Omar Ghattas, Matthias Heinkenschloss, and Bart van Bloemen Waanders. I actually have a paper in the book, but it is not one of the better ones, unfortunately.
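
A minimal toy illustration of the adjoint idea described above (not code from the book, and not climate-model code; the Lorenz-63 system, forward Euler stepping and parameter choice are just for illustration): the adjoint of the time-stepping scheme propagates the gradient of an output backwards along the stored trajectory, giving the sensitivity of that output to a parameter for roughly the cost of one extra model run. A finite-difference perturbation run is included as a check.

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, nsteps = 0.005, 1000

def f(s, r):
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

def jac(s):
    x, y, z = s
    return np.array([[-sigma, sigma,   0.0],
                     [rho - z,  -1.0,   -x],
                     [y,           x, -beta]])

def run(r, s0=np.array([1.0, 1.0, 1.0])):
    """Forward Euler integration; returns the stored trajectory."""
    traj = [s0]
    for _ in range(nsteps):
        traj.append(traj[-1] + dt * f(traj[-1], r))
    return traj

# Objective J = final z.  One backward (adjoint) sweep gives dJ/drho.
traj = run(rho)
adj = np.array([0.0, 0.0, 1.0])            # dJ/dx at the final state
dJdrho = 0.0
for n in reversed(range(nsteps)):
    x_n = traj[n]
    dJdrho += adj @ (dt * np.array([0.0, x_n[0], 0.0]))   # dt * df/drho at step n
    adj = (np.eye(3) + dt * jac(x_n)).T @ adj             # adjoint of the Euler step

# Finite-difference check of the same sensitivity
eps = 1e-5
fd = (run(rho + eps)[-1][2] - run(rho - eps)[-1][2]) / (2 * eps)
print(f"adjoint dJ/drho = {dJdrho:.5f},  finite difference = {fd:.5f}")
```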

      • Peter Davies

        Brian, my understanding of this issue has to do with the discretising of PDEs forming the non-linear dynamical system that climate is.

        In normal circumstances non-linear PDEs are unsolvable and hence cannot be used in models unless some form of optimal control is used.

    • “This is done for example all the time in geology, where seismic data is used to infer underground properties”

      Agreed – then tested (V&V’ed) with actual drilling

    • Regarding sensitivity analysis and verification: I have links to information relating to sensitivity analysis. A lot of the earlier theoretical developments seem to have been carried out by D. G. Cacuci. A rough introduction of one method and its application to the original 1963 Lorenz system.

      Initial verification of some of the methods used for some of the fundamental models seems to be underway. Examples are

      Towards a public, standardized, diagnostic benchmarking system for land surface models,

      A standard test case suite for two-dimensional linear transport on the sphere,

      The Ocean–Land–Atmosphere Model (OLAM). Part I: Shallow-Water Tests,

      The Ocean–Land–Atmosphere Model (OLAM). Part II: Formulation and Tests of the Nonhydrostatic Dynamic Core.

      However, the Method of Manufactured Solutions (MMS) seems to have not yet been employed.
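
For what the Method of Manufactured Solutions looks like in practice (a minimal sketch assuming a 1-D diffusion equation as the target, purely illustrative and not drawn from the papers above): pick an arbitrary exact solution, derive the source term that makes it satisfy the PDE, feed that source to the solver, and confirm that the numerical error shrinks at the scheme’s expected rate under grid refinement.

```python
import numpy as np

D = 0.5                                    # diffusivity (assumed value)

def u_exact(x, t):
    """Manufactured solution, chosen by hand."""
    return np.sin(np.pi * x) * np.exp(-t)

def source(x, t):
    """Source term that makes u_exact satisfy u_t = D*u_xx + S exactly."""
    return (D * np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)

def max_error(nx, t_end=0.1):
    """Explicit (FTCS) solve of u_t = D*u_xx + S; return max error vs u_exact."""
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    nt = int(np.ceil(t_end / (0.25 * dx**2 / D)))   # keep the scheme stable
    dt = t_end / nt
    u, t = u_exact(x, 0.0), 0.0
    for _ in range(nt):
        u[1:-1] += dt * (D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                         + source(x[1:-1], t))
        t += dt
    return np.abs(u - u_exact(x, t_end)).max()

e1, e2 = max_error(20), max_error(40)
print(f"error nx=20: {e1:.3e}, nx=40: {e2:.3e}, observed order ~ {np.log2(e1/e2):.2f}")
```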

    • “Whether aerosol models and forcings are “tuned” to match trends is a rather narrow issue without much relevance to the larger issue of model tuning and looking at sensitivity to these choices”

      Actually, I think you are completely wrong on this. If you measure actual rates of physical processes you get rate constants +/- a few %, pressure terms +/- a few %, temperature terms +/- a few %. It is very easy to pick values within the error bars and get the answer you wish.
      In 1930 Chapman published his paper “On Ozone and Atomic Oxygen in the Upper Atmosphere”, where he modeled the formation and destruction of ozone in the atmospheric column.
      The really nice thing is that Chapman’s models suggested that there would be about twice as much ozone as there was in reality. The ‘ozone gap’ thus suggested that there was a big piece of the chemistry missing. Investigating reactions of nitrogen oxides, then sulphur oxides and finally halogens, produced more chemistry, and a result that was closer to reality.
      Chapman’s model was a big arrow that said something is missing; it converted an ‘unknown unknown’ into a ‘known unknown’.
      Chapman could have ‘trained’ his model, i.e., cherry picked low and high rate constants to generate a curve that better fitted reality. Doing so would have provided no impetus for exploring nitrogen/sulphur/halogen chemistry.

      • David Young

        Doc, I agree with your point. Tuning can be dangerous if it delays fundamental advances. The problem is that chaotic processes require approximate models that have to be tuned.

      • To David: This “chaotic component” is nothing more
        than a confession that someone does not understand the inherent
        mechanics or driving forces…..all chaotic and “we don’t understand nothing, mates”…. we are the donkey in front of the grandmother wall clock…..
        pendulum left, pendulum right….all CHAOTIC and “we only see an
        absence of climate stability” – as they call it….
        whereas, if the pendulum did not swing anymore, THIS would be
        an absence of climate stability, chaos, and not the other way around….
        JS

    • AGREE!
      Put bluntly, from Brignell’s “Number Watch” site, in The Laws section:

      The law of computer models

      The results from computer models tend towards the desires and expectations of the modellers.

      Corollary

      The larger the model, the closer the convergence.

  8. Whew! If predicting the future is hard, predicting the unpredictable is going to be a BEAR!

    I honestly believe that general climate models can be very valuable tools, but there should be more angles of attack, as you mention, drawing on various fields, and less of the collective effort that tends to reinforce shared errors of assumption.

    • Actually, the between-the-lines conclusion here is that GCMs have gone so far down the road to impenetrable circular complexity that it would be far more useful to torch them and start over with analyses of real-world cyclical events and patterns, and build up from there. Cut our losses, start over with actual competent models built by actual competent modelers, not Jackasses of All Sciences, Masters of None.

      GCMs have eaten far more than their share of money and time, and are now irredeemably associated with futile and disastrous policies and politics.

      • To clarify, real-world regional cycles and patterns, with emphasis on V&V. No “leaps over the known unknowns” into misty generational projections.

  9. Doug Badgero

    “Impenetrability of the model and formulation process; extremely large number of modeler degrees of freedom”

    This, coupled with poorly constrained empirical data as inputs, means the GCMs are little more than expensive examples of overfitting, IMO. This is further supported by their inability to predict future climate states.

  10. Kyoto declared CO2 to be THE cause of global warming. The energy we consume annually emits enough heat to raise the atmospheric temperature by 0.14°F, but due to cooling from glacial melting and photosynthesis, and heating of the earth’s crust, the temperature rise was limited to ~0.04°F for the previous 10-15 years. CO2 may have some effect, but until the models study and include the separate and joint effects of heat and CO2, they are useless.

  11. John from CA

    Has anyone bothered to check the US technology transfer program to see if the climate model(s) already exist? We’re likely to find numerous models in use by the DOD etc. I believe I read a DOD suggestion that all US climate research could be centralized, which would allow for centralized verification of the research data etc.

    Teaming up with the DOD and DOE has a number of other advantages.

    • John from CA

      Well, if I’d taken the time to read the support material I wouldn’t have made this comment.

      from William Brinkman, Comments from the Director, Office of Science
      page 24

      DOE Provides End-to-End Contributions for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)
      Community Earth System Model: DOE-funded major upgrades include:
      – Community Land Model, terrestrial carbon
      – Atmospheric aerosols, fast chemistry
      – Sea-ice and land-ice
      – Ocean

  12. What can we learn from GCMs?
    Take the literature – “Scafetta’s harmonic model vs. all GCMs” – and everybody can clearly see that all GCMs are mistaken,
    nonsense, a waste of resources and, soon, after 17 years without a temperature increase, clear proof of the fallacy of CAGW……
    Poor fools, who give them any credit at all….
    JS

  13. I see no mention of what I consider the most important use of models, which is exploring alternative hypotheses. This is simply not being done, because they are trying to perfect a forecasting tool for policy purposes. There is no basic science being done with the GCMs.

  14. Dr. Curry, would it make any kind of sense to ‘start from scratch’ on a new comprehensive climate model, with a team of engineering and statistical modelers coupled with unpolarized climate scientists? In some ways like the BEST project, but for a new climate model?

    • Peter Davies

      It would make a lot of sense indeed. We have plenty of engineers and statisticians, but where do we find some unpolarised climate scientists? :)

    • David Young

        Kip, I’m not Judith, but based on my experience that would be an excellent idea. If you want to do accurate sensitivities it may be a requirement. Just running two simulations with different values of a single parameter is inaccurate and prohibitively costly. The problem here is that people are usually rewarded for generating more and more runs and adding more and more physics, not for careful work to understand the models.
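        A toy Python illustration of the cost/accuracy problem described above (the “model” here is an invented scalar function with added noise standing in for internal variability; nothing is assumed about any real GCM): two runs per parameter, and the finite-difference estimate is only trustworthy in a narrow window of step sizes.

        # Naive two-run sensitivity of a noisy model output to one parameter.
        import numpy as np
        rng = np.random.default_rng(0)

        def model(p, noise=0.05):
            """Pretend model: smooth response plus 'internal variability'."""
            return np.exp(p) + noise * rng.standard_normal()

        p0 = 1.0
        true_slope = np.exp(p0)                  # analytic sensitivity at p0 (~2.72)
        for dp in (0.5, 0.1, 0.01):
            est = (model(p0 + dp) - model(p0 - dp)) / (2 * dp)   # two runs per parameter
            print(f"step {dp:5.2f}: estimated sensitivity {est:6.2f} (true {true_slope:.2f})")
        # Large steps bias the estimate, small steps amplify the noise, and the cost
        # is two full runs per parameter.  Adjoint-based approaches, of the kind
        # treated in the Biegler et al. volume mentioned earlier in this thread,
        # recover all parameter sensitivities for roughly the cost of one extra solve.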

      • Are there any general results in this literature about tradeoffs between model complexity (appropriately defined) and model performance (accuracy, mse, or however defined)?

        My intuition is that there would be.

        The notion that making models more complex makes them better is counterintuitive to me, and also contrary to the historical track records of different macroeconomic models of national economies.
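        There is at least a simple statistical version of that trade-off. The sketch below (Python, with polynomial degree standing in for “model complexity”; the data are synthetic) shows the usual pattern: in-sample fit keeps improving with complexity while out-of-sample skill typically does not.

        # Complexity vs. performance: fit polynomials of increasing degree to noisy data.
        import numpy as np
        rng = np.random.default_rng(1)

        truth = lambda x: np.sin(2 * np.pi * x)
        x_train = np.linspace(0, 1, 15)
        x_test = np.linspace(0, 1, 200)
        y_train = truth(x_train) + 0.2 * rng.standard_normal(x_train.size)

        for degree in (1, 3, 6, 9):
            coeffs = np.polyfit(x_train, y_train, degree)
            fit_err = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
            pred_err = np.sqrt(np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2))
            print(f"degree {degree}: in-sample RMSE {fit_err:.3f}, out-of-sample RMSE {pred_err:.3f}")
        # In-sample error falls monotonically with degree; out-of-sample error usually
        # bottoms out and then worsens, which is the formal content of the overfitting
        # worry raised elsewhere in this thread.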

    • John from CA

      It might make more sense to do an analysis of the models that currently exist in various research institutions that specialize in one or more aspects of climate and then have them create algorithms which can then be imported into a more complex model.

      Or, abandon the entire exercise and focus on industrial solutions that are cost effective and save taxpayers money.

    • Yes, such suggestions have been made but unfortunately not acted on.

      • John from CA

        SourceForge (open source program development) search results:

        – CDAT (Climate Data Analysis Tools) is an open-source, Python-based environment for scientific calculations and graphics with focus on the needs of climate modelers. It is coordinated by the Program for Climate Model Diagnosis and Intercomparison, LLNL

        – The NCPP (National Climate Predictions and Projections) Portal goal is to develop tools and services to facilitate the use of climate model global datasets in the regional and local scale, including regridding, downscaling and conversion to GIS format.

        – The Earth System Modeling Framework provides high-performance software infrastructure and superstructure for the construction and coupling of climate, weather, and data assimilation applications.

        – An equilibrium climate modelling tool, for modelling the effects of greenhouse gasses. This software is built in PHP, requires no database and can be installed on any PHP enabled webserver. A command line version is also included.

        – The Exclaim 2.0 viewer is intended to allow non-specialists easy access to hydrological, socio-economic and environmental models and to allow easy assessment of the effects of landuse change and climatic variation.

    • Kip, good idea, although it has failure built into it as an idea. Once non-ideologues with experience of modelling come to realise they are trying to model a coupled non-linear chaotic system, they will tell you it can’t be done and that you are p**ssing away your money. Then where would we be?

  15. Rob Starkey

    Judith noted– “A tension exists between spending time and resources on V&V, versus improving the model. “

    That seems ALWAYS true in model development. Imo it is a development process.

    Developers get to continue to “play” with the development model until it has to go through the validation phase prior to delivery to a “customer”. In the case of a climate model, the customers are those using the output of the model to make judgments/decisions for government policy.

    Prior to delivery to a customer the model should be able to demonstrate specifically what criteria it is designed to be able to forecast within what margin of error over what timescales.

    For climate models or extended weather models the list of criteria is not very long. The models are very complex to develop, but the list of expectations from users is pretty short.

    That is also one of the issues with GCMs. They are not designed specifically to predict those criteria that are important to the formation of government policy. If they were, their development process would potentially be easier.

  16. Where do I begin:

    Never does the climate pass twice the same space.

    Climate has periods of stability and then abrupt change.

    The ability to predict is science; projections are guesstimates, and worthless.

    Money is better spent predicting weather out further than two weeks than imagining what might be a hundred years from now.

    All models are wrong; the longer they are extended to the future, the more wrong they become.

    Are there other fundamentals I have overlooked?

    • And the period of stability may be useful for predicting abrupt change :)

      That is an interesting characteristic of non-linear dynamic systems. You may not be able to predict the degree of change, but if you can isolate change points you can begin to unravel parts of the puzzle which will unveil new parts of the bigger puzzle. Learning what you don’t know is better than thinking you know what you don’t.
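      A toy Python example of isolating a change point (synthetic data, not a climate series; the detector is the classic cumulative-sum statistic, chosen here only for brevity):

      # Locate an abrupt shift in the mean of a noisy series with a CUSUM statistic.
      import numpy as np
      rng = np.random.default_rng(2)

      n, shift_at = 400, 250
      series = rng.standard_normal(n)
      series[shift_at:] += 1.0                    # abrupt shift in the mean

      cusum = np.cumsum(series - series.mean())   # cumulative sum of deviations
      estimated = int(np.argmax(np.abs(cusum)))   # its extremum marks the likely change point
      print("true change point:", shift_at, " estimated:", estimated)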

      • Peter Davies

        Capt said “Learning what you don’t know is better than thinking you know what you don’t.”

        Classic line :) You would be great to go fishing and chew the cud with.

        The trick is to have an open mind, simply watch what is happening, work out what relationships may exist within the data, and come up with hypotheses for everyone to falsify – if they can!

      • “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

        Attributed to Twain.

    • I started my professional career as a forecaster for a telephone company. We were forecasting such simple things as population, the number of new telephone numbers required, etc. I came across a list of the 10 laws of forecasting (I wish I had filed that list better). Some of those laws are:

      – If you don’t forecast well, forecast often (I have noted how many different projections have been issued by the IPCC as a strong indicator that they don’t “forecast” well)

      – Give them a date or give them a number; never give them both.

      – Those who live by the crystal ball soon learn to eat ground glass.

      Models are fun. To get paid to play with models is even better. The main thing models seem to teach us is that there is much we don’t know that we don’t know. If only those involved with the GCMs would realize this and not rely upon the models to call for massive upheavals to our economic and social systems.

  17. When the best models of the 20-odd under development by various groups can accurately reproduce the last 50 years, for which we have very good high-resolution data, and accurately predict 5 future years, then we will be getting somewhere. Hopefully we can get there in the next 5-10 years, so that we can begin to make extremely costly policy decisions that world leaders will support.
    We do know for certain that, business as usual, a billion people are going under the sea. We desperately need to know when. Only models will be able to tell us; that is the nature of physics.

    • “a billion people are going under the sea”? That’s an absurd statement, partly because the data to seriously predict a sea-level rise which would displace a billion people in a short space of time does not exist, partly because people are not stuck in place like barnacles. Humans have always relocated as conditions changed and will continue to do so. Any retreat from rising sea levels will take place over many decades and be related to many other changes, e.g. in location of economic opportunities.

      • John from CA

        NASA is predicting a 1-foot sea level rise by 2050. Assuming it’s even that much, plenty of time to build the ark or take one step back /sarc

      • The biggest problem with this is that the sea level is no longer rising.

      • John from CA

        Herman Pope,
        That isn’t completely accurate. It depends on the region and ENSO conditions.

        NASA has done a great job with their Visualization Studio and I really like their Eyes On The Earth 3D effort but feel that Climate Scientists should not be allowed to use the color Red. It would also be helpful if the visualizations didn’t key on extreme events like the 2007 El Nino which are misleading in the context of a dynamic system.

        NASA Goddard’s Scientific Visualization Studio
        http://svs.gsfc.nasa.gov/

        Global Climate Change
        http://climate.nasa.gov/

        Eyes On The Earth 3D
        http://climate.nasa.gov/Eyes/

      • John from CA
        I am talking about ocean water volume and land ice volume.
        Local sea levels can be different because the land is going up or down.
        I am talking about overall ocean level. It is dropping.
        Leap Second Data does show this to be true.
        http://popesclimatetheory.com/page28.html

    • Latimer Alder

      @Ross Cann

      You state

      ‘We desperately need to know when’

      Why do we desperately need to know this? If it happens, it will be a very slow process. 1 foot by 2050 is about an inch every thousand days. It won’t make an iota of practical difference to know the rate more closely than this.

  18. Obviously I mean the coastal areas where they now live. The question is how many decades. The last time CO2 was as high as it is now was 20 million years ago and sea level was 200 feet higher than it is now. That is a fact.

    • Peter Davies

      There is already enough water to cover all land (except the highest Himalayan peaks). The tectonic plates, which consist of all the solid matter encrusting the Earth’s mantle, are continuing to shift, and the oceans are located in the low-lying areas.

      It will be movement in these plates that will determine where the ocean stands with respect to the current continents, NOT weather or climate, which just recycles water and gases between the various storage media of atmosphere and oceans.

      • John from CA

        Very interesting point if we consider that the Earth was a different shape and rotated at a much faster rate 20 million years ago, but it doesn’t account for ice age transitions and the related dramatic falls and rises in sea level.

        At 15k BP we had a sea level 140+ metres lower than today at the Bering Strait. It’s highly unlikely that was due to plate tectonics.

      • “At 15k BP we had a sea level 140+ metres lower than today at the Bering Strait. It’s highly unlikely that was due to plate tectonics.”
        This was due to water being on land in the form of ice.

    • The assumptions which underlie this thinking are so many and so thick they would strangle an elephant. Any evidence to support it, unfortunately, was last seen drifting off in the clouds.

    • Hector Pascal

      For historic sea level, google “sequence stratigraphy”. The Wikipedia article gives a useful introduction and has a diagram showing both Hallam’s and Exxon’s sea level curves. Maximum sea levels were a little more than 200m above present level, not enough to “cover all land (except the highest Himalayan peaks)”.

      The high frequency variations are related to orbital variations. The (much greater) sea level increase from the Permian to the Cretaceous, and subsequent decline is caused by changes in ocean basin volume. Volume change is a function of sea-floor spreading rates and the ratio of hot (expanded) to cold (contracted) ocean floor basalt.

      • Peter Davies

        Hector you said “Maximum sea levels were a little more than 200m above present level, not enough to “cover all land (except the highest Himalayan peaks)”.

        If all the land is seen as it was when it formed Gondwanaland and Laurasia, you will see that the surrounding ocean is quite capable of covering it, by more than 2.5 km in fact.

        (http://en.wikipedia.org/wiki/Earth)

        You are assuming that water will be coming from thin air and raising the oceans more than 200 metres on our present land formation.

      • John from CA

        Variations in the Earth’s Orbit: Pacemaker of the Ice Ages

        A model of future climate based on the observed orbital-climate relationships, but ignoring anthropogenic effects, predicts that the long-term trend over the next seven thousand years is toward extensive Northern Hemisphere glaciation.

        http://www.sciencemag.org/content/194/4270/1121

      • Hector Pascal

        @Peter Davies (below)

        “You are assuming that water will be coming from thin air and raising the oceans more than 200 metres on our present land formation.”

        You should try reading what I wrote. I made no such assumption. The volume/area of continental crust is fixed, and the freeboard with respect to oceanic crust is a function of the difference in density. It is known as isostasy. Feel free to search for the Pratt or Airy models. How the continents are arranged makes no significant difference to the area of continental crust, apart from crustal thinning at divergent plate margins.

        The 200+ metres comes from the geological record for the Phanerozoic. Part of the water making up that volume is from water that is currently stored on land as ice. The remainder is water displaced from the oceans during periods of rapid sea-floor spreading, i.e. following breakup of the super continents.

        I don’t know where you get 2.5km of flooding from. It’s definitely not from the geological record.

      • Peter Davies

        Hector it seems that we have indeed been talking at cross purposes and I apologise if this has annoyed you. You have introduced another aspect to the study of ocean levels that seems to be based on more recent (less than 20 million years) paleontological studies whereas I have been referring to the much longer timescales of plate tectonics.

        My original point to Ross Cann was trivial, in that I was merely referring to the fact that, given the relative volumes of water and land on the Earth, the land would be almost wholly covered by the water already in existence. I never said that the land was ever flooded by 2.5 kilometres, only that it could be. Please refer to my Wikipedia reference and look at footnote 12 if you want to see the basis of my post.

        Obviously if the Arctic icecap melts there would be no overall change in ocean levels, but even if all the ice on land were to melt at once, I am not sure that the volume of water in the oceans would change that much in terms of the figures of relative land and ocean mass shown at footnote 12.

      • Hector Pascal

        Peter Davies

        We are indeed talking at cross purposes. Thank you for your clarification. I follow that your point was illustrative of the relative volumes of water in the oceans vs. the atmosphere.

        I do understand plate tectonics. As an undergraduate, my geophysics lecturer was Prof. Fred Vine. Fred wrote the paper (Vine & Matthews 1963) which demonstrated that geomagnetic reversals of sea-floor basalts supported the theory of sea-floor spreading, one of the foundations of plate tectonic theory. Fred is an all-round good egg, and his lectures were models of clarity and organisation. He was an inspiration to me.

      • Hector Pascal

        Answering another point. The plate tectonic signal from the Exxon sea level curve shows sea level rising from about present levels in the Late Carboniferous to 200m plus in the Cretaceous (about 200 million years), and falling to present level over the following 100 million years.

        Sea floor spreading rates are the fundamental control on sea level. Glaciation and Milankovitch cycles are 2nd/3rd order controls.

      • Peter Davies

        Thank you for your contribution Hector. This type of thing is most interesting and I agree that good university lecturers are few and far between.

  19. incandecentbulb

    DOE, EPA… These are modern-day versions of the Benedict Arnold story.

    • Your view of Benedict Arnold depends on whether you think the colonial revolutionaries were correct to take up arms to overthrow a legitimate government or not. George Washington swore an oath to the Crown and then broke it, yet you do not think him a traitor.
      The DOE and EPA serve legitimate functions; the USA would be far worse off without an organization like the EPA. One could argue that the pendulum has swung too far in the case of the EPA in all areas, but you do not amputate your leg for an ingrown toenail.

  20. The only true way to do V&V for a climate model is to wait for the future to unfold and see how it did. No amount of testing on past climate can give confidence in a future projection, given that climate is entering a regime not seen before. How would you propose to do V&V any other way that gives absolute confidence about the future projections for 100 years? It is a unique situation for climate models, not faced in engineering, that only the future holds the truth.
    Having said that, the problem is not as daunting as it seems, because a century of climate change is only 10% of the annual season change in most parts of the world, or 10% of the difference between tropical and polar climates, and the GCMs can simulate these differences quite well, so they are within their standard operating range for a 3 degree rise in temperature. I suspect that the confidence is based on this view of climate change as just a perturbation to the earth system, and V&V of current climate suffices for future climate.

    • Riiiggghht. And what, pray tell, is the ongoing “V&V of current climate” to which you airily refer? GCMers are desperate, as far as I can see, to avoid exactly that, or anything vaguely resembling it!

      • They evaluate the GCMs against surface and upper air observations and analyses, and against satellite data, and publish their results. What else do you want?

      • Latimer Alder

        From the pollies and the general public’s perspective, the only thing that really matters is the frigging temperature. It is (supposedly) a ‘Global Warming’ crisis, remember.

        The alarmist thesis is :

        Temperature goes up a wee bit, thermageddon results, wipe out of all known civilisation and the poor polar bears too. Sob, End of Act 3. Let wailing and gnashing of teeth begin.

        So what we all want to know about is the temperature. That is the absolute numero uno key variable. Everything else is ‘nice to know’.

        And we have already got a rudimentary way of observing the GAT year on year. With huge faults admittedly, but the infrastructure and the processes exist already. All we have to do is remove the process from the hands of the entirely untrustworthy CRU and chums, do it honestly and transparently and we are up for a real test of whether the models are any use at all.

        Easy to do. For each model, publish the prediction of what next year’s GAT will be in advance. And then we can all check it when it occurs.

        Example: Prediction +0.020C, reality 0.019C = good job. Prediction +0.020C, reality -0.012C = bad job.

        Easy to understand – even for the least educated person…and most convincing if it is right. Do this for a few years and we get to see whether the runners and riders are any frigging good or not. And, in a multi-model field, which ones are really crap. And which are better. (A toy version of this scoring appears after this comment.)

        But modellers have an inbuilt aversion (nay, horror) of making any such predictions. And they are brilliant at coming up with a lot of handwaving arguments about why such things aren’t possible, or would be invalid, or would divert the science, or mislead the public, or they haven’t got the resources, and anyway it’s time for their annual break in Cancun or Durban, or any other pile of old BS to avoid actually putting their much-prized prediction cojones on the line and being tested publicly.

        But when it comes to predictions 50 years out, model simulations that cannot be done for one year mysteriously become ‘experiments’. The results become ‘data’.

        And other gullible fools then feed this data into yet further untested and unvalidated models and – to nobody’s surprise – the results (whatever they may be) are universally ‘consistent with the output of climate models’, indicate that ‘urgent action is required’ and that ‘further work is needed’. And so the self-perpetuating gravy train continues.

        But where in this whole circularity of academic backscratching is the public interest represented?

        We are the ones who pay for the whole frigging boondoggle. Effectively we are employing the climatologists to do the best professional job to advise us on the future climate. I see very little evidence of the best professional practices being used…every time we look closely at any of their work we find gaping holes…sufficient to get one severely reprimanded or banned for life from any true ‘profession’. And a huge resistance to change anything from their own cosy little ways, increasingly ill-supported by the failing mantra ‘We are Climate Scientists, Trust Us!’

        OK – I’ll start trusting you and your frigging models when you start publishing your predictions for next year’s GAT in advance. And get them right to within 10%, 10 years in a row. Then we’ll have some evidence that you do know what you’re doing, rather than being just a bunch of amateurs buggering about with Fortran.
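        For what it is worth, the bookkeeping asked for above is trivial to automate. A toy Python version follows, using the two example numbers already given and reading “within 10%” as a tolerance relative to the observed value (that interpretation is an assumption):

        # Grade published year-ahead GAT predictions against what was later observed.
        cases = [(0.020, 0.019), (0.020, -0.012)]          # (prediction, reality), degrees C

        def grade(pred, obs, tol=0.10):
            """'good job' if the prediction lands within 10% of the observed value."""
            return "good job" if abs(pred - obs) <= tol * abs(obs) else "bad job"

        for pred, obs in cases:
            print(f"prediction {pred:+.3f} C, reality {obs:+.3f} C: {grade(pred, obs)}")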

  21. I’d be fascinated to know, given your very thorough detailing of possible sources of uncertainty in GCMs and of other approaches to analyzing the very complex variables involved here, of your view of the views developed in http://rockblogs.psu.edu/climate/2012/02/responsible-skepticism-lessons-learned-from-the-climate-disinformation-campaign.html.

  22. “V&V is overkill for a research tool; inhibits agile software development”

    I have yet to see code in climate science where the programmer would be at the level of writing tests first, etc. (agile development), never mind V&V. That sounds to me like someone heard a buzzword and thought it would make a good excuse. In the code I’ve seen, the programmer doesn’t have the skills it would take – that level of programming requires really high-level programming skills and a much more disciplined mindset.

    “A tension exists between spending time and resources on V&V, versus improving the model.”

    If you are just casually writing code without at least a high level of ‘rigor’, then you aren’t improving the model.

    As a programmer it has been a very exciting 30 years. One thing you really notice is when something that was not doable becomes doable. You never need to wonder if it is working when breakthroughs happen because it is so obviously a breakthrough. You saw it all over – from search with Google, nuclear reaction simulation, speech recognition, and automated driving. All of these were very complex problems, had many novel approaches, but most importantly they were obviously verifiable. That is super important when developing software.

    On the other side, you have things like financial models and climate models. They are orders of magnitude more complex, and the inputs are not even all known, never mind understood. It is then almost impossible to verify the model even if you did have it right. Unsurprisingly, the predictive power, even on historical data, is dismal, and people wonder out loud whether it means anything.

    There is zero chance that you can rely on these models for anything from a computer perspective. They are still worth doing, and one day they will probably get there – but for confidence, if 20% of it comes from models, then there is a maximum possible of 80% confidence overall.

    • Robin, do you think it would be easier to predictively model the outcome of a baseball game or horse race (where there are huge, definitive performance data-sets readily available), or the Earth’s climate?
      This is not a joke, I am asking in all seriousness.
      My guess is that predicting baseball would be far easier, and, given the possibility of outcome betting, lucrative.
      A reasonable predictive baseball score simulation would wipe out the bookies in a very short period of time; yet they are still doing business.
      My guess is that predictive programs for non-linear systems are a very long way off.

      • I think in general sports are never going to get better than general odds, due to not being able to quantify the most important inputs, like ‘how badly do you want to win today’. Then there are injuries. Then there is cheating.

        To answer your question though, I think baseball would be easier if you are thinking a season or series – just because the more matches you have the more likely the statistically better team is going to come out on top.

        Predicting a chaotic system without all the inputs known and controlled is much like predicting the future, literally. People doing the programs often get lost in the task of modelling it and miss the big picture.

      • k scott denison

        Doc, you forget that the BOOKIES are modeling the outcome of sporting events. So your analogy doesn’t work, as the requirement to beat the bookies is to out-model them. Given that they do this professionally, and have been for years and years (first in their heads, now with computers), anyone would be hard pressed to out-model them.

        Of course, some do in that there are professionals who make their living off of sports betting.

      • k scott denison

        Doc,

        ps – watch the movie “Moneyball”, the true story of how the Oakland A’s used computer modeling to build a winning baseball team cheaply.

  23. [Sorry, sent off previous comment too quickly!] The link is referred to by Peter Gleick–yes, that Peter Gleick–in the blog you refer your readers to, All Models Are Wrong. See the amazingly (if perhaps inadvertently!) timely exchange between the blogmaster and Gleick at http://allmodelsarewrong.com/all-blog-names-are-wrong/. The arguments in the first link should surely be considered in light both of Gleick’s comments linked here and what Gleick has now admitted to doing.


  24. From Dr. Curry’s post:

    The focus is now on understanding how quickly the climate is changing, where the key changes will occur, and what their impacts might be.

    Given that climate change is completely normal, did they express a bias as to whether they believe the current period has been abnormal, is abnormal, and will be abnormal, and whether they believe it is warming or cooling going into the period of remedy they will surely recommend? And, having settled that science, apparently, did they provide evidence that anything but adaptation as a remedy was in the works? My theory being that anything but full-frontal adaptation suggests complete buy-in to the GHG model of a super-heated world if we don’t get my granny out of her Chevy.

  25. In my day job I spend a lot of time heading off technology push – and that’s exactly what I keep feeling is happening with GCMs.

    The answer instead is to address the problems being faced by those interested in the climate’s behaviour, and use the most appropriate/useful techniques. For the politician I suspect that quite simple stochastic models using simulations are likely to suffice. The material variables and their relationships will probably be good enough, given the degrees of uncertainty.

    Forecasters need more sophistication, and those that want to gain insights into longer term behaviour no doubt find GCMs and other techniques helpful within limits.

    But when people start arguing about the usefulness or otherwise of climate models per se I’m reminded of the tourist seeking directions to Dublin and being told “I wouldn’t start from here”.

  26. Pielke Sr. has a post today.
    http://pielkeclimatesci.wordpress.com/2012/03/01/my-comments-on-a-new-paper-climate-drift-in-the-cmip3-models-by-sen-gupta-et-al-2012/
    Sen Gupta et al, 2012: Climate Drift in the CMIP3 Models. Journal of Climate;doi: http://dx.doi.org/10.1175/JCLI-D-11-00312.1

    The abstract reads [abstract added]
    “Even in the absence of external forcing, climate models often exhibit long-term trends that cannot be attributed to natural variability. This so called ‘climate drift’ arises for various reasons including: perturbations to the climate system on coupling component models together and deficiencies in model physics and numerics.”

  27. Judith, I am pleased to see you at least mention “Monte Carlo” techniques.
    My career experience involved building and testing a lot of financial models, mainly for mining projects of various kinds. Early on, I discovered that simply by pushing the main input variables a percent or two in one direction, we could push the model results around very significantly.

    I developed an interest in Monte Carlo simulation. We used the ‘@risk’ program (an add-in for Excel – an excellent program) and applied probability distributions to key input variables, based on engineering estimates included as a matter of course in the feasibility studies. We would then run the model for, say, 1000 iterations and examine the results.

    Depending on the characteristics of the project, the Monte Carlo results were most instructive, primarily in giving a way to estimate the reliability and usefulness of the model. And the project.

    For some high-cost projects in an industry with a volatile price history (any of the base metals, for example) the Monte Carlo results were usually an essentially flat distribution – we interpreted this to mean pretty much equal probability of any outcome. I recall, for example, one project where the CAPM NPV was $33 million, but the first standard deviation was +/- $75 million. The model therefore was not very useful. Or, more accurately, the model usefully told us that the project itself was poorly conceived and unlikely to be successful. At the least it would be horrendously difficult to finance.

    In contrast, there were other projects that had strong operating margins and steady prices (diamonds for example) that gave a very “peaky” result curve. An example was a diamond project that had a CAPM NPV of $1100 million, and the first standard deviation was +/- $55 million. This was reflecting a very sound, robust and financeable project.

    Had we not used Monte Carlo, we would not have had a means of assessing the reliability of the model. In the first instance a 1SD of more than 200% is very telling, as is the second instance, where a 1SD of around 5% attested to the quality of the project AND the usefulness of the models in analysing it. But standard CAPM would have you believe that it has already taken different applicable discount rates into account (using beta) and that no more need be done. However, it always seemed to me that knowing one project had a 1SD of the outcome value of 200% and another project had a 1SD of the outcome value of 5% tells you something very important, and needed to be taken into account.

    It seems to me that climate modellers would do well to explore (if only privately) the outcomes if they chose, say, the top 10 key variables in their models, applied probability distributions for each, and ran the Monte Carlo models. I guarantee that a lot of modellers will suddenly feel less confident about the value of their models.
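    For readers without @risk, the exercise mondo describes takes only a few lines of Python. The sketch below uses an invented toy cash-flow model and made-up distributions (none of mondo’s actual project numbers), just to show the mechanics: put distributions on the key inputs, run the model many times, and look at the spread relative to the mean.

    # Monte Carlo on a toy project NPV: distributions on key inputs, 1000 iterations.
    import numpy as np
    rng = np.random.default_rng(3)

    def npv(price, cost, volume, capex, years=10, rate=0.08):
        """Toy project model: constant annual margin discounted over a fixed life."""
        margin = (price - cost) * volume
        discount = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
        return margin * discount - capex

    results = np.array([
        npv(price=rng.normal(100, 15),      # volatile commodity price
            cost=rng.normal(70, 5),         # operating cost per unit
            volume=rng.normal(1.0, 0.05),   # annual volume
            capex=rng.normal(150, 10))      # up-front capital
        for _ in range(1000)])

    print(f"mean NPV {results.mean():7.1f}, 1 s.d. {results.std():6.1f}, "
          f"P(NPV < 0) {np.mean(results < 0):.0%}")
    # A standard deviation comparable to (or larger than) the mean is the "flat"
    # case mondo describes; a standard deviation of a few percent of the mean is
    # the "peaky" diamond-project case.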

    • mondo, +1.

      Monte Carlo work totally changed my research life and everything I thought about. As it happens, economic applications too, but of a very different kind.

      However… I wonder to what extent uncertainty in these GCMs can easily be explored by Monte Carlo techniques. I get a sense that (1) a single run with any vector of parameters drawn from a distribution is very expensive in a time sense, and (2) the dimension of these vectors is very large. Those two facts could conspire to make Monte Carlo evaluations of uncertainty impractical.

      It may be that the methods referred to earlier by David Young are better suited to these kinds of situations, but I am not familiar with them.

      • Of course, on second thought, sometimes simulation methods actually ease the curse of dimensionality, but I mostly know of that in the context of estimation, as with simulated maximum likelihood and simulated method of moments estimations, which do also produce estimates of uncertainty in the estimates.

        Since optimal control (mentioned by David Young) IS optimization, just as ML and GMM are, they might be wedded with simulation methods to produce relatively cheap (in a time sense) evaluations of uncertainty in complex models.
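        One standard partial answer to the expense problem, for what it is worth: space-filling designs such as Latin hypercube sampling spread a small number of runs evenly across a high-dimensional parameter box, so each expensive run is more informative than a purely random draw. A hand-rolled Python sketch follows (the parameter names and ranges are invented):

        # Latin hypercube design: n_runs parameter vectors, stratified in every dimension.
        import numpy as np
        rng = np.random.default_rng(4)

        def latin_hypercube(n_runs, bounds):
            d = len(bounds)
            strata = np.tile(np.arange(n_runs), (d, 1))              # one stratum per run per dimension
            u = (rng.permuted(strata, axis=1).T + rng.random((n_runs, d))) / n_runs
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        bounds = [(0.5, 1.5),   # e.g. an aerosol scaling factor (hypothetical)
                  (0.1, 0.9),   # a cloud parameter (hypothetical)
                  (1.0, 5.0)]   # an ocean mixing coefficient (hypothetical)
        print(latin_hypercube(n_runs=10, bounds=bounds))             # 10 runs covering the 3-D box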

    • I think if I were going to look at financial industry models for lessons on confidence, Monte Carlo wouldn’t be the first thing I’d look at : ).

      I find the climate science modelling task to be similar to what financial models are doing, in a programming sense. The difference is that the financial industry has people who have a much better grasp of the math (world class), much more supervision, and they still tend to get the answers they want to get. When you have unknowns that matter in your input, the output tends to do what you want.

      It would be interesting to see where financial models break down as they scale up; I assume it is something like when the error bars overtake the signal, and the error bars are being ignored. To be fair to finance, their system is self-disruptive as well, which makes it all the harder.

      • Robin,
        I think you misunderstand my point. What I am trying to say is that the value of the model is a direct function of the nature of the underlying problem. In financial models, the key determinant is the volatility of the operating margin. If a business has stable pricing and wide operating margins, then the volatility of the margin is very small. That will be demonstrated in a Monte Carlo analysis by a “peaky” results curve, with low standard deviations.
        In contrast, a business that has volatile pricing and/or narrow operating margins will have a very volatile margin, which much of the time may be negative. Run a Monte Carlo on that sort of model, and you will get a flat outcome, meaning pretty much equal probability of any outcome. The standard deviations will be very wide relative to the “result”.
        I am sceptical/cautious about using standard CAPM for projects with volatile margins or cyclical revenue streams. In effect Monte Carlo as described above is a direct measure of the reliability of the model, in the situation it is being applied to. In the case of the low volatility company, the model is reliable. In the case of the company with volatile margins, it is not. As a business executive, wouldn’t you want to know that?
        Another problem with CAPM is that beta, the volatility measure used to derive the discount rate, is based on the volatility of the market price of the company’s securities, which is a secondary measure. You can measure the volatility of the revenue stream directly, and also the volatility of the cost stream, so why use a secondary derivative that is affected by other issues.
        If Monte Carlo shows flat kurtosis and wide standard deviations, I would counsel a board to be very cautious in approaching that project. Certainly it should be financed by equity rather than debt for example. If it were to be financed at all. Most boards I worked on placed more emphasis on payback, cash flow volatility analysis, and competitive cost position than the CAPM NPV. For good reason in my view.
        In relation to climate models, they are just numerical models too, and I am sure that my point is sustained. If known probability distributions were put on key inputs to the model, and a Monte Carlo were run, it would (I am sure) show an essentially flat result with wide standard deviations. Thus, we would know that those models are actually not very useful.

        My take-home message is that models are applicable in some situations, and not in others.

      • Mondo:

        I do understand what you are saying, and don’t disagree at all. I think, as in many systems, the financial models work well in small systems where the inputs are approximately direct and accurate. As they scale up, error eventually becomes the dominant signal, which leads competent people with good ideas to underestimate larger risks (like, say, with CDSs). Not to blame the modelers, of course; they were just getting the results they were asked to get, I’m sure – just that once uncertainties start to dominate, the model’s value is essentially zero.

        “My take home message is that models are applicable in some situations, and not in others”

        For sure – tiny uncertainties on input or process understanding lead to chaos upon scaling. People have a hard time letting go of something that worked so well on a smaller scale.

        btw, I think you also confirmed my point that programmers in the financial industry are world class with the nuances of math : ).

  28. Paul Vaughan

    ” perception that complexity = scientific credibility; sheer complexity and impenetrability, so not easily challenged by critics”

    ” slow to incorporate new scientific insights or understanding”

    What’s most important: Can they reproduce EOP?
    Response: Silence.

    • Peter Davies

      EOP = Earth and Ocean Physics?

    • Peter Davies

      No; Earth Orientation Parameters?
      Maybe both? :)

      • Paul Vaughan

        EOP = Earth Orientation Parameters
        (not to be confused with earth orbital parameters)

        For example, see concerns I recently shared here:
        http://judithcurry.com/2012/02/28/jc-interview/#comment-178704

        For those lacking background, I give sketches here:

        1. Solar, Terrestrial, & Lunisolar Components of Rate of Change of Length of Day
        http://wattsupwiththat.com/2011/04/10/solar-terrestrial-lunisolar-components-of-rate-of-change-of-length-of-day/

        2. Semi-Annual Solar-Terrestrial Power
        http://wattsupwiththat.com/2010/12/23/confirmation-of-solar-forcing-of-the-semi-annual-variation-of-length-of-day/

        In these online climate discussions there’s too much exclusive focus on “global temperature” and WAY too little attention to wind & hydrology.

        People think the “global temperature” should simply follow the solar cycle and this is just nuts on so many levels if one takes the time to ACTUALLY KNOW the cross-scale morphology of the data. People think if they just average over 11 years, they’ll have done all they have to do to know solar-terrestrial relations. It doesn’t work that way and the reason is simple: The YEAR is dominant on Earth. [ See p.4 here: http://wattsupwiththat.files.wordpress.com/2011/10/vaughn-sun-earth-moon-harmonies-beats-biases.pdf ] People think the solar cycle effect should be uniform across the globe; the data say this abstract conception IS DEAD WRONG. Might as well claim 1+1=3, up is down, left is right, and “there are 4 Lights”. The data show with crystal clarity that the effect is on GRADIENTS.

        So if the models can’t reproduce AAM (Atmospheric Angular Momentum) & EOP, they’re simply wrong and unworthy of public attention. I see no indication that climatologists are serious about getting the models right. If they were serious, they’d be all over this and it would be aggressively front & center in the climate discussion every day from now until the day of cracked ENSO DNA.

        The deafening silence we get from the climatology community on crystal clear EOP & AAM solar-terrestrial encoding [ http://wattsupwiththat.files.wordpress.com/2011/10/vaughn1.png & http://wattsupwiththat.files.wordpress.com/2011/12/image10.png ] is more than a little creepy.

        Maybe the enlightened few climatologists feel like deer in the headlights, frozen terrified in fear of the stunningly monstrous workload implied (necessary to correct abstract conception & modeling).

        As for the less enlightened climatologists: It’s clear they simply don’t have a clue. Their functional numeracy & judgement is severely inadequate. (Since I’m not on the funding train I have no reason to extend collegiality here – quite the contrary is demanded by the circumstances, very unfortunately.)

  29. Arcs_n_Sparks

    Well, DOE has been funding LLNL for their “Program for Climate Model Diagnosis and Intercomparison.” That sounds like what the desired goal here is: assessing the quality of, and between, models. Is it producing any results???

  30. There is some pretty rigorous analysis going on in the validation and verification area by Ivo Babuska from University of Texas at Austin. Ivo is one of the founders of the finite element method for solution of partial differential equations and should be paid careful attention to because he actually does rigorous proofs. One of the jokes that goes around is that “God gave the ten commandments to Moses through the burning bush, and he gave finite element theory to man through the Babbling Bush.” Check it out if you are interested in rigorous analysis of validation.

  31. Chief Hydrologist

    I have posted somewhat on the instability of the Navier-Stokes partial differential equations of fluid motion – the solutions of these diverge in unpredictable ways as a result of the range of feasible values for input variables (a toy illustration follows at the end of this comment). This implies that part of the ‘validation’ of models should involve a systematic evaluation of varying inputs to explore the range of possible solutions. Until this is done we are entitled to suggest that the plausibility of model projections into the distant future is determined qualitatively on the basis of a posteriori solution behaviour. That’s right – they pull it out of their arses, which have been programmed up to the kazoo with esoteric climate knowledge that you wouldn’t understand.

    Perhaps more interesting is the behaviour of the Earth climate system itself, which I should perhaps stop calling chaotic in the sense of the N-S PDE – it is more of a dynamically complex system. Here it is the influence of control variables – CO2 or orbits, for instance – driving interactions of ice, cloud, dust, biology, atmosphere, heliosphere and pedosphere that results in sudden departures from a state. A phrase that caught my attention was of tremendous energies cascading through powerful mechanisms. This is far from a decorous ‘random walk’ on a Sunday afternoon. It is more like coming to the end of the path on a cliff top – and falling off. I am not saying that stochastics don’t work – just that they don’t tell you when you are on the cliff top and safe, and when you are a bloody mess at the bottom.

    ‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.

    Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.

    The new paradigm of an abruptly changing climatic system has been well established by research over the last decade, but this new thinking is little known and scarcely appreciated in the wider community of natural and social scientists and policy-makers.’ NAS (2002) Abrupt Climate Change: Inevitable Surprises

    As Hurrell and colleagues (A UNIFIED MODELING APPROACH TO CLIMATE SYSTEM PREDICTION by James Hurrell, Gerald A. Meehl, David Bader, Thomas L. Delworth , Ben Kirtman, and Bruce Wielicki) have suggested – the modelling problem becomes a whole lot harder. ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’

    The problem becomes one that requires initialisation, about 3 orders of magnitude more computing power and lots more – and more certain – climate data. I am not saying it shouldn’t be done – but I am not expecting to hear back from them within 30 years. ‘I am going outside now and may be some time.’

    So what we would like to see in the interim is some other methodology for predicting where the cliff top is. Perhaps it will look something like Scheffer et al. (2009) – http://www.nature.com/nature/journal/v461/n7260/full/nature08227.html – or perhaps it will be something different. As of now we are wandering about in the dark, and it could get very cold very soon.

    ‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade. And abrupt climate change may be more likely in the future.’
    http://www.whoi.edu/page.do?pid=12455

    Before some idiot suggests that I am implying a high climate sensitivity – and therefore some unspecified risk that must be dealt with by massive government interventions immediately – please take it that I am aware that climate is highly sensitive at what can be called, equivalently, ‘a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point’ (Sornette 2009), and insensitive at other times. Sensitivity otherwise is a concept that has no meaning – model-derived or not.

    Take it also that we would prefer pragmatic (read actual) solutions that meet the needs of people for economic and social development. That is a debate for every day. The true path forward for humanity is in the realisation of our common human enlightenment heritage of democracy, free markets, science, individual freedom, economic progress and the rule of law. Hi-ho bluey.

    My kindest regards,
    Captain Kangaroo
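    A toy illustration of the divergence referred to above, using the Lorenz 1963 system rather than a GCM (purely illustrative; nothing here depends on any climate code): twenty nearly identical starting states are integrated forward and the ensemble spread is tracked.

    # Ensemble of Lorenz-63 trajectories from nearly identical initial states.
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        return state + dt * deriv                      # simple forward Euler step

    rng = np.random.default_rng(5)
    ensemble = np.array([[1.0, 1.0, 1.0] + 1e-6 * rng.standard_normal(3)
                         for _ in range(20)])          # 20 members, perturbed by 1e-6

    for step in range(3001):
        if step % 1000 == 0:
            print(f"t = {step * 0.01:5.1f}   max ensemble spread = {ensemble.std(axis=0).max():.2e}")
        ensemble = np.array([lorenz_step(s) for s in ensemble])
    # The spread grows by many orders of magnitude before saturating at the size of
    # the attractor: a single trajectory tells you little about the range of
    # feasible solutions, which is the argument for systematically varying inputs.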

    • David Young

      Chief, as always you are very insightful. The sensitivity analysis I’m suggesting would help in identifying these “tipping points” into rapid climate change. This is one of Lindzen’s points that I think is excellent. “Climate is always changing” is a statement that means we must really focus on adaptation, because change is inevitable. Fear of change is a classical conservative doctrine that, ironically, has been adopted uncharacteristically by the left in the last 30 years. The primary characteristic of the left is the desire to control change to conform with ideological doctrines that are generally pseudoscientific. Freedom is a fundamental good. Leftist ideology is an excuse to reduce freedom.

  32. Captain Kangaroo

    I am going to have to stop this – I am getting a terminal case of identity crisis.

    Hi-ho Shibboleth

  33. Dr. Curry,
    Any thorough discussion of what we can learn from climate models has to include the perspective of scientific forecasting, an area of science which has been studiously ignored by the climate community. Scientific forecasting is a young field – about the same age as climate science. Its conclusions about the reliability of climate models are exactly the opposite of climate science’s. No one in climate science is even looking at the evidence put forward by scientific forecasting, evidence that has a direct bearing on issues such as V&V, forecasting and reliability.

    Until climate science comes to grips with this field of science the climate models will never have the respect of many people.

  34. Judith,

    A fascinating post. Normally, I agree a great deal with your posts, at least
    in the gist, if not always in every aspect of the detail. In this post, you
    display your extensive knowledge of Climate Science as I would expect of the
    Chair and Professor of Earth and Atmospheric Science at Georgia Tech., and more
    importantly, as the blogger of Climate Etc who has won my respect and that of
    many others for both your rigorous and broad knowledge of Climate Science and
    the Maths underlying it.

    Obviously, there is a “but” coming. And that is, I have yet to come across a
    Climate Scientist that has even a passing knowledge of Computer Science and how
    complexity is handled within that field. Something that is absolutely
    fundamental to the development of computer models in general and GCMs in
    particular.

    Let me start with an assumption: the only purpose of a computer model that uses
    number crunching by brute force, a la most aspects of GCMs, is to derive linear or
    (relatively) simple polynomial equations that allow us to predictively model
    the climate. There will never be enough computing power to create infinitely
    complex climate models at infinitesimal resolutions.

    All of the current, publicly available GCMs are written in Fortran, a language
    developed in the 1950s. This is long before we understood what was needed
    to create a computer language that allowed us to control and structure
    complexity in software. On top of the poor choice of programming language,
    climate scientists have not seen the need to document the structure of their
    GCMs, something that in the commercial world is seen as essential to maintaining
    an investment in software.

    Rather than continue with criticisms of the current GCMs, let’s move to how
    climate models should be developed. There are only two modern mainstream
    programming languages that match the performance requirements (similar to
    Fortran): C or C++. That’s just a given.

    More interestingly, how should we build scalable climate models? Let’s start
    with a simple example:

    We can start with a cubed-sphere gridding scheme at the centre of a dynamical core.
    We can use an Earth with no oceans and no atmosphere, i.e. just rocky land, and,
    using 1365 W/m2 (the IPCC value) and a statistical albedo, validate the gridded
    model against a simple Pi/4 (disk-to-sphere) model; a minimal sketch of such a
    check appears at the end of this comment. We also write a test suite that ensures
    that every code branch is exercised. The combination of the two is a V&V suite.

    This assumes a solar model consisting of a constant “parameter” of 1365 W/m2. The
    software should modularise this constant parameter so it can later encompass a model
    of TSI, which to the best of my current knowledge is not yet writeable, i.e. we don’t
    yet have a predictive solar model. We should also be able to replace this solar module
    with a time series of observed solar records.

    The principle of this is that a GCM should consist of many interconnecting modules,
    each of which could contain a parameter/constant, a time series, or a mathematical or
    computational model. Each can be reasoned about independently and tested standalone,
    and then linked together to form a GCM.

    We should be able to develop sub-grid radiative-convective models to look at
    clouds, plug them into the tropics, and see what happens without exceeding
    computational limits.

    In summary, a GCM should consist of simple, independent, plug-compatible modules
    that have test suites and validation software, with alternative modules that are
    backed by observational data. And it should all be very well documented, both as
    computing and as climate science.

    Judith, you wrote a post entitled “polyclimate”. This is something we could do.
    We could host the software at SourceForge, with the software under version
    control. SourceForge also provides web site and email list hosting, so starting
    the project could require zero funding. As we moved forward, somewhere along
    the line we would eventually need more than individuals’ home computers to run
    the GCM. At that point you would need to draft a funding proposal for use of
    a grid computer. We could easily start with something that comes in at around
    $50,000 to $100,000 to do some serious modelling.

    I’d be interested in your thoughts.

    Regards

    /ikh

    • Peter Davies

      Judith? A wiki perhaps with an agreed praxis? With volunteers to start with and build on it step by step. Climate science will benefit and the AGW debate can be sidelined until the hypothesis is falsified or not falsified by the new GCM.

      • Brandon Shollenberger

        Peter Davies, some time back I decided something like that would be a great resource for the hockey stick debate, though it couldn’t be an open Wiki. My idea was there would be individual pages on specific issues (as in, each point of contention), and each page would state the “right” answer. There would then be a “road map” of sorts which outlined how they all connected.

        Doing it well would probably involve a lot of effort, but it would give people a clear map for discussions. It would allow people to easily figure out what they agreed and disagreed about. Imagine if a conversation started with, “We would agree with each other on this issue except I think you’re wrong on point 3.A.II.”

        Of course, it would be far, far easier to do for a single issue like the hockey stick debate than for global warming as a whole. For that debate, it would be relatively easy to boil basically all disagreements down to their factual issues, and thus be able to settle them. I don’t think that could be done for global warming as a whole, but a lot of progress could still be made.

      • Peter Davies

        It would stop the current circular debates that we have, Brandon, and real progress could be made on a series of specific issues. Agreed that the wikis can’t be open to everybody, because that would just lead to argumentation.

      • Brandon Shollenberger

        Peter Davies, you may have too much faith in people. I’m confident no amount of clarity could stop circular debates as a whole. Some people are just incorrigible. On the other hand, I suspect many people would stop being dragged into those arguments (or being misled by them) if they were given a clear source like we’ve discussed. In that way, the idea wouldn’t be to stop the pointless arguments, but rather, to give a way for a new type of argument to take place.

        What I find amazing is that such a thing has never been made before. The closest there is to it is SkepticalScience, and that should tell you how bad the situation is. If I were trying to convince people there was a serious threat to all life on the planet, one of my main goals would be to set up the best source I could possibly set up for spreading information. Just imagine if the amount of effort put into sites like RealClimate and SkepticalScience was instead put into actual attempts at education.

        Give whatever reasons you like; the simple reality is the people promoting global warming concerns have failed to do some of the simplest things they could possibly do to get people to believe them.

      • Peter Davies

        Brandon, I agree with you that the AGW proponents did a poor job of selling their hypothesis, but obviously the economic policymakers have heeded what they had to say. It’s those damned sceptics who continue to doubt!

      • The meme that AGW promoters have done a bad job communicating their fear and hype is an amazingly persistent myth held by the AGW community.
        No movement in modern times has received more money, favorable press, government support, or artistic and media support than AGW.
        To claim over and over that if the communication were just somehow better those ignorant, unenlightened non-believers would start believing in the climate apocalypse borders on delusional.

    • David Springer

      “all the GCMs are written in Fortran”

      How quaint. Fortran is to software as vacuum tubes are to hardware.

      • Latimer Alder

        Fortran????

        Jeez, FORTRAN was old hat when I stopped writing Theoretical Chemistry models over twenty years ago.

        Do they still make FORTRAN compilers? I’d have thought that the last guy who knew how to do it would be well-retired by now.

      • Steve Milesworthy

        There has been an attempt in the USA to develop a model framework based on C++ called ESMF. After about 10 years of funding it has not yet been fully adopted by any of the major climate models because basically it is regarded as too cumbersome, not so well optimised, and doesn’t really add much when one is solving a highly coupled system (for which the benefits of C++ in many cases do not help).

        All the major HPC vendors are still supporting Fortran.

        If you want to suggest Java or some other more “modern” language then please wait till I’ve finished my coffee.

  35. Anyone familiar with this simple model/relation?

    IR = A + BT
    A = 204 W/m2, B = 1.93 W/m2/K
    IR = total outgoing infrared radiation

    What would be the physical mechanism for this? Linear relationship between IR and T?
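
    For what it is worth, a linear OLR relation with coefficients in this range is what Budyko/Sellers/North-type energy-balance models use: an empirical linearization of outgoing longwave radiation (Stefan-Boltzmann plus water-vapour and cloud effects) fitted to observations over the narrow range of present-day temperatures, with T the surface temperature. A hedged sketch of how such a relation is typically used in a zero-dimensional energy balance, assuming a 1365 W/m² solar constant and 0.3 albedo:

      # Zero-dimensional energy balance using the linear OLR relation above.
      A = 204.0     # W/m^2, from the comment above
      B = 1.93      # W/m^2 per deg C, from the comment above
      S0 = 1365.0   # W/m^2, assumed solar constant
      ALPHA = 0.30  # assumed planetary albedo

      def equilibrium_temperature(albedo: float = ALPHA) -> float:
          """Solve S0*(1 - albedo)/4 = A + B*T for T (deg C)."""
          absorbed = S0 * (1.0 - albedo) / 4.0
          return (absorbed - A) / B

      print(equilibrium_temperature())   # ~18 deg C
      print(1.0 / B)                     # ~0.52 deg C per W/m^2 of forcing in this toy model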

  36. “What can we learn from climate models?”

    Garbage In, Garbage Out! The world has been misled by this GIGO for decades now, and it is time for the gullible to wake up.

  37. What a magisterial survey, Dr. Curry. Kudos! Nothing left out.

    I’m not a fan of models, but agree with those who have said that if they do not produce something unexpected and counter-intuitive, they are probably just ‘shills’ for those creating them.

    There are examples of models in other fields (where data and parameters are more manageable) which have done this, and totally surprised the researchers and given them new insights.

    Until one of the climate models does this, they aren’t science so much as confirmation crutches for we-know-who. Hence the discrepancies between the model predictions and actual reality are becoming more obvious each year.

    So what is their purpose? To produce surprising new scientific insights into what happens when the pieces of the jigsaw are supposedly put together? I haven’t heard of too many examples of this. Are there some?

    Or just to make warming projections? GIGO. 2*CO2 = 3C. If you believe that, just use a Raspberry Pi and do the arithmetic.

  38. Climate doesn’t kill people, weather kills people.
    What’s the point in knowing what the average global temperature might be in a hundred years, if we’ll never know where that hurricane or flood or drought might strike?

    The best that can be hoped for is that at some stage in the future we may have enough knowledge of the hydrological cycle to be able to ‘predict’ with some certainty the chance of a flood or a drought in given regions in given seasons.
    Even then, the only thing that will best help is affluence. This is what empirical evidence shows. And empirical evidence also shows affluence is attained on the back of cheap accessible energy.

    So, to paraphrase a moron musician, “Feed the world….with cheap energy”.

    • Bob Ludwick

      Dr. Jerry Pournelle often says, accurately: “Cheap, plentiful energy is the key to freedom and prosperity.”

      What he and I have not yet come to an agreement on is why our current administration, since day one, has gone all out to restrict the supply of energy and increase its price.

  39. Tomas Milanovic

    ■Ensemble size for initial condition uncertainty is far too small

    Even though this is just a single bullet point among dozens, it opens a whole can of worms which is, in my opinion, vastly understudied.
    There are 2 aspects to this point.

    – The first one is invoked by David Young.
    It is the question of the convergence of a given numerical scheme (algorithm) on a given FINITE grid to the exact solution of the underlying PDE (partial differential equations).
    It is impossible to prove that GCMs converge to the solution of the underlying PDE, for the simple reason that they don’t.
    Much work has been done on non-linear PDEs and especially on Navier-Stokes, which is the most important dynamical equation set for climate dynamics.
    It is known that a converging direct numerical integration of N-S needs insanely high resolution.
    And even then it needs “reasonable” initial and boundary conditions – this requirement resonates with Chief’s notion that when a system is “near” (the closeness being given by the field metrics in the phase space, e.g. the L2 norm) to a special state comparable to a phase-change point, it will behave in an extremely non-linear and unexpected way.
    Now it is physically impossible to have a 3D grid over the whole Earth such that the resolution guarantees the convergence of a numerical scheme.
    Not today and not ever.

    From that it follows directly that the numbers produced by a GCM on a grid which is many orders of magnitude below the necessary resolution may be as far from the real solution of the given PDE as one wishes.
    I am absolutely convinced that the inability of GCMs to produce realistic regional predictions is directly linked to this problem.
    The fact that the produced numbers don’t look completely stupid is due to the energy and momentum constraints, which are the least that one has to impose in a GCM.
    That’s why, when one starts in a state which is real and physical and then evolves it (almost arbitrarily) under conservation constraints, the results will stay plausible, at least for a certain time.

    Now, as an ensemble is constituted of several lists of such numbers, and regardless of its size, because of the lack of convergence you are not able to tell how far each element of the ensemble is from the real solution.
    There is certainly no reason to believe that, while all the numbers are wrong, they should all stay at the same distance from the real states.
    Some may be very near and some others very far, so that an average would be meaningless.
    These problems are so far largely ignored.

    – The second one is much more serious and I dedicated a post about ergodicity to it.
    Even if one supposed that the elements of an ensemble are all near to the converged solution, and we have seen above that they are not, the most important question would be the one of the statistical interpretation of this ensemble.

    One possibility would be to consider that when the size of the ensemble increases, the distribution of the states converges to an invariant probability distribution.
    Of course there is still a huge technical problem of the metrics allowing to measure the distances between computed states.
    Rigorously, the right metric is the one for an infinite-dimensional Hilbert space, i.e. the invariant PDF has an infinity of variables.
    So one would have to find reduced, finite metrics which are computable and yet still invariant. It is during this attempt that one would have to answer, among others, the question that you ask in your bullet point: how large must an ensemble be to converge reasonably to the invariant PDF?

    But the other possibility would be to consider that there does NOT exist an invariant state distribution. The distribution would depend on the (arbitrary) selection of the initial states. In other words the system would be NON ergodic.
    This case can of course not be excluded because ergodicity is a property that some complex systems have and some others have not.
    Nobody has a clue whether our dynamical Earth system has this property or not.
    So clearly, as long as the first problem, which I call the convergence problem, and the second, which I call the ergodicity problem, are not solved, any use of “ensembles” is just an unsavory mix of naivete and wishful thinking.

    Objectively we have barely scratched the surface as far as these 2 problems are concerned with regard to the GCM.
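
    A toy illustration of the initial-condition side of this (Lorenz-63, nothing to do with a real GCM): trajectories whose initial states differ by round-off separate to the size of the attractor within a short time, so individual ensemble members say nothing about “the” trajectory, only, at best, about a distribution, and whether that distribution is invariant is exactly the ergodicity question above.

      # Two Lorenz-63 trajectories started 1e-9 apart diverge to attractor-size separation.
      import numpy as np

      def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def integrate(state, dt=0.01, steps=3000):
          """Fixed-step RK4 integration of the Lorenz-63 system."""
          s = np.array(state, dtype=float)
          for _ in range(steps):
              k1 = lorenz63(s)
              k2 = lorenz63(s + 0.5 * dt * k1)
              k3 = lorenz63(s + 0.5 * dt * k2)
              k4 = lorenz63(s + dt * k3)
              s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
          return s

      a = integrate([1.0, 1.0, 1.0])
      b = integrate([1.0, 1.0, 1.0 + 1e-9])
      print(np.linalg.norm(a - b))   # order of the attractor size, not order 1e-9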

    • Peter Davies

      Tomas you have a strong technical background in physics and math and your posts are appreciated if sometimes not wholly understood by me and quite a few others on Judith’s blog.

      I understand that GCMs certainly pose numerous problems when it comes to modelling, but I wonder if regional climate modelling may prove to be more tractable?

      I mention this in view of the fact that raw climate data is based on regionally located data stations, and it has been suggested that more localised models would be more serviceable for shorter-term weather forecasting up to, say, 3 months.

      The context of the discussion so far has been modelling for the purposes of (a) understanding the physical relationships abounding in the climate data we already have and (b) using it to formulate appropriate hypotheses as David Wojik has suggested upthread.

      Because of the two issues that you have raised, it seems hardly likely that any climate model should be used for prediction.

  40. There was an opinion piece by Rob Wilby – “Imagine a world without climate models” – in RPielke snr blog on January 17, 2011.

    Would we be better off without climate models? What if the money & resources had gone into better data collection instead? Would we have taken the data and built scenarios from the bottom-up rather than imposing a top-down approach?

    Nice quote: “Practitioners would not be bogged down with computationally demanding, pseudo probabilistic analysis, generated from ever larger ensembles of climate and impact model permutations. We would still know that the future regional climate is errr … highly uncertain …”

    ——–

    If we are to have models, here is a user request: an online open-source model where all the key assumptions and parameters are changeable. A sort of ‘be your own climate scientist’ toy. Wiki-climate. Then every group could alter the components to reflect their own take – for example, those with an ‘it’s the sun, stupid’ view could put their own propositions and feedbacks in, and reduce the CO2 influence. This would at least crystallise many of the arguments.

    Day dreaming….

    ———-

    Most online models I’ve looked at are more propaganda tools, such as the GISS EdGCM used to convince school students of AGW. Publicly funded brainwashing.

    • Joe's World

      This is where the excuse that the “climate is chaotic” comes in, as no one is actually studying the planet but merely studying data gathering.

      It keeps the excuse of uncertainty alive when scientists choose to be ignorant.

  41. Climate models = Wishful thinking

  42. As a non-technical, non-scientist denizen, I found Dr. Curry’s article very understandable. As far as GCMs go, do any of them consider the effect of deep-sea volcanism? I read somewhere that they keep discovering undersea volcanoes, and I have to think that they can affect the climate by heating the ocean in its deepest depths and also, by depositing lava on the sea floor and displacing sea water, possibly raising sea level. Also, it seems that there are periods in the Earth’s geological history where planetary volcanism increases, and certainly exploding volcanoes, especially at high latitudes, affect climate. Whether it’s because of orbital or axial variations, gravitational forces, or other phenomena, unless a GCM can account for undersea and surface volcanism, it will not be accurate. Thoughts, anyone?

    • Chuck L,

      I think Watts Up With That has a list of hundreds of things that may have some impact at some time on climate. Models can pretty much only consider the things that should have a significant impact. If the model accurately simulates their impact, then anomalies from the simulation should indicate impacts of the things not properly simulated. The models have to learn. So the models will never be accurate unless every possible influence on climate is understood completely. That is the point of good models, learning the impact of the things not considered in the model.

      The frustration many “skeptics” share is that over confidence in the models violates the purpose of the models to begin with. Original model runs should be compared to subsequent model revisions to compare the progress of the model development. Each model should also be consistent compared to the standards that applied to each generation.

      For example, comparing first generation models to first generation temperature averages. When the temperature average is adjusted, then the standards no longer apply. Since we are approaching HADCRU4 and what HADCRU1 was based on is lost, there is less confidence in any comparison of the original model projections since they cannot be accurately evaluated against the original standard with more recent data.

      Most modelers know this, but most climate scientists don’t seem to realize it is important to compare apples to apples when evaluating models :) So there is a debate.

    • Steve Milesworthy

      Putting it simply, because there is no evidence of increasing deep sea volcanism, and because the amount of energy emitted by volcanism is actually extremely tiny compared with the energy in the system, modellers would conclude that their impact would be too small to be concerned with relative to say Pinatubo type volcanoes or solar and greenhouse gas changes.

  43. Re NUSAP – see NUSAP.net

    NUSAP – The Management of Uncertainty and Quality in Quantitative Information
    Jerome R. Ravetz & Silvio O. Funtowicz,

    First, we must insist on risk calculation being expressed as distributions of estimates, and not as magic numbers that can be manipulated without regard to what they really mean. We must try to display more realistic estimates of risk to show a range of probabilities. To help do this we need tools for quantifying and ordering sources of uncertainty and for putting them in perspective.

    (W.D. Ruckelshaus)

    The name “NUSAP” is an acronym for the categories. The first is Numeral; this will usually be an ordinary number, but when appropriate it can be a more general quantity, such as the expression “a million” (which is not the same as the number lying between 999,999 and 1,000,001). Second comes Unit, which may be of the conventional sort, but which may also contain extra information, such as the date at which the unit is evaluated (most commonly with money). The middle category is Spread, which generalizes from the “random error” of experiments or the “variance” of statistics. Although Spread is usually conveyed by a number (either ±, % or “factor of”), it is not an ordinary quantity, for its own inexactness is not of the same sort as that of measurements.

    This brings us to the more qualitative side of the NUSAP expression. The next category is Assessment; this provides a place for a concise expression of the salient qualitative judgements about the information. In the case of statistical tests, this might be the significance level; in the case of numerical estimates for policy purposes, it might be the qualifier “optimistic” or “pessimistic”. In some experimental fields, information is given with two ± terms, of which the first is the spread, or random error, and the second is the “systematic error”, which must be estimated on the basis of the history of the measurement, and which corresponds to our Assessment. It might be thought that the “systematic error” must always be less than the “experimental error”, or else the stated “error bar” would be meaningless or misleading. But the “systematic error” can be well estimated only in retrospect, and then it can give surprises. . . .
    Finally there is P for Pedigree. It might be surprising to imagine numbers as having pedigrees, as if they were showdogs or racehorses. But where quality is crucial, a pedigree is essential. In our case, the pedigree does not show ancestry, but is an evaluative description of the mode of production (and where relevant, of anticipated use) of the information.
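
    A minimal sketch (hypothetical field names, not from NUSAP.net) of how the five qualifiers described above could travel with a number so that the quantitative part is never quoted stripped of its quality information:

      # Illustrative NUSAP-style record: Numeral, Unit, Spread, Assessment, Pedigree.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class NusapQuantity:
          numeral: float    # N: the quantitative estimate
          unit: str         # U: unit, possibly with a valuation date
          spread: str       # S: e.g. "+/- 0.5", "10%", "factor of 2"
          assessment: str   # A: qualitative judgement, e.g. "optimistic"
          pedigree: str     # P: evaluative description of how it was produced

          def __str__(self) -> str:
              return (f"{self.numeral} {self.unit} ({self.spread}); "
                      f"assessment: {self.assessment}; pedigree: {self.pedigree}")

      # Illustrative use with made-up content:
      print(NusapQuantity(2.0, "W/m^2 (2011 basis)", "factor of 2",
                          "model-dependent", "calculated from a calibrated model"))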

  44. I am no scientist. I would answer the question (are models fit for purpose?) in a sort of everyday ‘sense check’ manner.

    I would not like my country to fight a war using a complex war-simulator, and I would not put my shirt on a stock market predictor.

    All three are insanely complex subjects, and it would not surprise me to discover that climate is the most complex of all.

    • Michael Hart

      I agree. Everybody can get lost in the technical details at some point. But those who may never be able to judge the intricacies can still judge the outcomes. If they cannot judge a model to be wrong, they are certainly able to judge if it is useful. Ask for meaningful predictions, wait, and then judge.

      As to meaningful predictions, plenty of people may say the stockmarket may go up tomorrow. Plenty may say it will go down. Neither of those demonstrate any significant predictive skill. If they correctly told me what would happen every next month then I would sit up and take notice. But if they really knew, would they tell me?
      We’ve heard a lot in recent years about predictions for cataclysmic globally rising temperatures. I predict many people will go on predicting that. Mainly, as far as I can tell, because they want to. Not because they have demonstrated any predictive skill that I can see.

  45. Dr. Curry,
    The question is raised in your excellent post, “Why do climate scientists have confidence in their models”?
    I believe a very good answer is that they have confidence in them because many climate scientists have resisted and avoided serious critical reviews of them for decades. It is easy to have confidence in something if you reject criticism of it.

    • hunter,

      Climate models are like rock lyrics. They mean whatever you want them to mean. If you want to have confidence in them, it’s cool. That’s what they’re there for.

      Andrew

      • BA,
        That is a fun analogy, sort of like “Stairway to Heaven”, lol.
        I am just starting an 897-page attempt to make Philip K. Dick’s last big idea into a comprehensible novel. It is called “2-3-74”; the name comes from the date he received his great revelation. He wrestled with what became a fairly incoherent paranoiac vision that is strangely imbued with a deeper hope of redemption. Some of those who have plowed through Dick’s thousands of pages of notes and manuscripts have actually gotten religious messages from his writings. I see a lot of this AGW issue in a similar light to the PK Dick magnum opus: as one where AGW believers are looking at what is clearly not very significant data and projecting great meanings onto it.
        The book, so far, is very heavy sledding, not very different from his later published fiction that was so full of poorly applied concepts and paranoia.
        But since Philip K Dick nearly always proves to be worth the effort, I will slog on.

    • Norm Kalmanovitch

      Climate modellers have confidence in the models because they are honest scientists, typically with a math and physics background, who are absolutely certain that all the equations used in the climate models are valid.
      The problem is that they are completely unaware that the CO2 forcing parameter is a complete fabrication, as is the concept of downward forcing from CO2, which, if it actually exists, is trivial in comparison to the downward forcing from clouds and water vapour.
      They are also not aware that the sun heats the Earth’s surface and the surface heats the atmosphere, so it is a physical impossibility for the atmosphere, heated by the Earth’s surface, to turn around and heat the Earth’s surface to a warmer temperature than it was; this is the fallacy of “greenhouse warming” that underlies the AGW issue.
      The climate modelling community must go back and verify all the concepts behind the stated effects of CO2 increases according to proper principles of scientific justification. If this is done, climate modellers will still have confidence in climate models, but zero confidence in climate model projections based on CO2 increases.

    • Latimer Alder

      Because they are some of the very few remaining believers in the Climatologists Mantra

      ‘Trust me, I’m a Climate Scientist’

      and because their credibility and careers depend on having confidence in them.

      It is also true that the guys who write the models are the last people to use as mistake spotters becuase they won’t spit ony mostikes. That’s why proofreaders ain’t writers.

  46. There have been huge changes to CO2. There have been changes to the tilt and wobble and eccentricity of the earth. Sometimes the Northern Hemisphere has been closer to the sun in summer and sometimes further. There has been a 30 percent increase in sun intensity since earth was born. All the while, earth temperature remained relatively constant.

    Earth is blessed with a lot of ice and water. Ice and Water have a set point. When anything warms the earth, it melts ice, exposes more water to the atmosphere and it snows more and cools the earth. When anything cools the earth, the water surface is frozen and cut off from the atmosphere and less snow allows the sun to warm the earth and reduce the ice area.

    The stability of earth’s temperature comes from the set point of a lot of ice and water.
    Ewing and Donn and Wysmuller and Pope.

    Please read my Pope’s Climate Theory and send or tell me your thoughts.
    http://popesclimatetheory.com/

    • Earth temperature has been tightly bounded for ten thousand years.
      The best climate model for the next ten thousand years is this data for the past ten thousand years. Build a model that will reasonably recreate the last ten thousand years and you will have a model that will stay in bounds for the next ten thousand years.

      • Be careful. The past can be interpolated. The future is always an extrapolation.

    • When earth is warm it snows more, when earth is cold it snows less. This is not correctly done in the Climate Models. If it was correctly done the models would not all diverge outside the temperature limits that the data shows.

  47. Norm Kalmanovitch

    We seem to have forgotten what GCMs actually do. General circulation models start with initial conditions and, through a series of complex interactions between cells, predict the movements of the various components of the climate system a few days, a week or even a month into the future; but they are in no way capable of projecting any further than that, so models are completely incapable of projecting global temperature 50 or 100 years into the future, as has been done to create the global warming issue!
    What has been done is that the initial conditions and the concentration of CO2 have been projected into the future, and this projection of CO2 concentration has been wrong, so all model predictions based on it have also been wrong.
    CO2 is only increasing at 2ppmv/year so by 2050 the concentration will have only increased by 76ppmv to 468ppmv from our current 392ppmv.
    If this was input into climate models even with the 5.35ln(C/C0) CO2 forcing parameter this will result in just 0.948W/m^2 by 2050 and if we add another 100ppmv for the value of CO2 concentration in 2100 we get forcing of 5.35ln(568/392) = 1.984W/m^2.
    The climate model dating back to model number 4 in the Hansen et al. 1981 paper produces 2.78°C of warming from a doubling of CO2 from 300 to 600ppmv. This is 5.35ln(2) = 3.71W/m^2 times a climate sensitivity factor of 0.75°C per W/m^2, which gives 3.71 x 0.75 = 2.78°C!!
    If we use the same 0.75°C/W/m^2 climate sensitivity the climate models will produce warming of just 0.948 x 0.75 = 0.711°C by 2050 and just
    1.984 x 0.75 = 1.488°C by 2100 neither of which can be considered catastrophic enough to warrant the economically crippling action taken to create a carbon trading market which is some how supposed to reduce atmospheric CO2 concentration growth.
    What the climate models tell us is that the initial conditions input into the models are all wrong!
    On closer scrutiny the climate models tell us that the fabricated CO2 forcing parameter makes no physical sense!
    The climate models also tell us that the fabricated climate sensitivity factor of 0.75°C/W/m^2 which was based on an increase in CO2 from 280 to 380ppmv producing 0.6°C of global temperature change is completely faulty.
    5.35ln(380/280) = 1.6338W/m^2, and if this produced 0.6°C the climate sensitivity factor should be 0.6/1.6338 = 0.3672°C per W/m^2, which is only about half of the 0.75 used!
    This would reduce the global warming predictions for 2050 and 2100 to 0.355°C and 0.744°C respectively so the climate models also tell us that the climate sensitivity factor was doubled above what it should have been based on the measurements that it was based on.
    The IPCC 1990 FAR shows natural warming since the LIA of approximately 0.5°C/century, so the 0.6°C over the 100 years during which CO2 increased from 280ppmv to 380ppmv represents 0.5°C of natural warming, with only 0.1°C possibly attributable to the 100ppmv increase in atmospheric CO2 concentration.
    This simple fact demonstrates that both the CO2 forcing parameter and the climate sensitivity factor used in the climate models are six times greater than they should be because of the LIA explaining why Mann et al had to remove the LIA with the hockey stick.
    So ultimately the climate models tell us that the entire climate change issue is based on fabricated inputs into climate models.
    The real value of climate models is what they were designed for, and we should be appreciative of the improved extended forecasts that we now have courtesy of GCMs.
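
    Whatever one makes of the physical claims above, the arithmetic is easy to reproduce; a sketch that simply takes the 5.35 ln(C/C0) expression and the sensitivity factors as the comment states them, without endorsing either:

      # Reproduces only the arithmetic used in the comment above.
      import math

      def forcing(c_new: float, c_old: float) -> float:
          """Simplified CO2 forcing expression used in the comment, W/m^2."""
          return 5.35 * math.log(c_new / c_old)

      def warming(c_new: float, c_old: float, sensitivity: float) -> float:
          """Temperature change for a given sensitivity in deg C per W/m^2."""
          return forcing(c_new, c_old) * sensitivity

      print(forcing(468, 392))         # ~0.95 W/m^2, the 2050 case
      print(forcing(568, 392))         # ~1.98 W/m^2, the 2100 case
      print(warming(600, 300, 0.75))   # ~2.78 C for a doubling at 0.75 C/(W/m^2)
      print(0.6 / forcing(380, 280))   # ~0.37, the halved sensitivity the comment derives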

  48. No mention has been made of “data mining”; is it considered a useless process for climate exploration?

  49. Dr. Curry,

    I followed some of your links and it is now apparent that we have (at least one) whole government department whose primary mission statement is ‘To collect climate data, refine climate models, write reports, and make recommendations in support of the central axiom of Climate Science.’

    To wit: ‘The climate of the Earth is warming at an unprecedented rate due to the injection of CO2 into the atmosphere as a byproduct of our use of fossil fuels to supply our energy needs. The effects of this rapid increase in planetary temperature range from unpleasant to catastrophic and pose an existential threat not only to humanity specifically but to the biosphere in general and can only be ameliorated by giving governments worldwide absolute control over all aspects of energy production and consumption.’

    The reason that climate models ‘don’t do well’ is that they ALL start with the central axiom and are adjusted iteratively to ‘prove it’.

  50. I thought Norm Kalmanovitch’s comments above were refreshing, until he wrote that the Sun heats the Earth’s surface and the surface then heats the atmosphere. That is simply not true.

    Over a year ago (on January 12, 2011), I wrote the following recommendations in a comment on this blog:

    “Too much wrong has been done. First, bring on the political revolution, to STOP “implementing climate policy”. Those who would implement know not what they do, get them stopped. Second, cast all of those defending the IPCC consensus, or even peer-review, out of their comfortable “authoritative” positions, because theirs is the rottenness in climate science. Third, set up a new, independent authority, of hard scientists OUTSIDE of climate science, not in it, with the sole task of winnowing out the chaff that now inundates climate science, and identifying once and for all the true nuggets that should be built upon …”

    I still say, climate science needs to be given over to non-climate scientists, to be redone. It has failed, completely, judging from Judith Curry’s BERAC talk.

  51. Jeroen van der Sluijs sent me this paper from the NUSAP group, which is very relevant

    Application of a checklist for quality assistance in environmental modelling to an energy model

    http://www.marine.csiro.au/~ris009/pubfiles/checklist.pdf

    Large, complex energy models present considerable challenges to develop and test. Uncertainty assessments of such models provide only partial guidance on the quality of the results. We have developed a model quality assistance checklist to aid in this purpose. The model checklist provides diagnostic output in the form of a set of pitfalls for the model application. The checklist is applied here to an energy model for the problem of assessing energy use and greenhouse gas emissions. Use of the checklist suggests that results on this issue are contingent on a number of assumptions that are highly value-laden. When these assumptions are held fixed, the model is deemed capable of producing moderately robust results of relevance to climate policy over the longer term. Checklist responses also indicate that a number of details critical to policy choices or outcomes on this issue are not captured in the model, and model results should therefore be supplemented with alternative analyses.

    The paper coins an amusing acronym: WYGIWYE (what you get is what you expect)

    • What you get is ‘Confirmation Bias’. CF, or Climate F-word.
      ============================

    • Large climate models have their problems concerning reliability, and so have large energy models and integrated assessment models. It’s, however, difficult to find anything deeper than superficial similarities between these issues.

      The paper of Risbey et al. states correctly that the system models they consider have essential input from assumptions that are explicitly highly value-laden. There are differences in the values of climate scientists as well, and those may affect their work, but the actual input is not value-laden in the same sense.

      I had a lengthy discussion with Fred and others in the Lindzen’s Seminar thread on mechanisms through which the expectations of climate modelers may influence the resulting models, but while I consider this an important issue, the climate models are built upon well known physical equations and conservation laws. Many of the system analytical models have nothing comparable as their starting point. There are certainly some balance equations which have the nature of a conservation law, but there’s nothing like the Navier-Stokes equation to form even a rough basis for the dynamics.

      I consider models very useful in both fields and I do also believe that there are a lot of caveats in both fields. The problems are, however, different in details and therefore very difficult to compare.

      I also have difficulties in understanding how the checklists given in the paper could help very much in evaluating the models. If checklists are applied with any rigor, they are likely to tell us that any large model has so many problems that its value is totally questionable. At the same time the models can be highly useful tools for their developers and others who know them thoroughly. Modifying model parameters or adding new, equally well justified mechanisms to the models may, however, in most cases change the results totally. Some models may be so restrictive that they give more robust results, but then that’s only because plausible modifications are excluded rather arbitrarily.

      • I had a lengthy discussion with Fred and others in the Lindzen’s Seminar thread on mechanisms through which the expectations of climate modelers may influence the resulting models, but while I consider this an important issue, the climate models are built upon well known physical equations and conservation laws.

        Climate models are built on models of well-known physical equations and conservation laws. Almost all mathematical models of wicked-problem-grade physical phenomena and processes, for both the natural environment and complex engineered equipment, are based on models of fundamental laws. The complete, un-altered forms of the fundamental laws lead to equation systems that are intractable on a practical basis. Assumptions and associated simplifications are always necessary. The number of physical components in the Earth’s climate system, the enormous spatial and temporal scales, the very large number of phenomena and processes that must be accounted for in the modeling, and the limited spatial and temporal resolution that can be attained with available computer resources lead to the process-based modeling used in all GCMs. Note, however, that when it comes to in-depth analyses of response functions when presented with petabytes of data, bigger and faster is not always better.

        Even some computational physics problems involve some aspects of assumptions and approximations. Few pure computational physics problems are of interest. Computational physics problems can be identified by the fact that the equations being solved will contain only parameters that relate to the properties of the material of interest. And these will generally be few in number. Process models, such as GCMs, will contain parameters that are related to previous states attained by the material.

        GCMs, for example, use the steady-state hydrostatic balance for the model of the momentum component in the vertical direction: the simplest form of a momentum component that can be stated, and one that is far removed from the Navier-Stokes equations.

        GCMs are additionally far removed from the un-altered fundamental equations due to the lack of spatial, and in some cases temporal, resolution that can be handled. All phenomena and processes of critical importance are handled through sub-grid modeling and these will contain large numbers of parameters. Some of these fall into the known unknowns category. See Climate System Modeling and Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models.

        The lack of the complete momentum equation in the vertical direction means that motions in the vertical direction, and associated important phenomena and processes, are handled by special-purpose modeling and parameterizations. The somewhat incomplete momentum balance equations also have significant ramifications for turbulence in the atmosphere: the GCMs model two-dimensional turbulence at the mesoscale.
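
        For readers unfamiliar with the hydrostatic approximation mentioned above, a sketch of the standard textbook relation (not code from any GCM): the vertical momentum equation is reduced to dp/dz = -rho*g, which for an assumed isothermal column gives the usual exponential pressure profile.

          # Hydrostatic balance dp/dz = -rho*g for an assumed isothermal column,
          # integrated with a simple Euler step and compared with the analytic profile.
          import math

          G = 9.81       # m/s^2
          R_AIR = 287.0  # J/(kg K), dry air
          T = 250.0      # K, assumed isothermal temperature
          P0 = 101325.0  # Pa, surface pressure

          def hydrostatic_pressure(z_top=10_000.0, dz=10.0):
              p, z = P0, 0.0
              while z < z_top:
                  p += -p * G / (R_AIR * T) * dz   # ideal gas: rho = p/(R*T)
                  z += dz
              return p

          print(hydrostatic_pressure())                        # ~25.8 kPa at 10 km
          print(P0 * math.exp(-10_000.0 * G / (R_AIR * T)))    # ~25.8 kPa analytic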

      • “The complete un-altered forms of the fundamental laws lead to equations systems that are intractable on a practical basis. Assumptions and associated simplifications are always necessary.”

        Sounds like computable general equilibrium (CGE) modeling in economics, typically used for policy analysis. The fully general equation system, nonparametric and typically underdetermined, is swapped for something parametric with a unique and computable solution. Is that about the same?

      • David Young

        Dan, you give an excellent summary. Simple problems like the structural analysis of simple structures can be done from first principles. But as soon as you get into complex structures with fasteners, etc., subgrid models creep in. Most interesting systems are impossible to solve in their “equations of motion” form. I hadn’t realized that the vertical direction in climate models was handled this way, though. Perhaps, given the role of convection with its widely varying length scales, it’s inevitable that subgrid models must be used here.

    • Michael Hart

      “The paper coins an amusing acronym: WYGIWYE (what you get is what you expect)”
      Ah, but YCAGWYW (“You can’t always get what you want”) :)

  52. tetragrammaton

    Actually, one of the important things we learn from climate models is how to build your own climate model. Here is an easily-followed step-by-step do-it-yourself recipe, largely an extension of the splendid material to be found in the “Harry Readme” portion of Climategate I.

    1. Learn a little Fortran. (But don’t bother to get overly familiar with Excel).
    2. Learn pi to at least three significant figures (necessary for the next step).
    3. Divide the earth’s atmosphere into many equally-sized tiny cells, making sure that the total number of cells exceeds the capacity of anything that will fit on a personal computer.
    4. Find some meteorological data to stick into the cells. (See Harry about this)
    5. Insert an algorithm to blow data out of each cell into the next, preferably from the Southwest. Now you have the beginnings of a Global Circulation Model (GCM).
    6. Make up some more algorithms to heat and cool cells (look in Wikipedia under INSOLATION and RADIATION). Spend some time doing this, so that you are able to follow Dr. Curry’s advice to create the “perception that complexity = scientific credibility; sheer complexity and impenetrability, so not easily challenged by critics”.
    7. Find a supercomputer, or get a U.S. government agency to buy you one. (Best to do this before November 6, 2012). Test-run your new GCM, adding fudge factors so the output for future temperatures wiggles up and down a little.
    8. Now inject special new algorithm, so that global temperature output wiggles with carbon dioxide concentration (see Wikipedia or take a trip to Hawaii to get this). Set this fudge factor to at least 3 for each doubling of carbon dioxide.
    9. Back-test your GCM, making output more-or-less match 1970-98 historical patterns. Add nudge factors as necessary. To get anywhere close to 1940-70 reality, you may have to invent some new historical data on sulfur and soot — call this the smudge factor.
    10. Do some more computer runs out to 2100, and mail the outputs to IPCC.

    Now wasn’t that easy?

    • /sarc on. Unfortunately, tetragrammaton, your recipe is too close to the truth to be funny. I am sure that you have described just how climate models are actually produced. /sarc off

  53. A bullet from jcurry’s post: “Insufficient exploration of model & simulation uncertainty” To which I respond, thank you thank you and thank you. This is the component of the climate debate that often causes me to exceed my normal background level of nuts.

  54. Meanwhile, while GC models continue to point upward, observed global temps have recently fallen back to 1980 levels. How low can they go?
    http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_February_2012.png

  55. I think one of the first things you will need for climate models is a total heat measurement that is correct. I have no idea how you could get this, maybe measuring the ‘earth heat’ reflecting off a new moon… yeah, no idea.

    Until then, the only thing that can validate a model is “we know we are causing the earth to drastically heat up with CO2” – which is what the models are trying to find out in the first place. You can put all the tests you like in that; you still have a program that uses its hypothesis to prove its hypothesis.

  56. If the majority of climate scientists were not such a ‘big-headed’ bunch, one quick look at this formula (available from numerous websites)
    http://www.vukcevic.talktalk.net/NFC7a.htm
    at any time in the last 8 years would have given them the most important parameter, the solar activity, right; but not everything is lost, it provides a further 50 years of extrapolation.
    In their arrogance solar scientists are not too far behind; they invented the term ’cyclomaniac’ to attach to the author of the above, but by now it’s a different story: they have discarded their ‘precious’ models and jumped on the bandwagon, spreading their newly acquired wisdom.

  57. http://science.energy.gov/~/media/ber/berac/pdf/20120216Meeting/Weatherwax_Feb_2012.pdf
    Chart 20 of 27. Correlation is not causation. This is the kind of junk science that turns many of us engineers and other people against Consensus Climate Science. We have accounts, and data, indicating periods of severe weather many times in earth history before man was around in large numbers. Warmer water and more exposed ocean do cause more precipitation. There is nothing in that which proves manmade CO2 is at fault. This is extreme junk science.
    I did get to Chart 20 before I found a major problem.
    Chart 21 of 27. More junk science. This assumes we need to do carbon mitigation.
    “Improved representation of tropical systems will help global climate predictions and thereby inform energy and carbon mitigation and adaptation strategies for the US”

    • Warmer water and more exposed ocean do cause more precipitation.
      This is elementary school physics; it is really preschool physics.
      It has nothing to do with CO2, and no CO2 effect here has been supported by real data.

  58. JohnofEnfield

    My experience in this area is limited to a Physics degree and some design and modelling work on electronic circuits – so all fairly tangential to the subject.

    I have a number of concerns: –

    1. The earth’s climate system is a chaotic one. It is fundamentally impossible to forecast more than a few weeks ahead by modelling the chaotic non-linear sub-systems which make it up. The concept of projecting forward accurately for a hundred days, let alone 100 weeks, months or years, is risible.

    2. The models are built on the things we think we currently know about the atmosphere and some of its interactions with the sun’s radiation. Our knowledge does NOT include additional sub-systems that come into play as some of the systems tend to extreme values. Nor do they include sub-systems where the science is very sparse e.g. the interaction between our atmosphere, the oceans, the lithosphere and our solar system. They are self-fulfilling.

    3. The measurements we have of the things we claim to know about – such as average global temperatures – are extremely limited in accuracy, geographic spread and time (is the last 30 years good enough to understand a system which has been developing for billions of years and which has existed in approximately its current state for many millions of years?).

  59. http://science.energy.gov/~/media/ber/berac/pdf/20120216Meeting/Weatherwax_Feb_2012.pdf
    Chart 21 of 27. More junk science. This assumes we need to do carbon mitigation.
    “Improved representation of tropical systems will help global climate predictions and thereby inform energy and carbon mitigation and adaptation strategies for the US”

    http://science.energy.gov/~/media/ber/berac/pdf/20120216Meeting/Geernaert_CESD_Feb2012.pdf
    Chart 17 of 24. More junk science. Sea Level is not rising. Sea Level cannot rise. It is snowing too much.
    They use the words: “realistic dynamics and physics” BS!
    It ain’t happening and it will not happen.
    Chart 18 of 24. The permafrost has not had a problem during the past ten thousand years. It is not going to have a problem in the next ten thousand years. More junk science.
    Chart 22 of 24. They say “destabilize the permafrost!”
    This has never happened, so why do they think it will happen now?

  60. steve fitzpatrick

    Judith,
    I am a little surprised that there is no discussion of the ocean portion of climate models. Argo is providing ever accumulating ocean heat content and heat distribution patterns, both of which ought to be reasonably represented by any accurate CGCM. My understanding is that most of the coupled models predict much higher rates of ocean heat accumulation than Argo data supports. Which suggests to me that there are likely significant errors in at least the representation of stratification and mixing in the oceans, and perhaps of the atmosphere as well.

  61. Mike Edwards

    Dr Curry,

    These two points caught my eye:

    – V&V is overkill for a research tool; inhibits agile software development
    – A tension exists between spending time and resources on V&V, versus improving the model.

    The first point appears to misunderstand the aims of agile software development and some of the techniques used in its execution. While “agile” aims to develop software quickly, it also aims to ensure that the programs are correct – hence practices such as “test driven development” where testcases are written before the program itself is created – with the aim of ensuring that the program produces correct results from the word go. It is typical of modern software projects for there to be large test suites that are executed each time a new version of a program is built – to ensure that any changes or extensions have not introduced errors.

    Frankly, without such test suites, I cannot imagine how it is possible for any non-trivial software project to proceed without significant bugs.
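
    To make that concrete, a minimal sketch of a test written against a hypothetical model component (a column energy budget), of the kind that would be written first and then run on every build:

      # Illustrative test-first checks for a hypothetical column energy budget.
      def column_energy_tendency(absorbed_solar: float, olr: float) -> float:
          """Hypothetical unit under test: net heating of a column, W/m^2."""
          return absorbed_solar - olr

      def test_balance_gives_zero_tendency():
          assert column_energy_tendency(240.0, 240.0) == 0.0

      def test_excess_absorption_warms():
          assert column_energy_tendency(241.0, 240.0) > 0.0

      def test_symmetric_perturbations_cancel():
          assert column_energy_tendency(241.0, 240.0) == -column_energy_tendency(239.0, 240.0)

      test_balance_gives_zero_tendency()
      test_excess_absorption_warms()
      test_symmetric_perturbations_cancel()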

    For the second point, about spending time on V&V, there is the old adage: “program in haste, debug at leisure”. If you don’t spend time on V&V, then how on earth do you know that your program is doing anything remotely like the complex system it is modelling? Forget the physics for a moment – what on earth is the program itself doing, and what bugs lurk?

    Modelling the climate system is a tough tough task – without good V&V it appears impossible. Even 30 years ago at CERN, we built massive Monte Carlo simulations of the data we expected to get from our detectors in order to test out the analysis software built to handle the data from the real detectors – long before the real detector hardware was complete and running. We could not afford analysis software that did not do its job properly.

    Perhaps more professional software engineering expertise is needed in climate modelling projects…

    • agreed that these two points are absolutely not convincing

    • “Perhaps more professional software engineering expertise is needed in climate modelling projects…”

      But if the professional software engineer’s got the wrong answer………..

      • Latimer Alder

        At least with a professionally structured and designed system, when errors occur you have a fighting chance of finding them and putting them right with reasonable time and effort. With the approach that Steve Milesworthy describes, you need to spend a great deal more. It will be harder and more costly in time and resources.

        Aeroplane analogy again. Modern planes are in constant contact with ground engineering control. If something’s going wrong with one, the appropriate spares, people and other resources can be waiting at the next convenient maintenance stop. Result: quicker fixes, more reliable planes, better timekeeping and lower operating costs. Everybody wins.

        Do the planes cost more to have this capability? Sure… somebody has first to write the code that does it. But the savings over the 20+ year life of a plane are massive, so that extra investment of upfront time and money pays dividends in the long run.

        Same with software. If you are going to keep it for a while, it is much much better to get it right first time, even if you seem to be doing a bit of extra work upfront.

        And, purely on a philosophical level, if a model can’t pass a simple V&V test, why should anybody pay any credence to its results? I might as well make them up in my head and publish a paper about my musings. There’s no essential difference. It is an old cliche, but still true, that a computer just produces garbage more quickly. That doesn’t make it any truer.

    • Totally agree. The debate is always around whether the models are telling the whole story, but looking at the methodology there is very little chance they are even correct in the bug-free sense. That they tend to agree with each other then suggests they are tuned to agree with each other. I would be interested in the calibration process for the models; that certainly should be documented somewhere.

      There is hope with things like BEST that we can one day get to clean data and code.

      • Lord Beaverbrook

        A good example to look at is the UKMO decadal forecast:
        http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc

        A graph is presented showing three periods of decadal forecast starting in 1985, 1995 and 2005, with confidence levels indicated by red plumes around the forecast white line. Taking away the effects of Mt Pinatubo and the 1998 super El Nino, the first two decadal forecasts would be truly acceptable, whereas the period starting in 2005, whilst still remaining within the 90% confidence level, is visibly not as accurate.
        What isn’t obviously stated is that the first two decadal periods are hindcasts from 2005, starting with the initial conditions from those years, and the period starting in 2005 is the true predictive forecast.

        The thick blue line on the graph is the latest prediction from Sept 2011 indicating a 0.2C rise by end of 2014.
        To a layman such as myself this begs the questions:
        1.) Is this a reasonable suggested rise in temperature over the period without some unpredicted event happening?
        2.) Is the new prediction aimed at reproducing the original model output or at matching the observed data?
        3.) If the model is tuned to give such a good response in the previous two decadal periods, has something significantly changed at the turn of the century that hasn’t been accounted for?

      • To Mr. Lord Beaverbrook:
        The GMT models do not grasp the real underlying warming mechanism, which has nothing to do with CO2. They unscrupulously project hindcast warming into the future, without having detected the cyclic tipping point of 2001 and the plateau that has set in since then, which will last for the coming 30 years.
        Better to consult Nicola Scafetta (2011): “Harmonic model vs. GCMs of the IPCC”, also displayed and commented on at WUWT, on the right-hand side of their home page.
        Here you are better off, and you can compare the MetOffice (taxpayers’ funds wasted on nonsense GCMs) to a real HARMONIC forecast.
        JS

      • Steve Milesworthy

        Lord Beaverbrooke, the decadal forecasting system is obviously a research activity so it is well accepted that the results produce more questions than answers. For example, is the performance of the forecast bad just by chance (because some of the forcings changed unexpectedly more than for the hindcasts) or is it bad because somehow the previous hindcasts were inadvertently tuned. If the system is improved to deal with any short-comings of the forecast, will it be a real improvement or will it be yet another post-hoc tuning. These are all scientific judgements which is where the focus on V&V is not helpful.

        Again, though, V&V *is* done. Test-driven development is used for regression testing, but it is also a key aspect particularly if you are trying to make sure that the forecasts, or the climatology, that your new/updated model produces are at least as good as the previous model’s. This latter aspect of the test-driven development, though, is less automatic than for many applications because it requires a degree of scientific judgment to assess the outputs.

        The data you have is, of course, the weather and climate observations (which may not be perfect). Interesting to note the above example of using the output of a Monte Carlo analysis to test the detector was built right. Surely that is to put models above observations! Of course having used Monte Carlo simulations in research I don’t believe that.

      • To Steve:
        The harmonic, astronomical component is missing from all GCMs.
        They are therefore wrong from the very start, and it is not a matter of putting, as in a baker’s dough, more or less of this or that ingredient into them and doing a more intense hand/computer stirring.
        Better to see Nicola Scafetta, “HARMONIC model vs. IPCC GCMs”, which is superior to all the in-principle faulty circulation GCMs.
        I endorse in this paper the yellow, HARMONIC-component-only variant of Scafetta’s model, both in hindcast and forecast.
        Let’s throw all circulation GCMs straight into the waste basket; there is no sense in fiddling and tuning.
        JS

      • “Interesting to note the above example of using the output of a Monte Carlo analysis to test the detector was built right. Surely that is to put models above observations!”

        I think you misread that, Steve. He said the Monte Carlo analysis was used to test if the analysis software was built right, not the detector itself. As I understand it, they were trying to make sure that once they got the real data from the detector, the analysis software applied to that data wouldn’t introduce spurious errors.

      • Lord Beaverbrook

        ‘the decadal forecasting system is obviously a research activity so it is well accepted that the results produce more questions than answers’

        Beautiful answer for an academic. But could you consider that the results of the forecasts affect political policy that increases the amount of tax individuals have to pay? There is a direct relationship: you may not realise it, but overall household expenses are increasing through the cost of energy subsidies for low-carbon technology. If it’s necessary then fine, prove it and put my mind at rest, because basically all that I have seen is theory.

      • Steve Milesworthy

        kcom, technically you are correct, but remember you are using your analysis software to analyse data from a detector, and it is validated against other analysis software. None of these three components (Monte Carlo, detector or analysis software) has primacy. It is the self-consistency (or lack thereof) of all three components that gives you your level of confidence.

        Lord Beaverbrooke, the output of the decadal forecast you quote has little if any direct link to my level of taxation. If anything it has probably reduced taxation because its central prediction was not the real outcome. I doubt that levels of taxation were in the thoughts of the people who generated the forecast.

      • Lord Beaverbrook

        Steve Milesworthy

        ‘Lord Beaverbrooke, the output of the decadal forecast you quote has little if any direct link to my level of taxation. If anything it has probably reduced taxation because its central prediction was not the real outcome. I doubt that levels of taxation were in the thoughts of the people who generated the forecast.’

        I agree with your final sentence, but when the forecast of the world-leading UKMO adds weight to the political process that introduced the 2008 Climate Change Act into law, and thus introduced a decarbonisation schedule with massive subsidies for wind and solar technology from tax revenue, then my taxation has certainly been affected. Now that the forecast period is coming to a close and the observational data differ so significantly from the central prediction, it beggars belief that the silence from the scientific community and the public media is so deafening.
        Perhaps this is where we will see a split form between the American and European scientific bodies, as there is a vast difference in political influence between the two.
        What will be the scientific advice to the UK government in 2014? My guess is that uncertainties will have a far greater role in the conclusions than they have previously – an extremely costly lesson to be learned.

    • For the ones that are open sourced, the software engineers can have a look. I have looked at the publicly available code for ModelE. Not work a software engineer would be proud of – it looks like backyard Fortran: lots of code, hard to see much logic. I didn’t see any obvious evidence of robustness.

      • Forgot to mention the hard-coded parameters.

      • Steve Milesworthy

        I agree that ModelE is absolutely horrible. Much more so than another model with which I’m more familiar (which is OK at a subroutine level and manageably horrible at an architecture level). The ECMWF model is getting to the level of being absolutely horrible, with 2 million lines of code and developers needing to instigate major projects to implement minor improvements. But they still continue to be leaders in NWP indicating that horribleness of code can be countered with a dedicated workforce incentivised by fat tax-free salaries.

  62. How can a general circulation model ever be right when we really don’t fully understand all of the input parameters and what they do? Why can’t someone build a model confined to a smaller cell, say the size of a city and its surrounds, get proper measurements of all the inputs in the cell and at its boundaries, and see if it can predict climate and weather?

    Even if we don’t have the computer power on Earth to properly model clouds globally with a small enough cell size, there is some chance of getting it to work for a much smaller area at appropriately high resolution.

  63. Peter Major

    The case for modelling weakens as the number of measurable key indicators increases, particularly in respect of public policy. Plankton reduction leading to a lower rate of replenishment of fish stocks, and increasing methane emissions from melting permafrost, are key indicators that can now be measured where once they were merely predicted. Measuring key indicators over several years will give a better indication of trends. If more predicted key indicators become capable of measurement, this will have a significant impact on public policy. If they do not become capable of measurement, then the requirement for models in shaping policy will diminish, though their value as a method of scientific study will increase.

  64. Well, you can find every opinion under the sun when climate science is the subject.

    Discussion of “A comparison of local and aggregated climate model outputs with observed data” A black eye for the Hydrological Sciences Journal.

    Citation: Huard, D. (2011) A black eye for the Hydrological Sciences Journal. Discussion of “A comparison of local and aggregated climate model outputs with observed data” by G.G. Anagnostopoulos et al. (2010, Hydrol. Sci. J. 55 (7), 1094–1110). Hydrol. Sci. J. 56(7), 1330–1333.

    Abstract
    A paper published by Anagnostopoulos et al. in volume 55 of the Hydrological Sciences Journal (HSJ) concludes that climate models are poor based on temporal correlation between observations and individual simulations. This interpretation hinges on a common misconception, that climate models predict natural climate variability. This discussion underlines fundamental differences between hydrological and climatological models, and hopes to clear misunderstandings regarding the proper use of climate simulations.

    Hydrological Sciences Journal Oct 2011, Vol. 56 Issue 7, p1330-1333.

    My em and bold

    • Peter Major

      Beware of Greeks bearing gifts (cheap). If the predictions had come from models I would have to concede the point but they came from observed measurements of effects on thermohaline circulation. Merely speculation that later proved correct. What does that prove? You make your best guess, you take measurements, sometimes you are right and sometimes you are wrong. Often you learn more by being wrong.

      It is just as Judith says about modelling, models tell you the things to observe and measure; your best guesses. When the model proves correct you can identify key indicators. The more key indicators to measure, the better handle you can get on the problem and the better you can make the model.

      • Peter Major

        I left out the point that it is the key indicators that become the drivers of public policy whilst the improved model remains a scientific tool.

    • David Young

      Dan, I think this is exactly the point Curry and Lindzen are making. In Lindzen’s words, to say that you need CO2 you have to exclude everything else, including natural variability, which has been pretty large in the past. Of course, that was not all “internal variability” but was mostly driven by forcings and feedbacks. This artificial distinction Fred tries to make between internal variability and forced variation is not useful. Natural change must by definition include the ice ages, the Medieval Warm Period, and the Little Ice Age, when even advocates acknowledge human influence was minimal. It must include solar variability, etc.

      • The original paper, the Discussion comment, and the authors’ Reply present an interesting discussion on natural variability and climate models and chaos and predictability. Necessarily brief, but interesting.
        There seem to be differences of opinion :-)

        The original paper.

        A comparison of local and aggregated climate model outputs with observed data
        G. G. Anagnostopoulos, D. Koutsoyiannis, A. Christofides, A. Efstratiadis & N. Mamassis

        Abstract
        We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.

        Citation: Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110.

        I haven’t checked for a free version online, maybe there is one available.

        The complete Discussion and Reply are available
        Huard’s Discussion.

        DISCUSSION
        Discussion of “A comparison of local and aggregated climate model outputs with observed data”∗ A black eye for the Hydrological Sciences Journal
        David Huard

        Abstract
        A paper published by Anagnostopoulos et al. in volume 55 of the Hydrological Sciences Journal (HSJ) concludes that climate models are poor based on temporal correlation between observations and individual simulations. This interpretation hinges on a common misconception, that climate models predict natural climate variability. This discussion underlines fundamental differences between hydrological and climatological models, and hopes to clear misunderstandings regarding the proper use of climate simulations.

        Citation: Huard, D. (2011) A black eye for the Hydrological Sciences Journal. Discussion of “A comparison of local and aggregated climate model outputs with observed data” by G.G. Anagnostopoulos et al. (2010, Hydrol. Sci. J. 55 (7), 1094–1110). Hydrol. Sci. J. 56(7), 1330–1333.

        Authors’ reply.

        REPLY
        Scientific dialogue on climate: is it giving black eyes or opening closed eyes?
        Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard

        Citation: Koutsoyiannis, D., Christofides, A., Efstratiadis, A., Anagnostopoulos, G.G. and Mamassis, N. (2011) Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal ” by D. Huard, Hydrol. Sci. J. 56(7), 1334–1339.

    • I will describe a very dangerous situation with models, but this is a very common occurrence:

      In my field of work (not climate studies), I have seen models set up and tests performed on them by matching them with certain observational (physically measured) data sets. Many times these tests “succeed” with model results matching the observed data, but in order to have a successful match, the modellers either fudged the input data or the mathematical processes in the model. Unfortunately the successful tests generate a false confidence in the models and the modellers become emboldened to use their “successfully” tested models to make other predictions (or projections). This is where it becomes dangerous.

      I have seen models succeed under a certain amount of testing, and then become proven failures after more comprehensive testing is performed.

      • Peter Major

        Very true. Which is precisely why key indicators are so important.

      • “Many times these tests “succeed” with model results matching the observed data, but in order to have a successful match, the modellers either fudged the input data or the mathematical processes in the model”

        It is VERY easy to ‘fudge the input data or the mathematical processes’ even if you are trying your damnedest not to cheat. The problem is that you are always working at the edge of the envelope, always assuming a whole bunch of things and ignoring ‘unknown unknowns’.

    • Peter Major

      One criticism of the paper might be that there were no measurements taken north or south of the 60-degree parallels. However, the conclusion of the paper was encouraging.

      “Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty. Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregard the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics.”

  65. I appreciate Judy’s efforts to clarify the situation for us. It seems to me that many of the issues surrounding the use or misuse of models boil down to problems with how we think about these kinds of simulations rather than with the simulations themselves: we as modelers “fall in love” with our simulations and forget that they ARE simulations – not data. Of course we’ve actually known this for decades, but we tend to forget.

    Here is another reminder, from one of the most unlikely of all sources [someone who was enthusiastic about modeling btw] dating back over twenty years.

    We model, but we also fall in love with these models, and it is the falling in love with the model that then turns it into an agenda where it was not a free form projection of a flow of facts towards a conclusion, but then it becomes instead an agenda, a synthetic creode, high walls down which you expect to see a process poured and confined.
    ~Terence McKenna, History Ends in Green, © Mystic Fire, 1990

    W^3

  66. All I can say is that the climate model debacle makes economic modelling look like precision NC engineering. What an utter joke. What utter arrogance. Nerds on PCP.

  67. Once again, Kudos, Judith. A masterful analysis. Kudos, kudos.

  68. A surface energy balance thought experiment: Gradually add GHGs to a planet which originally had no GHGs in its atmosphere. At first, when the surface receives more DLR, surface balance can only be restored by increasing surface radiation, which requires surface warming. At a certain point, the surface will become so warm that an unstable lapse rate develops. From that time on, some of the energy from the increasing DLR will leave the surface by increased convection and some by increased surface radiation following surface warming. As the atmosphere becomes optically thicker, a greater percentage of the energy from increasing DLR will leave the surface by convection.

    From the perspective of surface energy balance, future climate change depends on what fraction of the energy from increasing DLR leaves the surface by convection rather than by radiation following surface warming. Our current models are too coarse to properly represent convection, and they may not accurately calculate evaporation from the oceans or evapotranspiration on land. Why do we expect current models to properly partition the departing energy from increasing DLR into convective and warming/radiative pathways?

    How Much More Rain Will Global Warming Bring? Frank J. Wentz, Science, 317, 233 (2007) DOI: 10.1126/science.1140746
    http://www.ssmi.com/papers/wentz_science_2007.pdf

    Lorenz, D. J., E. T. DeWeaver, and D. J. Vimont (2010), Evaporation change and global warming: The role of net radiation and relative humidity, J. Geophys. Res., 115, D20118, doi:10.1029/2010JD013949.
    http://tenaya.ucsd.edu/~tdas/data/review_iitkgp/2010JD013949.pdf
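
    A minimal back-of-envelope sketch of that partitioning question (my own illustration, not taken from any GCM): treat the radiative response as the linearised 4*sigma*T**3 term and simply assume a convective fraction beta, to see how strongly the implied surface warming depends on a number the models must get right.

    program dlr_partition
    ! Illustrative only: how an assumed convective fraction beta changes
    ! the surface warming needed to balance 1 W/m2 of extra downwelling
    ! longwave radiation (DLR), using the linearised radiative response
    ! 4*sigma*T**3. Beta is an assumption here, not a GCM output.
    implicit none
    real :: sigma, t_surf, d_dlr, beta, d_t
    integer :: i
    sigma  = 5.67e-8          ! Stefan-Boltzmann constant (W/m2/K4)
    t_surf = 288.0            ! nominal mean surface temperature (K)
    d_dlr  = 1.0              ! extra DLR to be balanced (W/m2)
    do i = 0, 4
      beta = 0.2*i            ! fraction leaving the surface by convection
      d_t  = (1.0 - beta)*d_dlr/(4.0*sigma*t_surf**3)
      print *, 'beta =', beta, '  implied surface warming (K) =', d_t
    end do
    end program dlr_partition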

  69. So what do people think this ModelE module does?

    MODULE NUDGE_COM
    !@sum NUDGE_COM contains all the nudging related variables

    • I’m looking at the tests in that code (they start with test_*) – not too surprising. Good thing extra rigor didn’t inhibit them from doing Agile software development.
      http://map.nasa.gov/ModelE_html/html_code/src/

    • Its name is pretty self-explanatory – it’s used to add “nudging” capabilities to ModelE. Nudging is a technique used to force a more realistic or observed state of the atmosphere in a dynamic model. For instance, if I were going to go back and study the dynamics of Hurricane Katrina using a model like the WRF, I’d use nudging and re-analysis data to ensure that the large-scale environment in my model matches as closely as possible that observed during Katrina.

      You can’t nudge when integrating long climate-runs because there isn’t anything available to nudge towards.
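
      For what it is worth, the core of the technique is just Newtonian relaxation of the model state toward a reference (analysis) state. A minimal sketch, with an invented relaxation timescale and reference wind rather than ModelE's actual scheme:

      program nudge_demo
      ! Minimal illustration of nudging (Newtonian relaxation): the model
      ! tendency is supplemented by a term that pulls the state toward a
      ! reference value with timescale tau. All numbers are invented.
      implicit none
      real :: u, u_ref, tau, dt
      integer :: n
      u     = 0.0              ! model wind component (m/s)
      u_ref = 10.0             ! "observed"/analysis wind to nudge towards
      tau   = 6.0*3600.0       ! relaxation timescale (6 h, in seconds)
      dt    = 1800.0           ! model time step (30 min)
      do n = 1, 48             ! one day of half-hour steps
        ! the model dynamics would be added here; only the nudging term is shown
        u = u + dt*(u_ref - u)/tau
      end do
      print *, 'u after one day of nudging:', u
      end program nudge_demo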

  70. Latimer Alder

  71. If a GCM counts years as 365.0 days starting from 1951, how many days out will it be in 150 years? How might that affect the model output? (A rough back-of-envelope is sketched at the end of this comment.)

    If temperature change is computed in adjusting forcing runs, what is an “adjustment”?
    “!@var Tchg Total temperature change in adjusted forcing runs
    REAL*8, ALLOCATABLE, DIMENSION(:,:,:) :: Tchg”

    If surface fluxes include sensible heat, evaporation, thermal radiation, and momentum drag, where do conduction and convection count?

    ” SUBROUTINE SURFCE
    !@sum SURFCE calculates the surface fluxes which include
    !@+ sensible heat, evaporation, thermal radiation, and momentum
    !@+ drag. It also calculates instantaneous surface temperature,
    !@+ surface specific humidity, and surface wind components.
    !@auth Nobody will claim responsibilty”
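
    On the first question, a rough back-of-envelope (assuming a mean tropical year of about 365.2422 days):

    program calendar_drift
    ! If a model counts every year as exactly 365.0 days, how far does its
    ! calendar drift from the real (tropical) year over 150 years?
    implicit none
    real :: tropical_year, model_year, drift
    tropical_year = 365.2422        ! mean tropical year (days)
    model_year    = 365.0           ! model calendar year (days)
    drift = (tropical_year - model_year)*150.0
    print *, 'drift after 150 years (days):', drift   ! roughly 36 days
    end program calendar_drift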

    • Steve Milesworthy

      Good questions. If you are running a model as an energy balance model to estimate the impact of increasing CO2 then the precision of the calendar is not that important. Some models use a 360 day calendar because it makes the arithmetic easier for calculating monthly and seasonal averages.

      If you are wanting to do seasonal and decadal forecasts then getting the calendar right is important so you need to input more accurate forcing data (eg. solar input) and you want to more easily compare outputs with observations.

      Conduction is tiny compared with the other processes listed. Convection would typically be done within a different subroutine of the model from the surface and boundary layer calculations – so you’d need to look elsewhere in the code.

      • There can only be one temperature driving heat flow and it knows not the proportions which will flow by which mechanism to a cooler place.

        How can a model be consistent with the second law of thermodynamics if it doesn’t have all the different modes of heat transfer in one place, as part of one temperature-based heat flow calculation?

      • You have part of the model that deals with the land. It puts the heat and moisture into the atmosphere. The atmospheric part of the model has to deal with convection and other processes that move these around. Also, sensible heat flux starts as conduction in the lowest millimeter, but after that it is transported by eddy motions. The net effect counts as sensible heat flux.

  72. On August 21, 2017, there will be a total solar eclipse that traverses the USA from the Pacific to the Atlantic.
    Along its path, aircraft could monitor the steady-state light flux from the ground and the atmosphere. Ground stations could monitor the temperature and radiative fluxes before, during and after the loss of sunlight.
    It would be so damned easy to measure experimentally whether clouds increase influx or efflux. All sorts of steady-state transitions, which are vitally important to models, could be measured. Here there would be actual experimental data on what happens to temperature when the influx is altered.
    What do the climate scientists do? Nothing.

    • Peter Major

      I was told that there was a flight planned. Has it been cancelled?

      • Quick, garbage, the vehicle needs fueling.
        ============

      • ‘A flight’.
        Don’t know.
        But this is the obvious way to calculate radiative fluxes and temperature.

  73. Gareth Williams

    I have never tried to model the climate, but I have modelled two-phase fluid flow in an engineering context, and have spent most of my life in science and scientific programming.

    There are a couple of things about the IPCC’s use of GCMs that have always struck me as strange.

    A) There are two basically different ways of using computer models in science. The first is to make accurate predictions. The second is to explore our understanding of phenomena. The first is possible where the underlying science is very well understood, and there are many examples today, e.g. orbital mechanics, or the analysis of engineering structures. You can spend hundreds of millions on a space probe and be sure it really will slingshot around several planets and get where it is supposed to go (barring a rocket motor failure, or someone confusing newtons with foot-pounds!). You can design a bridge or a skyscraper and be pretty sure it will support its design load. But when you hear scientists are modelling earthquakes or the formation of galaxies, they are not making predictions you can rely upon. Rather, they are modelling the consequences of a particular theory in order to compare it with observation or experiment. They are testing the theory. I suspect the former use of computer models is quite well understood by the public and politicians, but the latter much less so.

    When it comes to GCMs, it is clear that we are nowhere near the point of being able to *predict* the future climate. We don’t know which are the most important phenomena to model. We don’t know which feedbacks are important or how big they are. These are exploratory models that need to be compared with real climate data in order to resolve those questions. Yet they are being presented as predictions.

    B) The IPCC treats GCM results as if they were experimental measurements. It thinks that if several different programs produce similar results, that provides confirmation of the theory. It even presents the average and spread of many models as providing an estimate and error bars for the actual future climate. This is voodoo. If all the models are based on wrong assumptions they will all be wrong. There is no reason to believe their errors will “average out”. In science, models must be compared with experiment or observation, not with other models. (And it is not as if this is a controlled Monte Carlo over certain parameters. That would be valid – but would be just another model result. We are talking about many models with different basic assumptions about climate mechanisms.)
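
    A toy illustration of that point (all numbers invented): if every model in an ensemble shares a common bias, the multi-model mean inherits the bias and the ensemble spread says nothing about it.

    program shared_bias
    ! Toy ensemble: every "model" = truth + a common bias + its own quirk.
    ! The ensemble mean converges on truth + bias, not truth, and the
    ! spread only measures the quirks. All numbers are invented.
    implicit none
    integer, parameter :: nmod = 20
    real :: truth, bias, quirk(nmod), ens(nmod), ens_mean, ens_spread
    truth = 1.0                     ! "true" warming (arbitrary units)
    bias  = 0.5                     ! error shared by every model
    call random_number(quirk)       ! model-to-model differences
    ens = truth + bias + (quirk - 0.5)
    ens_mean   = sum(ens)/nmod
    ens_spread = sqrt(sum((ens - ens_mean)**2)/(nmod - 1))
    print *, 'truth          :', truth
    print *, 'ensemble mean  :', ens_mean     ! close to truth + bias
    print *, 'ensemble spread:', ens_spread   ! says nothing about the bias
    end program shared_bias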

    C) Retrofitting the climate is not very impressive with models of this kind. There must be hundreds of parameters that can plausibly be varied. (When I was in astronomy we used to talk about “breaking the 10^-5 barrier”, on the basis that, when a complicated analysis of megabytes of data is involved, any decent physicist would be able to get an expected result at an apparent 10^-5 significance from random numbers, just by plausibly varying the analysis. The same must be true of GCMs.) I am not talking about deliberate fraud here – just about picking values for parameters that are not known with certainty, and for which some plausible post-hoc justification can be made. But the fact that the grant money depends upon everyone believing there is dangerous AGW is not irrelevant. Prediction, not retrodiction, is the only true test.

    D) Far too little attention is paid to historical and paleoclimate data, which strongly indicates that there is no imminent “tipping point” and that at least a couple of degrees of warming is not a problem, or even beneficial, for human and other life. Let’s not even get started on nonsense like corals dying out.

    E) Until about 1998 the IPCC case was (at least from my perspective) weak but not obviously false. But none of the GCMs predicted the ongoing hiatus in global warming. For a couple of years, it was just “weather”. Now, I am afraid, it is climate.

    “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”
    –Richard Feynman

    Here my analysis is very rough and ready. But sometimes that’s not a bad thing. Before 1998 the GCMs attributed essentially all late 20th C warming to CO2 and its feedbacks. But if there is AGW, it must have been countered since 1998 by one or more natural effects of cumulatively comparable size. And since natural phenomena are usually cyclic rather than appearing out of nowhere, those phenomena were probably in a warming phase before 1998. That implies about half the late 20th C warming was natural, and that requires revising downwards the CO2 feedbacks, and the estimates for future warming, roughly by a factor of two.

    I know that is not an exact calculation. The point is the temperature data since 1998 should have led to a major rethink of GCMs. But it looks to me like all we have seen is some fiddling around with parameters in order to stay as close as possible to the previous predictions. Sooner or later someone will have to break with the pack and admit that AGW is far less significant than had been thought – possibly to the point that it is not a cause for concern at all. (In fact I am guessing this is an open secret – the Indians and Chinese, who would be far worse affected by warming than us – seem to have no interest in doing anything to stop it).

    Having insisted for so long that “the science is settled”, that is going to be a huge embarrassment, not just for climate science, but for the whole public perception of science, and for bodies such as the Royal Society that have hitched their wagon to climate alarmism.

    Anyway Judith thanks for the article, in which I think you are agreeing with at least some of the above. Or am I misunderstanding?

    • Gareth, you hit the nail on the head. Thanks for your comment.

    • what an excellent comment. +1

    • Yes, skeptics, please look at Paleoclimate for a clue. The last time CO2 was over 500 ppm was the Cretaceous, and it was over 1000 ppm in the Jurassic. Was it warmer then? Were sea levels higher? This is the question you need to answer for yourself. It is another strand of evidence independent of GCMs and simple energy balance models.

      • That evidence is very weak. Such accuracy is physically implausible. On the other hand, we have thousands of direct measurements, which are discarded by the consensus. Not very scientific.

      • Were there dinosaurs then? Oops, so much for your strand of evidence.

      • I don’t think there is anyone who denies that the Mesozoic period was warmer and had that much more CO2, but maybe you can find an expert to support an opposite contention. The weak evidence is that it was otherwise from this expectation.

      • The other strand I forgot to mention is obviously the observed surface temperature rise. BEST has 0.3 degrees per decade for the last three decades for land areas.

      • Jim, I only dispute the constant pre-industrial CO2 level (~300 ppm), not that the Mesozoic was warmer and had more CO2.

      • Captain Kangaroo

        Loony tunes space cadets – land/sea warming was a mere 0.1 degree C/decade after incompletely removing oceanic influences.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ensosubtractedfromtemperaturetrend.gif

        Most warming was the result of albedo variability.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ensosubtractedfromtemperaturetrend.gif

        Was the mid-Holocene warmer, and was CO2 higher at that time? Do we know what albedo was doing, or anything else, with much accuracy? As for the Jurassic, for Christ’s sake – this sort of evidence has been described by the NAS as being like investigating by feel in a dark room. GCMs are pointless blunders – and simple energy balance in CERES and the earlier record supports albedo change. Also in Project Earthshine – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ensosubtractedfromtemperaturetrend.gif ISCCP-FD in black, Project Earthshine in blue and CERES in red.

        This is what comes from giving clueless space cadets the idea that they understand anything at all if they are vaguely mouthing the words of the hive masters – in a stolen expression.

        Kindest regards
        Captain Kangaroo

      • CK, or should that be CH, I was supporting the idea of a skeptic looking at paleoclimate. Warmer Mesozoic climates are a useful lesson because of their CO2 values, so it is a good idea to understand them more, and you don’t need to know anything about GCMs to do that. Also, yes, albedo is important. Arctic amplification is mostly due to albedo reduction, and probably dwarfs anything clouds are doing. I don’t know what fascination people like Lindzen and Spencer have with the tropics when the Arctic is where all the action is. Black carbon and forest fires in a drying climate: there could be another positive feedback there. Think about all the factors, not just the negative ones.

      • Captain Kangaroo

        Jim,

        As I have explained elsewhere – it is no longer about science if it ever was. I am afraid that formal hostilities have been declared in the climate wars and I have enlisted in the army of freedom, free markets, economic development, science, democracy and the rule of law. You are obviously of the party of sandals and lentils one day a week.

        You typically invent and infer more than you say. Yes, you say albedo is important – in the Arctic. But the evidence clearly shows dominant effects in the marine stratocumulus areas of the tropical and sub-tropical Pacific especially. ‘Probably dwarfs’ is simply nonsensical rhetoric. Most climate ‘action’ happens in the bloody tropics for any number of reasons. Black carbon from forest fires? Did you just invent that one? Why the hell would there be less rainfall in a warmer world? Evaporation equals precipitation within a couple of days (The Earth radiation balance as driver of the global hydrological cycle, Wild and Liepert 2010).

        There is no science from you guys – just stupid narratives that you invent on the spot.

        Arctic amplification has nothing to do with anything – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=chylek09.gif – we are talking about climate cycles that are obvious in the Arctic and North America – http://bobtisdale.blogspot.com.au/2009/06/contiguous-us-gistemp-linear-trends.html. Wait for the graph to shift – real cute.

        I know more than enough about computers and modelling – I commenced my programming career with punch cards. I programmed 4th order numerical solutions of differential equations with cubic spline interpolation in an XT clone with 64kb of memory and no hard drive. I have used numerous models over the years – including those that use the Navier-Stokes partial differential equations. These equations describe the movement of fluid in 3 dimensions. Solutions to the equations diverge. If there are 2 starting points close together – the solutions move further apart over time. This influences the shape of the statistical probability distribution.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=sensitivedependence.gif
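
        A minimal illustration of that sensitive dependence, using the Lorenz 1963 system as a stand-in for the full Navier-Stokes equations (this is not climate-model code; the parameters are the standard textbook values): two runs started a millionth apart end up in completely different places.

        program lorenz_divergence
        ! Two integrations of the Lorenz (1963) equations that start
        ! 1.0e-6 apart in x. A crude forward-Euler step is enough to show
        ! the trajectories separating to the full size of the attractor.
        implicit none
        double precision :: a(3), b(3), dt
        integer :: n
        dt = 0.002d0
        a = (/ 1.0d0, 1.0d0, 1.0d0 /)
        b = a
        b(1) = b(1) + 1.0d-6            ! tiny initial perturbation
        do n = 1, 25000                 ! integrate to t = 50
          call step(a, dt)
          call step(b, dt)
        end do
        print *, 'run 1     :', a
        print *, 'run 2     :', b
        print *, 'separation:', sqrt(sum((a - b)**2))
        contains
          subroutine step(s, h)
            ! one forward-Euler step of the Lorenz equations
            double precision, intent(inout) :: s(3)
            double precision, intent(in)    :: h
            double precision :: d(3)
            d(1) = 10.0d0*(s(2) - s(1))
            d(2) = s(1)*(28.0d0 - s(3)) - s(2)
            d(3) = s(1)*s(2) - (8.0d0/3.0d0)*s(3)
            s = s + h*d
          end subroutine step
        end program lorenz_divergence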

        Believe me Jim – I don’t give a rat’s phallus what you think. It is just space cadet stuff. It is just providing ammunition for my side.

        Best regards
        Robert I Ellison

      • CK, no, science is the issue. What happens when the land warms faster than the ocean? Do we expect droughts and hot summers or what? What happens if cloud feedback is positive, not negative, as is also within the error bars? What happens if aerosol albedo doesn’t continue to climb as fast as the greenhouse effect? What happens if Arctic sea ice continues to disappear? What about ocean acidification, or methane outgassing from Arctic areas, forest fires and soot? It is all about science at this point. Science has to be used to evaluate the consequences of all this, and not just paint a rosy picture.

    • Peter Davies

      Good overview and argument for scepticism in respect of AGW.

  74. Judith Curry

    Another interesting thread with a lot of “red meat” for your denizens.

    Your presentation, “What can we learn from climate models?” starts with the ”DOE Climate Change Research Program Strategic Plan”

    This plan states that priorities have moved beyond determining if climate is changing and if there is a human cause (in other words “the science is settled”) to how quickly it will change in the future.

    Policies concerning GHG emission reductions should now be our priority.

    This is based on the stated premise that the science of climate modeling has matured and that future models will be even better at simulating our planet’s climate.

    But is this really so?

    In your presentation, you conclude that ”there is misguided confidence and ‘comfort’ with the current GCMs” and that ” GCMs may not be the most useful option to support many decision-making applications related to climate change”

    You go on to list several “rising discomforts” with the models. You suggest that robust verification and validation (V&V) of GCMs is required, and then conclude that they could be useful in:

    • Attribution of past climate variability and change
    • Simulation of plausible future states
    • Support for emissions reduction policies

    [I’d personally take issue with the last bullet until some of the basic problems can be resolved.]

    You also conclude that they are not suitable for:

    • Prediction and attribution of extreme weather events
    • Projections of future regional climate variation for use in model-based decision support systems
    • Projections of future risks of black swans & dragon kings

    [This seems like a no-brainer to me.]

    You then go into some very interesting recent solar studies (Shapiro, Lean), a discussion of severe weather events plus the use of GCMs for regional forecasting, which I won’t go into here.

    Tamsin Edwards has a new blogsite putting the issue of model validity into question:
    http://allmodelsarewrong.com/all-blog-names-are-wrong/#comment-372

    She got a strong response from none other than Peter Gleick, who objected to her questioning the validity of the climate models, and generally more positive comments from several of your denizens here.

    To me the most significant constraint on the validity of any computer model is the old “GIGO” problem.

    As Richard Lindzen put it in his recent presentation to the House of Commons:

    ” The notion that models are our only tool, even, if it were true, depends on models being objective and not arbitrarily adjusted (unfortunately unwarranted assumptions).”

    But there are also other limitations, as Lindzen states:

    ” Models cannot be tested by comparing models with models. Attribution cannot be based on the ability or lack thereof of faulty models to simulate a small portion of the record. Models are simply not basic physics.”

    Lindzen states that physical observations do not support the model outputs as they are today:

    However, models are hardly our only tool, though they are sometimes useful. Models can show why they get the results they get. The reasons involve physical processes that can be independently assessed by both observations and basic theory. This has, in fact, been done, and the results suggest that all models are exaggerating warming.

    Willis Eschenbach, who knows a bit about computer modeling, has put it more directly, citing several specific examples: the idea that “temperature change equals forcing change times climate sensitivity”, as programmed into the models, sometimes just doesn’t work out that way.
    http://wattsupwiththat.com/2011/10/20/the-alligator-model/
    http://wattsupwiththat.com/2011/09/30/my-oh-miocene/
    http://pielkeclimatesci.wordpress.com/2012/02/14/on-self-regulation-of-the-climate-system-an-excellent-new-analysis-by-willis-eschenbach/

    Eschenbach points out that models are notoriously unable to simulate the behavior of clouds (which even IPCC concedes), which makes them essentially worthless for predicting future climate change.

    On this site, he stated the basic problem even more clearly:

    Instead of being modeled as anything approaching the complexity of a natural planetary-scale heat engine which contains a host of self-organized emergent phenomena, it [our planet’s climate] is modeled by a simple one-line equation … and you folks truly believe that? Truly? You really think we can reduce the climate to a one-line equation?
    So no, I don’t agree, but not with the things you ask.
    I disagree at a fundamental level with the basic assumptions. I disagree with the bogus math used to claim that the only variable in Q is T. I disagree that climate is like a ball on a pool table, free to respond linearly to forcings. I disagree that there is anything remotely resembling “climate sensitivity”, it is a component of an incorrect and oversimplified understanding of what’s going on.

    You have hinted at this in your presentation, but it appears to me that, before we use model outputs to set policies, we need to get the basic model input assumptions right.

    And it also appears to me that we are a long way from there today.

    Max

    • Max, I like Willis’ post on the black box analysis:
      http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/

      I get the impression that the answer resulting from a GCM is akin to sending a toddler on a tricycle down a bobsled run. It may not look pretty or precise or refined, but they will get to the bottom somehow. Model parameters and nudge factors ensure the model stays between the lines, no matter if they are correct or not. It almost doesn’t matter what happens in the model, as the result will be near the same.

      I think the models are far more complex than a one-line equation (ModelE as example), since there are squillions of parameters attempting to model land, water, ice, vegetation, various heat transfer modes, clouds, etc,etc etc. The problem is that in between the inputs and the answer, we have no idea what went on along the ride (hence the toddler on a bobsled run analogy). The importance of V&V cannot be overstated.

      ModelE ATMDYN.f
      ” SUBROUTINE DYNAM
      !@sum DYNAM Integrate dynamic terms
      !@auth Original development team
      […]
      #ifdef NUDGE_ON
      CALL NUDGE_PREP
      #endif
      […]
      C**** INITIAL FORWARD STEP, QX = Q + .667*DT*F(Q)
      […]
      #ifdef NUDGE_ON
      CALL NUDGE (UX,VX,DTFS)
      #endif
      […]
      C**** INITIAL BACKWARD STEP IS ODD, QT = Q + DT*F(QX)
      […]
      #ifdef NUDGE_ON
      CALL NUDGE (UT,VT,DT)
      #endif
      […]
      C**** ODD LEAP FROG STEP, QT = QT + 2*DT*F(Q)
      […]
      #ifdef NUDGE_ON
      CALL NUDGE (UT,VT,DTLF)
      #endif
      […]
      C**** EVEN LEAP FROG STEP, Q = Q + 2*DT*F(QT)
      […]
      #ifdef NUDGE_ON
      CALL NUDGE (U,V,DTLF)
      #endif
      C**** ACCUMULATE MASS FLUXES FOR TRACERS and Q
      !$OMP PARALLEL DO PRIVATE (L)
      […]
      END SUBROUTINE DYNAM”

      Cloud modelling may not be robust, so maybe that’s where the nudges are needed:
      MODULE CLOUDS
      […]
      ! find unpermittable data…..
      !
      do ilev=1,nlev
      if (cc(ilev) .lt. 0.) then
      print*, ' error = cloud fraction less than zero'
      JERR=1 ; return; ! stop
      end if
      if (cc(ilev) .gt. 1.) then
      print*,' error = cloud fraction greater than 1'
      JERR=1 ; return; ! stop
      end if
      if (conv(ilev) .lt. 0.) then
      print*,' error = convective cloud fraction less than zero'
      JERR=1 ; return; ! stop
      end if
      if (conv(ilev) .gt. 1.) then
      print*,' error = convective cloud fraction greater than 1'
      JERR=1 ; return; ! stop
      end if

      if (dtau_s(ilev) .lt. 0.) then
      print*,' error = stratiform cloud opt. depth less than zero'
      JERR=1 ; return; ! stop
      end if
      if (dem_s(ilev) .lt. 0.) then
      print*,' error = stratiform cloud emissivity less than zero'
      JERR=1 ; return; ! stop
      end if
      if (dem_s(ilev) .gt. 1.) then
      print*,' error = stratiform cloud emissivity greater than 1'
      JERR=1 ; return; ! stop
      end if

      if (dtau_c(ilev) .lt. 0.) then
      print*, ' error = convective cloud opt. depth less than zero'
      JERR=1 ; return; ! stop
      end if
      if (dem_c(ilev) .lt. 0.) then
      print*,' error = convective cloud emissivity less than zero'
      JERR=1 ; return; ! stop
      end if
      if (dem_c(ilev) .gt. 1.) then
      print*,' error = convective cloud emissivity greater than 1'
      JERR=1 ; return; ! stop
      end if
      end do
      […]

    • Max, Willis, on the white roof project (http://wattsupwiththat.com/2011/10/20/the-alligator-model/), says the alligator model predicts white roofs will add to warming. It may be counterintuitive, but that doesn’t make it right either. I don’t see any evidence of urban cool islands caused by loads of dark roofs and bitumen roads.

      If some genius said we needed to paint all the roofs white, I’d be happier to do that than pay a carbon tax.

  75. Having discussed what we can learn from climate models opens up two other questions:

    – How can we take best advantage of the limited knowledge that we can get from climate models and climate science more generally?

    – Where do we have the most serious gaps in the knowledge about climate change that wise decision making requires, and what can we do to fill in or get around those gaps?

    The explicit or implied answer of many skeptics is that it’s best to just sit down and wait until an essentially better level of knowledge has miraculously been reached, but that’s logical only if they trust that they actually know the answer (that AGW worries will turn out to be greatly exaggerated). If we don’t have that trust in knowing what was declared to be unknown, the conclusion is not justified. Then we must really use the knowledge we already have, even when it’s lacking.

    Right now the two real world policy alternatives appear to be:

    1. Wait and see.

    2. Do whatever can be agreed upon at the international level, setting targets based on political reachability (meaning that setting the target is politically possible, not that the target is necessarily realistic in the real world).

    The basic attitudes concerning the risk are, correspondingly:
    – the risks are so small that there’s no problem in postponing all action;
    – the risks are so large that we will do too little whatever we try.
    Neither side wants to seriously study the possibility that the risks lie between these extremes, and the proponents of the second view have the additional problem that their approach makes it difficult to prioritize alternative ways of acting and to avoid solutions that are far from cost-efficient.

    • After writing the above message I read lengthy quotes about the Precautionary Principle from Cass Sunstein’s article “The Paralyzing Principle” in messages from Dixie Pooh

      http://judithcurry.com/2012/03/02/week-in-review-3212/#comment-180985

      I agree fully with the quoted text. As stated there, “The weak versions of the Precautionary Principle are unobjectionable and important”, but the strong versions are extremely problematic and difficult to apply in practice.

      My latter paragraphs are related to this problem: the principle is valid, but it’s misused so badly that the outcome may end up strongly on the negative side. The only way to better results is to try to quantify the costs and benefits of proposed actions. It’s no more right to dismiss the whole principle than to use it without a careful analysis that’s made as quantitative as possible. Making it as quantitative as possible means that information obtained from imperfect tools like GCMs must also be used when it adds to what we know by other means.

      It’s certainly not possible to reach accurate estimates, and different people reach different results (just compare Stern and Nordhaus, without going to more extreme views). Even so, that’s the only way of improving the support for decisions.

    • Captain Kangaroo

      Pekka,

      I am very much more militant today, having taken a commission in the climate wars. There seems to be very little political compromise possible. Somehow there is a feeling that if a scientific case, however simplistic, is made, this translates into the policy arena. It all seems bound up with limits to growth, negative economic growth and lentils every Thursday. These ideas tie neatly into the idea of progressively constraining carbon emissions.

      You talk about cost-effective solutions – but it is the wrong idea. Energy costs feed directly into productivity and economic development and there is a great need for maximum economic development. The right idea is lowest cost energy. Great swathes of the human race lack the freedom and wealth we take for granted – they lack clean water and sanitation, health care and education. They yet lack these essential advantages that come from the great society built on the principles of freedom, free markets, democracy and the rule of law. If we forget that enlightenment heritage and the rights of people to economic and social security – as ideologues of the left and right have now and in the recent past – then we risk it all.

      So you see that there is no possible rapprochement between the sides. At this stage the warming seems minimal, the risks minor and the defeat of the warministas the primary objective as they seem the greater risk by far. As the world stubbornly refuses to warm over the next decade – we would be foolish not to take a strategic advantage.

      Robert I Ellison

      • Rob,

        You appear to understand cost-efficiency as a much more limited concept than what I have in mind when I use that term. To me all indirect influences on well-being are part of the balance. In practice some limits are always set for the analysis, but they should be recognized and taken into account in decision making.

        I know that I discuss an unrealizable ideal, not one where scientists tell what to do, but one where they really act as honest brokers and one where the interface between science and policy is also analyzed to build bridges over gaps in the abilities of each participant to understand others. Such an ideal cannot be reached but I do still dream that it would be possible to get a bit closer to that.

      • Captain Kangaroo

        Pekka,

        We run into the idea introduced by Hayek into the theories of economics. You imagine an ideal state – run perhaps by a Laplacian demon. In reality there is no ideal knowledge in government, and this can only be realised by a large number of players in the market who have a better idea of their own wellbeing than even some well-intentioned retired physics lecturer. It is a philosophy of government and economics that is a minimum for the classically liberal. This is the old cold war of left and right, for which there is no middle ground.

        We have the information that is needed – at most the warming is 0.1 degree C/decade. We have made the policy decision that there is little reason to make market interventions. We would support multiple-objective strategies such as those of Bjorn Lomborg or the Breakthrough Institute – but there is little appetite for this on one side, and the well of compromise has been so poisoned for the other that any mention of carbon reduction is reflexively resisted.

        So what is a classically liberal cowboy to do but take the least worst path when faced with multiple constraints.

        Cheers

      • Rob,

        I give the free market much credit. All central planners, and even less intrusive bureaucrats, are much more prone to serious errors on very many issues. But I do also believe that the tragedy of the commons and other failures of the market are real, and some of them really serious. In some cases they are so serious that some other decision-making mechanisms must be given the say.

        When there are strong enough arguments to tell that the market mechanism is virtually certain to fail in solving some particular issue, it may be wiser to choose something else through the political mechanisms of democracy.

      • Captain Kangaroo

        Pekka,

        You have overbalanced into climate alarmism with the tragedy of the commons. I have already explained that we consider the risks of the proposed market interventions to be greater than the risks of the minor climate change seen – even if it could be attributed to carbon dioxide.

        I keep going back to the evidence for albedo change in ERBS, ISCCP-FD and CERES – which is compelling.

        http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Wong2006figure7.gif

        ‘Figure 7 gives a direct interannual comparison of these new ocean heat storage data from 1993 to 2003 against those from the 12-month running mean ERBE/ERBS Nonscanner WFOV Edition3_Rev1 and CERES/Terra Scanner ES4 Edition2_Rev1 net flux anomalies. The CERES/Terra Scanner results are global and the ERBE/ERBS Nonscanner WFOV results cover 60°N to 60°S (or 87% of the earth’s surface).’
        http://journals.ametsoc.org/doi/full/10.1175/JCLI3838.1

        Regardless we have made a policy decision – and really it is time to move on.

        Cheers

  76. Markus Fitzhenry

    WHAT CAN WE LEARN FROM CLIMATE MODELS?
    HAVEN’T WE ALREADY LEARNT ENOUGH?

    THEORIES
    “We should believe the most what we can doubt the least, after trying to doubt, using experience and consistent reasoning based on empirically supported (so far) best beliefs.”

    When Newton invented non-relativistic mechanics we believed it because it works consistently to describe Kepler’s observational laws, which in turn describe a lot of quotidian experience on a lesser scale than planetary orbits.

    After a few hundred years, classical theory ran into consistency trouble trying to describe blackbody radiation, the spectra of atoms, even the photoelectric effect (which won Einstein the Nobel), once Maxwell had added the equations of electricity and magnetism and the understanding of light as an electromagnetic wave.

    Many other physicists successively invent modifications that make space-time far more complex and interesting in both fields of relativity and mechanics, theories far more complex than Newton could ever have dreamed.

    These changes didn’t happen because scientists were active in a political endeavour; they came about because the data could not be explained by classical Euclidean theory. Classical flat-space mechanics was outdated by Maxwell when he wrote the correct equations of electrodynamics. The unified theory checked out empirically to phenomenal accuracy, and yet, when applied to cases where it had to work, certain of its predictions failed. If Maxwell’s equations and Newton’s laws were both true, the Universe itself should have existed for less than a second before collapsing in a massive heat death, as stable atoms based on any sort of orbital model were impossible.

    This scientific process continues today. Astronomers observe and correct previous observations. Careful studies of neutrinos lead to anomalies, places where theory isn’t consistent with observation. Precise measurements of the rates at which the Universe is expanding at very large length scales don’t quite add up to what the simplest theories predict and we expect. Quantum theory and general relativity are fundamentally inconsistent, but nobody quite knows how to make a theory that contains both in the appropriate limits.

    In bridging the scientific gap between what is known and what is unknown people try to come up with better theories, ones that explain everything that is well-explained with the old theories but embrace new observations that are discovered and explain them as well. Ideally, the new theories predict new phenomena entirely and a careful search reveals it as the theory predicts.

    And all along there are new experiments discovering high temperature superconductors, inventing lasers and masers, determining the properties of neutrinos (so elusive they are almost impossible to measure at all, yet a rather huge fraction of what is going on in the Universe). Some experiments yield results that are verified; others yield results that are not reproducible and probably incorrect. A Higgs particle that seems to appear for a moment as a promising bump in an experimental curve and then fades away again, too elusive to be pinned down. Hypotheses of dark matter and dark energy abound that might explain some of the unusual cosmological observations. The “dark” bit basically means that they don’t interact at all with the electromagnetic field, making them nearly impossible to see, so far.

    PLAUSIBILITY
    Physicists therefore usually know better than to believe the very stuff that they peddle. A good physicist will tell you up front “Everything I’m going to tell you is basically wrong, but it works, and works amazingly well, right up to where it doesn’t work and we have to find a better, broader explanation.” A good physicist will also tell you not to believe anything they tell you just because they are saying it but because it makes sense to you , corresponds at least roughly with your own everyday experience, and because when checked in the labs and by doing computations that can be compared to observations, they seem to work. Physicists also know that they should be believed with a grain of salt, because further experiments and observations will eventually prove it all wrong.

    Still, that is not to say you shouldn’t believe some things strongly – gravity, for example. It is not perfect, nor consistent with quantum theory at the smallest and largest scales, but it works so well in between that it is almost certainly at least approximately true – true enough in the right milieu.

    Yet if an argument arose that gravitation is otherwise than it is explained – a perfect mutual force attracting two bodies – and that deviations are responsible for observed anomalies in galactic rotation, then you would have to listen. What if it could explain unexplained phenomena whilst still reproducing previous evidence, and it explains the anomaly – would you think it could be true? What if it predicted something new and startling, something that was then observed? It might even be promoted to more probably true than Newton’s Law of Gravitation, no matter how entrenched Newton’s theory is.

    In the end, it isn’t aesthetics, it isn’t theoretical consistency, it isn’t empirical support alone; a strong argument should be a blend of all three – something that relies heavily on common sense and human judgement, and not so much on a formal rule that tells us the truth.

    You should be a sceptic in the climate debate. All the particle accelerators in the known Universe would fail miserably in their engineering if relativity weren’t at least approximately correct. Once you believe in relative causality, it makes some very profound statements about things that might well make all known physics inconsistent if they were found to be untrue.

    But if a neutrino in a European particle accelerator seems to be moving faster than it should ever be able to move then the whole theory might have to be revised.

    THE EVIDENCE
    The AGW debaters have never really fronted up for a debate. They never put their case at a podium facing the opposition, and they are not capable of defending their answers against a knowledgeable and sceptical questioner.
    Good science defends itself against its critics and demonstrates consistency, both theoretically and with experiment. Good science admits its limits, and never claims to be “settled”, even as it leads to defensible practice and engineering where it seems to work, for now.
    The AGW debate is predicated from the beginning on one thing: that we know what the global average temperature has been like for the past N years, where N is nearly anything you like – whatever the timescale, in millennium, megaannum or gigaannum frames.

    In truth, we have moderately accurate thermal records that aren’t really global, but are only a small sample of the globe’s surface, exclusive of the bulk of the ocean, for less than one century. We have accurate records of the Earth’s surface temperatures on a truly global basis for less than forty years. We have accurate records that include, for the first time, a glimpse of the thermal profile of the ocean at depth for less than a decade. Even the satellite data are not free of controversy, as the instruments in the several satellites making the measurements do not agree on the measured temperatures terribly precisely.

    In the end, nobody really knows the global average temperature of the Earth’s surface in 2011 to within less than around 1 K. Nobody should claim otherwise; it isn’t even clear that we can define the global average temperature in a way that really makes sense. It is also unlikely that our current measurements correspond in any meaningful way to the instrumentation of the 18th and 19th centuries.

    Then there is the use of tree ring reconstructions in place of the best geological proxy reconstructions. Tree rings are not accurate thermometers, and their measurements are biased by their localised environment. Plotting tree ring thicknesses over hundreds of years, there might be a small signature that is thermal in nature, but it is obtained from only a small sample of the terrestrial surface and does not account for the 70% of the Earth’s surface that is covered by the ocean.

    The twentieth century experienced warming. There are good measurements from fairly reliable records from roughly 1975 to the present, but there were lots of things that made the 20th century unique: world wars, nuclear bomb testing resulting in radioactive aerosols, deforestation and other events, and, moreover, a sun that appeared to be far more active than it had been at any point in the direct observational record, and possibly for over 10,000 years. It isn’t clear what normal conditions are for a climate that is perpetually, slowly changing, and yet climate science is very clear indeed that the latter 19th through the 20th centuries were far from normal by the standards of the previous ten or twenty centuries.
    Climatologists have taken a dogmatic stance by claiming to have found a clear anthropogenic global warming signal. Such certainty sits at the outer limits of acceptable practice in critical thinking; treating the hypothesis, with its precise predictions and conclusions, as incontrovertible cannot be sustained.

    THE MODELS AND PREDICTABILITY
    Their solution to a set of coupled non-Markovian Navier-Stokes equations, with a variable external driver and still-unknown feedbacks, in a chaotic regime with known important variability on multidecadal or longer timescales, relies on inaccurate thermal records and dubious dendrochronological proxies.

    Certainly, if the modelled data compares well with previous records, then it is possible the models are correct; but if the modelled data diverges from observed reality, as it has, then their predictions and theory of causation are very likely to be incorrect. Accurately predicting the future isn’t proof that they are right, but failing to predict it is pretty strong evidence that they are wrong.

    Such a comparison fails. It fails to predict or explain the cooling from 1945 to roughly 1965–1970. It fails to predict the Little Ice Age. It fails to predict the Medieval Climate Optimum, or the other periods when the world was as warm as it is today. More to the point, it fails to explain the years of the reliable satellite record, during which there has been no statistically significant increase in temperature. January of 2012 was nearly 0.1 C below the 33-year baseline.

    The models have predicted that temperatures would be considerably warmer, on average, than they appear to be. This is evidence that those models are probably wrong, that some of the unknown variables are important, and that there are incorrect parameters and incorrect feedbacks. How much longer before their entire model is declared fundamentally, badly wrong?

    CATASTROPHE THEORY
    A catastrophic story is widely told to keep people from losing faith in a theory that isn’t working the way that it should. The acceptance of alternative sources of energy as the only way to avoid a so-far imagined catastrophe is a vain and costly exercise of human endeavour.

    When the debate opens up, acknowledges the uncertainties, welcomes contradictory theories, and stops believing in a set of theoretical results as if climate science were some sort of religion, then it might return to science. At this point in the evolution of knowledge about climate phenomena, there is no justification for the policies of minimisation promoted thus far. It is a problem that may well be completely ignorable, and utterly destined to take care of itself long before it ever becomes a real problem.

    Eventually, sheer economics and the advance of physics, technology and engineering will make fossil-fuel-burning electrical generators as obsolete as steam trains. Long before we reach any sort of catastrophe (assuming that CAGW is correct), the supposed proximate cause of the catastrophe will be reversing itself, without anyone doing anything special to bring it about beyond making sensible economic choices.

    In the meantime, one phase of the debate should be lost: science is never “settled”.

    Reference: “Why CAGW theory is not ‘settled science’,” by Dr. Robert Brown, Duke University Physics Department.

  77. @ Max (and Willis)

    “Instead of being modeled as anything approaching the complexity of a natural planetary-scale heat engine which contains a host of self-organized emergent phenomena, it [our planet’s climate] is modeled by a simple one-line equation … and you folks truly believe that? Truly? You really think we can reduce the climate to a one-line equation?”

    It may be quoted out of context, but the climate system is definitely NOT modeled as a one-line equation. Is there a link to the original source?

    “I disagree that there is anything remotely resembling “climate sensitivity”, it is a component of an incorrect and oversimplified understanding of what’s going on.”

    If he means that sensitivity is a function of state and that there are feedbacks, then these are rather trivial points. If he says that the system is nonlinear, that is also trivial. If he doesn’t agree that a small perturbation in forcing leads to a well-defined change in a given climate equilibrium, then he needs to justify that. Had he made it a question, it would have been a good one. As a statement, it requires a difficult defense.

    • “As a statement it requires a difficult defense.”

      Not really. I think you missed the memo. At least three of the GCMs have been tested against a simple one-line emulator based on the linear feedback equation – GISS-E, GFDL and CCSM3. For the input forcings used by the specific model, the one-line equation will reproduce the mean global temperature results from the model with a correlation of better than 0.99. In the case of GISS-E, where OHC data is available, it will simultaneously match temperature and OHC with a similar level of fidelity.

      In all three instances, the climate sensitivity comes out low: 1.3, 1.2 and 1.5 deg C/W/m2 respectively. It is laughable to use the term “well-defined” change in the context of a new climate equilibrium response to a perturbation. In all three GCM cases the stated ECS is far higher than the values I quote above. The reason is that the final ECS values in the models come from NONLINEAR extrapolation of the relationship between the rate of energy gain and the temperature. The temperature change over the instrumental period is too small to reach the non-linear part, so the extrapolation outside this range is not based on any actual data. So the model ECS is effectively either an arbitrary invented number or a value based on the well-founded physics in the model – depending on where you wish to stand on the matter. What it is NOT is a value derived from matching the instrumental data.
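      To make concrete what such a one-line emulator looks like, here is a minimal sketch in Python. The parameter values and the forcing ramp are made up for illustration; a real comparison would use the forcing series actually fed to GISS-E, GFDL or CCSM3.

```python
import numpy as np

# Minimal one-box linear-feedback emulator: C * dT/dt = F(t) - lam * T
# Illustrative parameter values only.
lam = 1.0    # feedback parameter, W m^-2 K^-1
C = 8.0      # effective heat capacity, W yr m^-2 K^-1
dt = 1.0     # time step, years

years = np.arange(1900, 2001)
F = 0.04 * (years - 1900)    # hypothetical forcing ramp reaching 4 W m^-2 by 2000

T = np.zeros_like(F)
for i in range(1, len(F)):
    # explicit Euler step of the linear feedback equation
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

N = F - lam * T    # implied planetary energy imbalance / ocean heat uptake, W m^-2
print(f"temperature anomaly in 2000: {T[-1]:.2f} K, imbalance: {N[-1]:.2f} W m^-2")
```

      The entire “emulator” is the single update line inside the loop; fitting lam and C so that T tracks a given GCM’s own global-mean output is all the matching exercise involves.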

  78. “Priorities for climate change research have moved beyond determining if Earth’s climate is changing and if there is a human cause. The focus is now on understanding how quickly the climate is changing, where the key changes will occur, and what their impacts might be. Climate models are the best available tool for projecting likely climate changes, but they still contain some significant weaknesses.”

    I don’t think so. First, for the topic of climate models the focus should be directed to the solar system and to the physics of the Sun, because there are many significant indications that solar functions are echoed in the global temperatures; and the focus should not be the hysterical view of how quickly the climate is changing, but the causes of the frequencies in the frequency spectrum between monthly and millennial timescales. Because frequencies are locked to geometric dimensions, the models must include sawtooth cycles of many tens of kiloyears as well as non-sinusoidal functions on monthly timescales. Second, traditional climate models have no rudimentary treatment of the global temperature frequencies seen in the proxies, only a mathematical Trojan horse, which is not science in the sense of arriving at a complete physical mechanism. Maybe these are the weaknesses, but they are not explicitly formulated.

    It is not true that traditional climate models are the best available tool for projecting climate changes, because models based on heliocentric astronomical functions have shown that terrestrial climate functions such as the oscillating sea level or the global temperature are phase-locked to a main solar tide function: http://www.volker-doormann.org/images/uah_temp_4_rs.gif. This means nothing more than that solar tide functions which are time-coherent with thermal behaviour on Earth have been demonstrated. Extending the solar tide function to distant objects in the solar system, to frequencies of about 1000 years^-1, known temperature reconstructions (such as F. Ljunqvist 2010) can be verified with that model; moreover, it can predict terrestrial climate for the next 1000 years with monthly time resolution, because the NASA ephemerides are precisely known until 3000 CE.

    V.

  79. Judith Curry says “Climate models do not currently predict many types of extreme weather events…” Actually, climate models do not predict. They “project.” While climatologists commonly conflate the idea that is referenced by “project” with the idea that is referenced by “predict,” the two ideas are distinct.

    A feature of a prediction that is not a feature of a projection is susceptibility to being statistically tested. As they make only projections, modern climate models are not susceptible to being statistically tested; but if we conflate the idea referenced by “projection” with the idea referenced by “prediction,” we can reach the false conclusion that they are susceptible to being statistically tested. It follows from the lack of susceptibility of the IPCC’s models to statistical testing that the methodology of the IPCC’s inquiry into AGW is not a scientific one.

    • Point taken, but seasonal climate models do predict some types of extreme weather events. Climate models of the IPCC variety do simulate some types of extreme weather events, but not others.

  80. I share the view of several of your posters that the GCMs need to be herded up and put back into academia before they do more damage. They are just patently not in a fit state to be used for policy decisions.

    I developed and applied complex dynamic simulators for several decades, before letting my smarter juniors take over. If one of them had come to me with the quality of history match observable in the GCMs and suggested that we take a major capital investment decision on it, I would have shown him the door. Now, we are being asked to take decisions affecting macro-economic health on the basis of pure unadulterated horse manure. Bad data is generally worse for decision-making than no data at all.

    Typically, as a necessary but not sufficient condition for validation, any model must simultaneously match the key observed data (vectors) to some pre-declared acceptable tolerance. With the current state of the art, even the broadest attribution studies are open to question.
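    To make “pre-declared acceptable tolerance” concrete, here is a minimal sketch of such a validation gate; the observable names, data and tolerances are entirely made up for illustration and have nothing to do with any particular GCM.

```python
import numpy as np

def rms_error(model, observed):
    """Root-mean-square misfit between two equally sampled series."""
    model, observed = np.asarray(model, dtype=float), np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((model - observed) ** 2)))

def validate(model_out, observed, tolerances):
    """Pass only if every declared observable is matched within its tolerance."""
    report = {name: rms_error(model_out[name], observed[name]) <= tol
              for name, tol in tolerances.items()}
    return all(report.values()), report

# Entirely made-up example: two observables with arbitrary tolerances.
obs = {"gmst": [0.00, 0.10, 0.20], "ohc": [0.0, 2.0, 4.0]}
run = {"gmst": [0.00, 0.20, 0.50], "ohc": [0.0, 1.5, 4.5]}
tol = {"gmst": 0.10, "ohc": 1.00}

passed, detail = validate(run, obs, tol)
print(passed, detail)   # fails: the gmst misfit exceeds its declared tolerance
```

    The essential point is that the tolerances are declared before the comparison is run, so the gate cannot be tuned after the fact.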

  81. Confidence derives from the theoretical physical basis of the models,
    _______________________

    No, Judith, there is no valid “theoretical physical basis” contained within the models, because such models …

    (a) totally fail because they liken the Earth’s surface to a near perfect blackbody, which it is not

    (b) ignore that the lapse rate depends upon the acceleration due to gravity, not upon water vapour and trace gases

    (c) assume that energy transfers happen when they cannot do so

    (d) ignore the most significant climate stabilisation mechanism altogether

    (e) ignore the absorption of incident IR solar radiation

    and a few more reasons, which you may read about in my writings and in soon-to-be-published work with which the reviewers are well pleased.

    • Gareth Williams

      Doug Cotton,

      I had a quick look at your website, and I am afraid your argument that back-radiation cannot heat the surface of the earth misses the point.

      You would not dispute (I assume) that the mirrored surface of a thermos flask slows the rate at which the coffee cools down. It could not heat the coffee above the temperature it started at. But if there is another source of heat for the coffee (e.g. a heater inside the flask), the back-radiation means that equilibrium will be reached at a higher temperature than if there were no mirrored surface. The back-radiation does not “heat the coffee”, but it does (in those circumstances) result in the coffee being hotter than it would have been without the back-radiation.

      The earth, of course, is heated by sunlight (mostly at visible and infra-red wavelengths). Absorption and back-radiation at infra-red wavelengths result in an equilibrium temperature higher than if the infra-red were not absorbed. That is the greenhouse effect.
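      To put rough numbers on the equilibrium argument, here is a minimal sketch of the standard textbook single-slab “grey atmosphere” toy model; the values are illustrative only, and this is just the energy bookkeeping, not a climate model.

```python
# Toy single-slab greenhouse sketch (illustrative only).
# Assumes an atmosphere transparent to sunlight that absorbs a fraction eps
# of the surface infra-red and re-emits half of it back downward.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S0, albedo = 1361.0, 0.30
absorbed = S0 * (1 - albedo) / 4.0            # ~238 W m^-2, averaged over the sphere

# No infra-red absorption: the surface radiates straight to space.
T_bare = (absorbed / SIGMA) ** 0.25           # ~255 K

# With an absorbing slab: surface balance gives sigma*T^4*(1 - eps/2) = absorbed.
eps = 0.78
T_slab = (absorbed / (SIGMA * (1 - eps / 2.0))) ** 0.25   # ~288 K

print(f"no IR absorption: {T_bare:.0f} K, with absorbing slab: {T_slab:.0f} K")
```

      The heater-in-the-flask analogy maps directly onto this: the solar input plays the heater, and the absorbing slab plays the mirrored wall.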

      As you will see if you read my other comment on this page, I agree there is a lot wrong with AGW theory. But there is a greenhouse effect.

      • If I were writing it, I’d rephrase your last sentence for greater accuracy. As rephrased it would read “Ceteris paribus (other things being equal), there is a greenhouse effect.” Generally, other things are not equal, and thus it cannot be concluded that the equilibrium temperature is higher with absorption and back radiation than without it. This being the case, “the greenhouse effect” is misnamed, for the purported effect of a rise in the equilibrium temperature does not necessarily exist. As the equilibrium temperature is not observable, whether it rises or does not rise cannot be empirically determined; this feature of the “greenhouse effect” places the notion outside science.

      • Gareth, I am sorry, but there is no equilibrium in your thermos. Your coffee will reach equilibrium when it is at room temperature. Delaying the time taken to reach equilibrium does not change the equilibrium state. How one might slow radiation is something for the fourth dimension, since we think radiation travels at the speed of light. The existence of a “greenhouse” effect in the atmosphere is no different from the existence of a “god of the sun/wind/fire/earth” – a device used by humans to explain something they do not understand.

  82. The statistical population underlying the IPCC’s inquiry into AGW does not exist. It follows that: a) IPCC Working Group I’s models can be neither statistically falsified nor statistically validated and b) the IPCC’s inquiry is not a scientific inquiry, though the IPCC represents it to be one.

  83. Volker Doormann | March 7, 2012 at 3:51 am
    “It is not true that traditional climate models are the best available tool for projecting climate changes, because models based on heliocentric astronomical functions have shown that terrestrial climate functions such as the oscillating sea level or the global temperature are phase-locked to a main solar tide function: http://www.volker-doormann.org/images/uah_temp_4_rs.gif.

    This means nothing more than that solar tide functions which are time-coherent with thermal behaviour on Earth have been demonstrated.”

    Update + astronomical model.

    V.

  84. Dr Curry –

    With all due respect, this question makes no sense on at least two fronts.

    1. Models cannot possibly teach us anything. We need to teach the models – i.e., we need to code the model properly so that it can learn from what we know. In this regard, all the model teaches us is that we either have or have not taught it enough.

    2. Until we have seen a model correctly reverse-forecast the past AND correctly forecast the near-term future, it has no standing in science. How IS it that people with their models don’t understand this? ONLY when it can do both can we learn anything from a model. Otherwise it is like trying to learn history from watching Hollywood movies.

    The fact that there are many models out there tells us that no one has created a model that actually works. You know “works”, right? As in, science describes our surroundings thoroughly enough that it lets us predict reliably what will happen under certain circumstances. We could learn one thing from one model and something different or contradictory from another – and how is that useful? If they were both actual science they would not disagree. Until then they are like competing concepts in different researchers’ minds, which are not settled until they are weeded out one by one (perhaps all of them).

    If reliability doesn’t exist, it is not science, and it is not anything to learn from. In other words, with models there is no ‘there’ there.

  85. Judith Curry

    Thanks for re-posting the link to your DOE BERAC slides “What Can We Learn from Climate Models?”.

    After going through these, this is what I have “taken home”:

    Good news:
    GCMs are better than they were: more computer power, more physical processes included, more comparisons with physical observations, good results for regional applications over short time scales

    Bad news:
    too many uncertainties in assumptions, too many “science gaps”, poor prediction capability, overconfidence in results, poor for global applications or over longer time scales, lack of formal model verification and validation (circularity)

    For what purposes are models fit?
    Yes – explore scientific understanding
    Maybe – attribution of past climate changes
    No
    – prediction/attribution of extreme weather events
    – prediction of global climate changes, due to non-predictable factors and abrupt climate changes (“black swans” and “dragon kings”)

    Some conclusions (without going into all the detail):
    – GCMs should be used more for understanding natural (solar) forcing and natural climate variability
    – need a plurality of different GCMs for different purposes
    – GCM-centric approach is not the best for “decision support”

    This presentation clears up a lot of the confusion surrounding the validity of GCMs as they are today in attributing past climate change or predicting (or projecting) future climate or extreme weather trends.

    The slides do not state specifically whether or not GCMs are suitable for estimating climate sensitivity (but they hint that this is not so).

    After reading this presentation, I remain rationally skeptical that GCMs are much better than simple educated guesses in telling us why our climate behaves as it does over longer time periods, or in making projections for the future that have any real significance for “policy makers”.

    Max

    PS If I have misinterpreted or missed out on any critical points, please straighten me out. Thanks

  86. “A tension exists between spending time and resources on V&V, versus improving the model.”

    As a pioneer modeller from way back, I find the above statement hard to digest. Model validation and model improvement go hand in hand. When you find a discrepancy between the simulation of a distinct physical process (a single equation or a group of equations) and the real life historical or experimental data, you are prompted to consider every possibility for resolution. Surely every scientist would have to regard this outcome as an opportunity for new discovery.

    One other point worth making here, for those unfamiliar with the modelling of complex systems, is that modelling is a team activity. It starts with someone who has the right experience in physics, mathematics and modelling developing a conceptual model and writing the equations. He or she is the team leader. A multidisciplinary team is then assembled, consisting of expert scientists in the various fields and computer programmers who will translate the equations into code for efficient and accurate solution on the machine. This will be an unusual computer, because previous attempts have failed owing to inadequate machines, so we should beware of following too closely in the footsteps of Hansen, the Hadley Centre or the IPCC. Learn from them, but do not follow them.

    The model should be designed from the ground up for validation against experimental or real-life historical data. Forget about validation against historical climate at this stage. Validate the model of each physical process separately before attempting the overall climate verification, which is the ultimate measure of success. Because of random processes in the model and in real life, the final validations might only show that the model run outputs belong to the same distributions as real life. Obviously the model must run many times faster than real time, or this latter step will last forever.
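    As a sketch of that final step, one might compare the distribution of a summary output from many model runs against the observed record. The numbers below are synthetic, and the two-sample Kolmogorov-Smirnov test merely stands in for whatever statistic a team would declare in advance.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins: one summary value (e.g. a decadal-mean anomaly) per
# stochastic model run, versus the equivalent values from the observed record.
model_runs = rng.normal(loc=0.15, scale=0.10, size=200)
observed = rng.normal(loc=0.12, scale=0.10, size=15)

stat, p_value = ks_2samp(model_runs, observed)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value would suggest the run outputs and the observations do not
# belong to the same distribution, i.e. the model fails this validation step.
```

    In a real exercise, the choice of statistic and the pass/fail threshold would be fixed before any runs are compared.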

    • Joachim Seifert

      Alexander: What you describe is how it once WAS. Today’s models are faulty in their INPUT, where five macro-forcings are omitted on purpose; instead they tinker with purely mini-forcings. Therefore, as long as they do not straighten out their INPUT, the model OUTPUT cannot be better. Sure, a cook may be able to cook up a storm from low-quality ingredients and the output can still be good, but this does not apply to climate modelling, and the stages you describe will not cook a better stew. JS