by Judith Curry
I think I am gaining some insight into the debate between scientists and engineers over climate model verification and validation.
For background, the previous relevant posts at Climate Etc. are:
- The uncertainty monster
- The culture of building confidence in climate models
- Climate model verification and validation
Of particular relevance is the post Climate model verification and validation, where the starkly different perspectives of Steve Easterbrook and Dan Hughes are contrasted.
Steve Easterbrook’s new post
Steve Easterbrook has a new post entitled “Formal verification for climate models?” His post starts out with this question:
Valdivino, who is working on a PhD in Brazil, on formal software verification techniques, is inspired by my suggestion to find ways to apply our current software research skills to climate science. But he asks some hard questions:
1.) If I want to Validate and Verify climate models should I forget all the things that I have learned so far in the V&V discipline? (e.g. Model-Based Testing (Finite State Machine, Statecharts, Z, B), structural testing, code inspection, static analysis, model checking)
2.) Among all V&V techniques, what can really be reused / adapted for climate models?
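To make the techniques Valdivino names concrete, here is a minimal, hypothetical illustration (my own toy example, not from either blog) of the flavor of finite-state-machine model checking: exhaustively exploring a tiny controller's state space and asserting a safety property over every transition.

```python
from collections import deque

# Hypothetical 3-state controller model; states and events are invented
# purely to illustrate exhaustive state-space exploration.
transitions = {
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}

def reachable(initial):
    """Breadth-first exploration of every state reachable from `initial`."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions[state].values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable("idle")

# Safety property: "paused" is only ever entered from "running".
assert all(
    tgt != "paused" or src == "running"
    for src, moves in transitions.items()
    for tgt in moves.values()
)
print(sorted(states))
```

This works because the model is tiny and fully enumerable; the open question in Valdivino's second point is which, if any, of these techniques scale to a model whose "specification" is the climate system itself.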
Excerpts from Easterbrook’s response:
Climate models are built through a long, slow process of trial and error, continually seeking to improve the quality of the simulations (See here for an overview of how they’re tested). As this is scientific research, it’s unknown, a priori, what will work, what’s computationally feasible, etc. Worse still, the complexity of the earth systems being studied means its often hard to know which processes in the model most need work, because the relationship between particular earth system processes and the overall behaviour of the climate system is exactly what the researchers are working to understand.
Which means that model development looks most like an agile software development process, where both the requirements and the set of techniques needed to implement them are unknown (and unknowable) up-front. So they build a little, and then explore how well it works.
Now, as we know, agile software practices aren’t really amenable to any kind of formal verification technique. If you don’t know what’s possible before you write the code, then you can’t write down a formal specification (the ‘target skill levels’ in the chart above don’t count – these are aspirational goals rather than specifications). And if you can’t write down a formal specification for the expected software behaviour, then you can’t apply formal reasoning techniques to determine if the specification was met.
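Even without a full formal specification, lighter-weight verification is possible: one can test invariants that the code must satisfy regardless of what the "right answer" is. As a hypothetical sketch (my own toy example, not from Easterbrook's post), an upwind advection kernel on a periodic grid should conserve total mass to round-off, whatever the input field looks like:

```python
import numpy as np

def upwind_advect(q, u, dx, dt):
    """One upwind finite-difference step of dq/dt + u*dq/dx = 0 on a
    periodic grid (u > 0 assumed). A toy stand-in for a model kernel."""
    c = u * dt / dx                       # Courant number (must be < 1)
    return q - c * (q - np.roll(q, 1))    # periodic upwind difference

# Invariant check: total "mass" sum(q)*dx should be conserved exactly
# (up to round-off) on a periodic domain, for arbitrary initial data.
rng = np.random.default_rng(0)
q = rng.random(200)
dx, dt, u = 0.01, 0.004, 1.0

mass_before = q.sum() * dx
for _ in range(500):
    q = upwind_advect(q, u, dx, dt)
mass_after = q.sum() * dx

assert abs(mass_after - mass_before) < 1e-10
```

Checks like this verify that the code faithfully implements the discretized equations; they say nothing about whether the equations are the right ones, which is where the validation half of V&V comes in.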
Dan Hughes responds
Dan Hughes posted a comment at Easterbrook’s blog, which was censored. Hughes posted his entire comment on his own blog, “Censored at Serendipity”; some excerpts:
Steve Easterbrook at Serendipity has censored a comment that I made.
The following was cut by Steve Easterbrook
Instead of bold, un-supported, mis-characterizations of the material, from a position of authority, would you point to a single aspect of either the material that I have posted, or in either of the books by Roache et al., that is not appropriate to scientific and engineering software. Or, have I mis-read the title of the books.
I find your assertions completely at odds with all the material in those books, and in the hundreds of other peer-reviewed papers and reports that have been written on V&V of engineering and scientific software over the past two and a half decades.
You have once again clearly illustrated the lack of knowledge of the state-of-the-art of independent V&V and SQA that has been successfully applied to a wide range of engineering and scientific software that pervades the Climate Science Community.
Thank you for your very professional and deeply considered response. Over the period of time that several of us have attempted constructive discussion here, you have yet to provide a single example of the lack of appropriateness of our peer-reviewed sources. Not a single example. You provide only meaningless arm waving from a position of authority.
Dan Hughes pointed me to the following paper:
A Comprehensive framework for verification, validation, and uncertainty quantification in scientific computing
Christopher J. Roy and William L. Oberkampf
Abstract. An overview of a comprehensive framework is given for estimating the predictive uncertainty of scientific computing applications. The framework is comprehensive in the sense that it treats both types of uncertainty (aleatory and epistemic), incorporates uncertainty due to the mathematical form of the model, and it provides a procedure for including estimates of numerical error in the predictive uncertainty. Aleatory (random) uncertainties in model inputs are treated as random variables, while epistemic (lack of knowledge) uncertainties are treated as intervals with no assumed probability distributions. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are briefly discussed. Numerical approximation errors (due to discretization, iteration, and computer round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainty is quantified using (a) model validation procedures, i.e., statistical comparisons of model predictions to available experimental data, and (b) extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented. The different steps in the predictive uncertainty framework are illustrated using a simple example in computational fluid dynamics applied to a hypersonic wind tunnel.
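The segregation the abstract describes can be sketched in a few lines. In this hypothetical toy model (my own illustration, not code from the paper), an aleatory input is sampled as a random variable while an epistemic parameter is swept over its interval, so the output is a family of distributions rather than a single collapsed one:

```python
import numpy as np

def response(x, theta):
    """Hypothetical system response: x is an aleatory input,
    theta an epistemic model parameter."""
    return theta * x**2 + x

rng = np.random.default_rng(42)
# Aleatory: a random variable, propagated by Monte Carlo sampling.
x_samples = rng.normal(loc=1.0, scale=0.1, size=10_000)

# Epistemic: an interval with no assumed distribution. Sweeping the
# endpoints suffices here because the toy response is monotone in theta.
theta_interval = (0.8, 1.2)

# Propagate: one output distribution per epistemic value. The spread
# between the resulting distributions (a "p-box") is what gets shown to
# the decision maker, rather than being averaged away.
means = [response(x_samples, theta).mean() for theta in theta_interval]
print(f"mean response lies in [{min(means):.3f}, {max(means):.3f}]")
```

The key design choice, per Roy and Oberkampf, is that the epistemic interval is never converted into a probability distribution: lack of knowledge is reported as a range, not as assumed randomness.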
Comput. Methods Appl. Mech. Engrg. 200 (2011) 2131–2144
Link to entire paper [here].
This is a very good paper, which is broadly consistent with what I have been proposing regarding uncertainty determination and V&V. One note re vocabulary: aleatory uncertainty is the same as ontic uncertainty discussed in my uncertainty lexicon.
The paper’s conclusions highlight the new elements of their approach, which I think are particularly relevant for climate models:
The framework for verification, validation, and uncertainty quantification (VV&UQ) in scientific computing presented here represents a conceptual shift in the way that scientific and engineering predictions are performed and presented to decision makers. The philosophy of the present approach is to rigorously segregate aleatory and epistemic uncertainties in input quantities, and to explicitly account for numerical solution errors and model form uncertainty directly in terms of the predicted system response quantities of interest. In this way the decision maker is clearly and unambiguously shown the uncertainty in the predicted quantities of interest. For example, if the model has been found to be inaccurate in comparisons with relevant experimental data, the decision maker will starkly see this in any new predictions; as opposed to the approach of immediately incorporating newly obtained experimental data for system responses into the model by way of re-calibration of model parameters. We believe our proposed approach to presenting predictive uncertainty to decision makers is needed to reduce the tendency of underestimating predictive uncertainty, especially when large extrapolation of models is required. We believe that with this clearer picture of the uncertainties, the decision maker is better served. This approach is particularly important for predictions of high-consequence systems, such as those where human life, the public safety, national security, or the future of a company is at stake.
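One concrete verification step in the framework, estimating discretization error, can be illustrated with Richardson extrapolation, a standard technique from this literature (the toy "model" below is my own stand-in, not an example from the paper). Solutions on three systematically refined grids yield an observed order of accuracy and an estimate of the remaining numerical error:

```python
import numpy as np

def solve(n):
    """Toy 'simulation': composite trapezoid-rule integral of sin(x) on
    [0, pi], standing in for a model output whose discretization error
    shrinks as the grid is refined (second order for this scheme)."""
    h = np.pi / n
    y = np.sin(np.linspace(0.0, np.pi, n + 1))
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * h

# Three grids with refinement ratio r = 2.
f_coarse, f_medium, f_fine = solve(50), solve(100), solve(200)
r = 2.0

# Observed order of accuracy p from the three solutions...
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
# ...and a Richardson estimate of the remaining error on the fine grid,
# which the framework would then fold in as an epistemic uncertainty.
err_estimate = (f_fine - f_medium) / (r**p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"fine-grid error estimate = {err_estimate:.2e}")
```

If the observed order matches the scheme's formal order (here, about 2), that is evidence the code is solving its equations correctly; a mismatch flags a coding or resolution problem before any comparison with observations is attempted.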
JC comments: If climate models were built only for science qua science, to support the academic research programs of the people building the models, then Steve Easterbrook would have a valid point. However, climate models (particularly the NCAR Community Climate Models) are built as tools for community research, and these models are used for a wide variety of scientific research problems. Further, these climate models are used in the IPCC assessment process, which reports to the UNFCCC, and in national-level policy making as well. Greater accountability is needed not only because of the policy applications of these models, but also to support the range of scientific applications of these models by people who were not involved in building them.
So count me in the camp of Dan Hughes, Joshua Stults and William Oberkampf on this one. I have been trying to make some headway to raise the standards of climate model V&V, through the NASA and DOE committees that I serve on. I am cautiously optimistic that the relevant people (i.e. the ones in charge of the climate $$) might be prepared to listen to these arguments.