by Judith Curry
The new International Journal for Uncertainty Quantification has some very interesting papers. Let's take a look at a paper entitled ‘Error and Uncertainty Quantification and Sensitivity Analysis in Mechanics Computational Models.’
Error and Uncertainty Quantification and Sensitivity Analysis in Mechanics Computational Models.
Bin Liang and Sankaran Mahadevan
Abstract. Multiple sources of errors and uncertainty arise in mechanics computational models and contribute to the uncertainty in the final model prediction. This paper develops a systematic error quantification methodology for computational models. Some types of errors are deterministic, and some are stochastic. Appropriate procedures are developed to either correct the model prediction for deterministic errors or to account for the stochastic errors through sampling. First, input error, discretization error in finite element analysis (FEA), surrogate model error, and output measurement error are considered. Next, uncertainty quantification error, which arises due to the use of sampling-based methods, is also investigated. Model form error is estimated based on the comparison of corrected model prediction against physical observations and after accounting for solution approximation errors, uncertainty quantification errors, and experimental errors (input and output). Both local and global sensitivity measures are investigated to estimate and rank the contribution of each source of error to the uncertainty in the final result. Two numerical examples are used to demonstrate the proposed methodology by considering mechanical stress analysis and fatigue crack growth analysis.
Citation: International Journal for Uncertainty Quantification, 1 (2): 147–161 (2011). [Link] to complete paper online.
The paper is quite readable, and well worth reading. From the Introduction:
The motivation of this paper is to develop a methodology that provides quantitative information regarding the relative contribution of various sources of error and uncertainty to the overall model prediction uncertainty. Such information can guide decisions regarding model improvement (e.g., model refinement, additional data collection) so as to enhance both accuracy and confidence in the prediction. The information sought is in the form of rankings of the various errors and uncertainty sources that contribute to the model prediction uncertainty. It is more advantageous to spend resources toward reducing an error with a higher ranking than one with a lower ranking. The rankings are based on systematic sensitivity analysis, which is possible only after quantifying the effect of each error source on the model prediction uncertainty.
The error in a computational model prediction consists of two parts: model form error (ε_model) and solution approximation error, or numerical error (ε_num). The model form error depends on whether the selected model correctly represents the real phenomenon (e.g., small deformation versus large deformation model, linear elastic versus elastoplastic model, or Euler versus Navier-Stokes equations). The solution approximation error arises when numerically solving the model equations. In other words, the model form error is related to the question "Did I choose the correct equation?", which is answered using validation experiments, while the solution approximation error is related to the question "Did I solve the equation correctly?", which is answered through verification studies.
Note that the numerical error depends on the choice of the model form; thus, the two errors are not independent. The numerical error ε_num is a nonlinear combination of various components. This paper first considers three typical numerical error components and their quantification and combination: input error, discretization error in FEA, and surrogate model error.
One concern in this paper is how to obtain a model prediction y_c corrected for numerical error sources. Among all errors, some errors are stochastic, such as input error and surrogate model error, and some errors are deterministic, such as discretization error in FEA. In this paper, a simple but efficient approach is developed to obtain y_c. The basic idea is to quantify and correct for each error where it arises. Stochastic error is corrected for by adding its randomly sampled values to the original result. Deterministic error is corrected for by directly adding it to the corresponding result. For example, to correct for the discretization error, every time a particular FEA result is obtained, the corresponding discretization error is calculated, added to the original result, and the corrected FEA result is used in further computation to obtain y_c.
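To make the correction idea concrete, here is a minimal Python sketch, not the paper's actual algorithm. The discretization error is estimated by Richardson extrapolation from a fine- and a coarse-mesh result (one common estimator; the paper's may differ), and the stochastic errors are assumed zero-mean Gaussian with invented magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrected_prediction(y_fine, y_coarse, r=2, p=2,
                         input_err_std=0.05, surrogate_err_std=0.02):
    """One sample of a corrected prediction y_c (illustrative only).

    Discretization error: Richardson extrapolation from a fine- and a
    coarse-mesh FEA result (refinement ratio r, convergence order p);
    deterministic, so it is added directly. Input and surrogate errors:
    assumed zero-mean Gaussian and, for brevity, lumped at the output
    (in the paper, input error enters at the model input).
    """
    eps_disc = (y_fine - y_coarse) / (r**p - 1)   # deterministic correction
    y = y_fine + eps_disc
    y += rng.normal(0.0, input_err_std)           # stochastic: one sampled value
    y += rng.normal(0.0, surrogate_err_std)
    return y

# repeated calls build up the distribution of the corrected prediction y_c
samples = np.array([corrected_prediction(10.3, 10.9) for _ in range(2000)])
print(samples.mean(), samples.std())
```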
In addition to the model form and solution approximation errors mentioned above, another error arises due to Monte Carlo sampling used in the error quantification procedure itself. This error is referred to here as uncertainty quantification (UQ) error. For example, when estimating the cumulative distribution function (CDF) of a random variable from sparse data, there is error in the CDF value, and methods to quantify this UQ error are available.
If more samples are then generated by the inverse CDF method, using the CDF estimated from sparse data, the UQ error is propagated as sampling error to the newly generated samples. An approach is developed in Section 3 to quantify this sampling error. This method is particularly useful in quantifying model form error (Section 4).
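The Section 3 procedure is not reproduced in the excerpt, but the flavor of carrying UQ error forward can be sketched. The sketch rests on a standard order-statistics fact: for sorted data x_(1) ≤ … ≤ x_(n), the true CDF value at x_(i) is Beta(i, n−i+1) distributed, so each new inverse-CDF sample is drawn from a randomly perturbed CDF rather than a fixed one. Everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def resample_with_uq_error(data, n_new):
    """Inverse-CDF resampling that carries CDF-estimation (UQ) error forward.

    For each new sample, one realization of the uncertain CDF values is
    drawn (Beta(i, n - i + 1) at the i-th order statistic), then inverted
    by linear interpolation. An illustrative stand-in, not the paper's method.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    out = np.empty(n_new)
    for k in range(n_new):
        f = np.sort(rng.beta(i, n - i + 1))   # one realization of the uncertain CDF
        u = rng.uniform(f[0], f[-1])
        out[k] = np.interp(u, f, x)           # inverse CDF by interpolation
    return out

sparse = rng.normal(size=8)                   # sparse data: only 8 observations
print(resample_with_uq_error(sparse, 5))
```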
After a probabilistic framework to manage all sources of uncertainty and error is established, sensitivity analyses are performed in Section 5 to assess the contribution of each source of uncertainty and error to the overall uncertainty in the corrected model prediction. The sensitivity analysis result can be used to guide resource allocation for different activities, such as model fidelity improvement and data collection, according to the importance ranking of errors, in order to trade off between accuracy and computational/experimental effort.
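As an illustration of the global part of such an analysis (the paper uses both local and global measures), here is a sketch of a first-order Sobol index estimator in the Saltelli pick-and-freeze form. Treating stochastic error terms as extra input columns, as done here, is one way to rank them alongside physical inputs; the toy model and magnitudes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def first_order_sobol(model, draw, n=20_000):
    """First-order Sobol indices via the Saltelli-style estimator
    S_j = E[yB * (yAB_j - yA)] / Var(y).  `draw(n)` must return an
    (n, d) array of independent draws over inputs and error terms.
    """
    A, B = draw(n), draw(n)
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(A.shape[1])
    for j in range(A.shape[1]):
        AB = A.copy()
        AB[:, j] = B[:, j]                 # replace column j only
        S[j] = np.mean(yB * (model(AB) - yA)) / var_y
    return S

# toy model: two physical inputs plus a stochastic "error" term as column 2
draw = lambda n: rng.normal(size=(n, 3)) * np.array([1.0, 1.0, 0.3])
model = lambda X: X[:, 0] + 0.5 * X[:, 1] + X[:, 2]
print(first_order_sobol(model, draw))      # ranks the three sources
```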
The contributions of this paper can be summarized as follows:
1. A systematic methodology for error and uncertainty quantification and propagation in computational mechanics models is developed. Previous literature has developed methods to quantify the discretization error and to propagate input randomness through computational models. However, the combination of various error and uncertainty sources is not straightforward: some are additive, some multiplicative, some nonlinear, and some even nested; also, some errors are deterministic and some are stochastic (a toy sketch of one such combination follows this list). The methodology in this paper provides a template to track the propagation of various error and uncertainty sources through the computational model.
2. The error and uncertainty quantification methodology itself has an error due to the use of a limited number of samples when large finite element models are used; therefore, this UQ error is quantified. A methodology is also proposed in this paper to quantify the propagation of this UQ error through further calculations in model prediction and assessment, and it is used in the quantification of model form error.
3. Sensitivity analysis methods are developed to identify the contribution of each error and uncertainty source to the overall uncertainty in the model prediction. Previous literature in global sensitivity analysis has only considered the effect of input random variables, and this paper extends the methodology to include data uncertainty and model errors. The sensitivity information is helpful in identifying the dominant contributors to model prediction uncertainty and in guiding resource allocation for model improvement. The sensitivity analysis method is further enhanced to compare the contributions of both deterministic and stochastic errors on the same level, in order to facilitate model improvement decision making.
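A toy illustration of the combination problem in item 1, with all error forms and magnitudes invented for the example: input error enters nested (before a hypothetical surrogate g(x) = x² is evaluated), surrogate error is taken as multiplicative, and the discretization correction is deterministic and additive:

```python
import numpy as np

rng = np.random.default_rng(3)

def propagate(x_obs, n_mc=5000):
    """Toy template for combining heterogeneous error sources.

    All magnitudes are made up. Input error is nested (it enters before
    the surrogate is evaluated), surrogate error is multiplicative, and
    the discretization correction is deterministic and additive.
    """
    x = x_obs + rng.normal(0.0, 0.05, n_mc)        # nested additive input error
    y = x**2                                       # surrogate evaluation g(x)
    y *= 1.0 + rng.normal(0.0, 0.02, n_mc)         # multiplicative surrogate error
    return y + 0.01                                # deterministic discretization term

y_c = propagate(1.0)
print(y_c.mean(), y_c.std())                       # distribution of corrected prediction
```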
JC comments: This paper tackles the whole problem of uncertainty quantification in dynamical models in an integrated way. It is rare to see someone trying to quantify model functional form error (although I am not sure how this would work for climate models). I also like the global sensitivity analysis approach. I am not at all sure about the proposed method for actually using the error analysis to correct the model prediction. I look forward to discussion on this.
UQ in climate models
I am starting to hear the term ‘UQ’ in the context of climate models, which is a really good development IMO:
- The University of Minnesota Department of Computer Science and Engineering has an NSF-funded program on uncertainty quantification in climate models.
- SAMSI has a program on uncertainty quantification in climate modeling.
- Oak Ridge National Laboratory is taking this on, as per personal communication with Richard Archibald.
This Fact Sheet from ORNL lays it all out:
Complex simulation systems, ranging from the suite of global climate models used by the Intergovernmental Panel on Climate Change (IPCC) for informing policy-makers to the agent-based models of social processes which may be used for informing military commanders and strategic planners, typically cannot be evaluated using standard techniques in mathematics and computer science. Uncertainties in these systems arise from intrinsic characteristics which include complex interactions, feedback, nonlinearity, thresholds, long-range dependence and space-time variability, as well as our lack of understanding of the underlying processes compounded by the complicated noise and dependence structures in observed or simulated data.

The complicated dependence structures, interactions and feedback rule out the use of mathematical or statistical methods on individual differential equations or state transition rules. The direct application of computational data science approaches like spatial or spatio-temporal data mining is limited by the nonlinear interactions and complex dependence structures. The complexity of the systems precludes multiple model runs to comprehensively explore the space of input data, random model parameters, or key component processes. The importance of extreme values and space-time variability, as well as the ability to produce surprising or emergent behavior and retain predictive ability, further complicates the systematic evaluation and uncertainty quantification. The area of verification and validation within modeling and simulation is not yet equipped to handle these challenges.

We have developed a set of tools, which range from pragmatic applications of the state of the art to new adaptations of recent methodologies and all the way to novel approaches, for a wide variety of complex systems. These include systematic evaluation of social processes and agent-based simulations for an activity funded by DARPA/DOD, multi-scale uncertainty characterization and reduction for simulations from the IPCC suite of global climate models for research funded by ORNL/DOE and DOD, as well as high-resolution population modeling funded by multiple agencies. The set of tools and methodologies, which represents an interdisciplinary blend of mathematics, statistics, signal processing, nonlinear dynamics, computer science, and operations research, has demonstrated wide applicability over multidisciplinary simulation systems.
Sounds like this is exactly what is needed for climate models. I hope this project is well supported and is successful.
Moderation note: this is a technical thread and comments will be moderated for relevance.