by Judith Curry
John and Michel are hypothetical pub owners in Britain and France. Lenny Smith asks the following questions:
Can today’s science tell John what +4 degrees would be like for his pub? Or his insurer? (Or their reinsurer?) Or better still, “climate-proof” his business? Is it a question of mere probabilities? Or might models see a “Big Surprise”? How best to manage Expectations (Theirs) and Credibility (Ours)? Why is this so hard? Should Michel care about global mean temperature?
Lenny Smith is Director of the Centre for the Analysis of Time Series (CATS) at the London School of Economics and Senior Research Fellow in Mathematics at Pembroke College, Oxford. I can count on the fingers of one hand the scientists working on climate whose every paper and talk I avidly read; Lenny Smith is one of those five fingers. His publications are listed [here].
The link to his 4 degrees presentation is [here].
He starts by posing the following questions:
How many physical details can our models miss and still yield useful quantitative decision-relevant information downstream?
How can we tell in the case of a given decision?
What do 4 degree warmer GCMs tell us about a 4 degree warmer Earth?
From his overview slide (JC bold):
There are many, many different 4 degree worlds.
Today’s models appear unlikely to provide quantitative decision-relevant probabilities about details.
Climate science and climate models make it very clear that exploring a 4+ degree world empirically would carry huge ecological, human and economic costs.
The credibility of science is at risk if we fail to communicate our deep uncertainty in the quantitative results provided to decision makers.
Honestly lowering the bar makes applied science more useful, less volatile, and much, much easier to advance.
How might we outline a test for decision-support relevant probabilities from models? And communicate the result to decision makers (and impacts modellers!)?
From his slide titled Schematic of Test For Quantitative Decision Relevance:
Specify the Decision Question in terms of local environmental phenomena that impact it. (“hot dry periods”)
Determine the larger scale “meteorological” phenomena that impact the local. (“blocking”)
Identify all relevant drivers (which are known). (“mountains”)
Pose necessary (NEVER SUFFICIENT) conditions for model output to quantitatively inform prior subjective science-based reflection.
Are local phenomena of today realistically simulated in the model? (If not: are the relevant larger-scale phenomena realistically simulated, to allow “perfect prog”?)
Are all drivers represented (to allow “laws-of-physics” extrapolation)? Are these conditions likely to hold given the end-of-run model-climate?
If one cannot clear these hurdles, whatever scientific value the results may have does not make them of value to decision makers; they can be a detriment.
And claiming they are the “Best Available Information” is both false and misleading.
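One way to make the logic of this test concrete is as a short-circuiting checklist: every condition is necessary, so a single failure disqualifies the model output, while passing all of them still proves nothing. The sketch below is my own illustration, not from the slides; the predicate names and the toy dictionary are hypothetical placeholders a practitioner would replace with real diagnostics for their own decision question.

```python
# A minimal sketch, not from the slides: the necessary-conditions test
# as a short-circuiting checklist. The predicate names and the toy
# `gcm_checks` dict are hypothetical placeholders for real diagnostics.

def passes_necessary_conditions(checks: dict) -> bool:
    """True only if every NECESSARY condition holds; passing them all
    still says nothing about sufficiency."""
    required = [
        "simulates_local_phenomena_today",   # e.g. hot dry periods
        "simulates_larger_scale_phenomena",  # e.g. blocking (for "perfect prog")
        "represents_all_known_drivers",      # e.g. mountains
        "conditions_hold_at_end_of_run",     # still true in the model's 4-degree climate?
    ]
    return all(checks.get(name, False) for name in required)

gcm_checks = {
    "simulates_local_phenomena_today": True,
    "simulates_larger_scale_phenomena": True,
    "represents_all_known_drivers": False,   # a known driver is missing
    "conditions_hold_at_end_of_run": True,
}

# One failed necessary condition disqualifies the output as
# quantitative decision support.
print(passes_necessary_conditions(gcm_checks))  # False
```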
From his slide titled: How clear is our vision of 4 degree worlds?
Our models sample an ill-defined mathematical space of bland worlds, similar to the Earth but systematically less rich: mere abstractions.
Where implementation details matter (in distribution) in those model-worlds, we have no rational way to interpret ensembles as probabilities.
Can climate science suggest the space and time scales, as a function of lead time, on which we can make arguably robust statements or “decision-relevant probabilities”?
How do we communicate this insight to decision makers? Is there a better approach than quantifying Prob(Big Surprise)?
How do we explore methodologies without misleading decision makers?
On Why is Decision Support Hard?
Most decisions depend neither on “average meteorological variables” nor on the “standard deviation of the average weather”; they depend on the trajectory.
As impacts are nonlinear, we have to evaluate them along trajectories: crops, cables, wind energy and system failures depend on which weather events unfold, and even on when (a toy illustration follows below).
We need to communicate whether or not we believe current models can provide robust, relevant and informative quantitative information on decision relevant distributions:
Prob(Big Surprise)
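To see why trajectories rather than averages carry the decision-relevant information, here is a toy example of my own (not from the slides): an invented nonlinear impact function in which damage accrues only on days above a temperature threshold. Two synthetic trajectories with nearly identical means produce entirely different impacts.

```python
# A minimal sketch with invented numbers: a toy nonlinear impact function
# (quadratic loss above a 30 C threshold) evaluated along two synthetic
# temperature trajectories. Nearly identical means, very different impacts.

import statistics

THRESHOLD = 30.0

def impact(trajectory):
    """Toy nonlinear loss: quadratic in each day's exceedance of the threshold."""
    return sum(max(t - THRESHOLD, 0.0) ** 2 for t in trajectory)

steady = [28.0] * 10                 # never crosses the threshold
spiky = [25.0] * 8 + [40.0, 43.0]    # same length, with a hypothetical heat spike

print(statistics.mean(steady), statistics.mean(spiky))  # 28.0 vs 28.3
print(impact(steady), impact(spiky))                    # 0.0 vs 269.0
```

Since the loss is zero for one trajectory and large for the other despite near-identical averages, statistics of “average weather” alone cannot price the risk.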
More words of wisdom plucked from the slides:
On what space and time scales do we have (robust) climate information? Today’s state-of-the-art models are better than ever before.
(The usual numerical arguments require much larger scales than the model’s grid, at least!)
Before using phrases like “based on the Laws of Physics” to defend high-resolution predictions, we might check (quantitatively) for internal consistency.
Or better: find necessary (not sufficient) conditions for this model to contain decision relevant information.
On “Big Surprises” (BS):
Big Surprises arise when something our models cannot mimic turns out to have important implications for us.
Climate science can (sometimes) warn us where those who use naïve (if complicated) model-based probabilities will suffer a Big Surprise.
(Science can warn of “known unknowns” even when the magnitude is not known)
Big Surprises invalidate (not update) the foundations of model-based probability forecasts. (Arguably “Bayes” does not apply, nor the probability calculus.)
Failing to highlight model inadequacy is likely to lead to a loss of credibility.
Including information on Prob(BS) in every case study allows the use of probabilities conditioned on the model (class) being fit for purpose, without believing that it is (or appearing to suggest that others should act as if they do!).
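As a numerical illustration of my own (the numbers are invented, not Smith’s): once a non-zero Prob(BS) is admitted, a model-conditioned probability no longer pins down the unconditional one; by the law of total probability it only bounds it.

```python
# A minimal sketch with invented numbers: admitting Prob(BS) turns a
# model-conditioned probability into honest bounds on the unconditional one.

p_event_given_fit = 0.05  # model-based probability, conditional on adequacy
p_big_surprise = 0.20     # hypothetical Prob(BS): model class not fit for purpose

# If the model class is inadequate, the event probability is simply unknown
# (anywhere in [0, 1]), so total probability yields only an interval.
lower = p_event_given_fit * (1 - p_big_surprise) + 0.0 * p_big_surprise
upper = p_event_given_fit * (1 - p_big_surprise) + 1.0 * p_big_surprise

print(f"unconditional probability lies in [{lower:.2f}, {upper:.2f}]")  # [0.04, 0.24]
```

Quoting the interval rather than the bare conditional number is one way to use the model’s probabilities without appearing to suggest that anyone should act as if the model were known to be adequate.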
