by Judith Curry
“So when you take uncertainty into account, it actually leads to the decision that we should take action more quickly.”
I first spotted the statement in the Discover Magazine interview with Michael Mann and me. I thought it had to be a typo or misquote (note, Mann said this, not me :))
In discussing the Discover Magazine article over at Collide-a-Scape (I don’t recall which thread), I quickly found out that this was no typo or misquote. I was informed that this statement came from a recent, highly touted paper by Martin Weitzman entitled “On modeling and interpreting the economics of catastrophic climate change.” Abstract below:
Abstract. With climate change as prototype example, this paper analyzes the implications of structural uncertainty for the economics of low-probability high-impact catastrophes. Even when updated by finite Bayesian learning, uncertain structural parameters induce a critical “tail fattening” of posterior-predictive distributions. Such fattened tails have strong implications for situations, like climate change, where a catastrophe is theoretically possible because prior knowledge cannot place sufficiently narrow bounds on overall damages. The core problem is learning extreme-impact tail probabilities from finite data. Fat-tailed structural uncertainty, along with great unsureness about high-temperature damages, can outweigh discounting in climate-change economics.
I was familiar with that paper, and didn’t recall that particular conclusion. I was then pointed to a NYTimes Magazine article by Paul Krugman entitled “Building a Green Economy.”
Now, despite the high credibility of climate modelers, there is still tremendous uncertainty in their long-term forecasts. But as we will see shortly, uncertainty makes the case for action stronger, not weaker. So climate change demands action. Is a cap-and-trade program along the lines of the model used to reduce sulfur dioxide the right way to go?
Then near the end of the article (page 8):
Finally and most important is the matter of uncertainty. We’re uncertain about the magnitude of climate change, which is inevitable, because we’re talking about reaching levels of carbon dioxide in the atmosphere not seen in millions of years. The recent doubling of many modelers’ predictions for 2100 is itself an illustration of the scope of that uncertainty; who knows what revisions may occur in the years ahead. Beyond that, nobody really knows how much damage would result from temperature rises of the kind now considered likely.
You might think that this uncertainty weakens the case for action, but it actually strengthens it. As Harvard’s Martin Weitzman has argued in several influential papers, if there is a significant chance of utter catastrophe, that chance — rather than what is most likely to happen — should dominate cost-benefit calculations. And utter catastrophe does look like a realistic possibility, even if it is not the most likely outcome.
Weitzman argues — and I agree — that this risk of catastrophe, rather than the details of cost-benefit calculations, makes the most powerful case for strong climate policy. Current projections of global warming in the absence of action are just too close to the kinds of numbers associated with doomsday scenarios. It would be irresponsible — it’s tempting to say criminally irresponsible — not to step back from what could all too easily turn out to be the edge of a cliff.
So there it is, from Nobel laureate Paul Krugman. Joe Romm has “translated” all this for us in several posts. Here is Krugman’s response to the book “Superfreakonomics” (note: I haven’t read the book):
Yikes. I read Weitzman’s paper, and have corresponded with him on the subject — and it’s making exactly the opposite of the point they’re implying it makes. Weitzman’s argument is that uncertainty about the extent of global warming makes the case for drastic action stronger, not weaker.
Ok, so Krugman is apparently accurately representing what Weitzman meant. I guess my poor little brain had a difficult time letting that Nobel-level economics pass through its filter when I read Weitzman’s paper.
So, let’s think about some of the perhaps unintended implications of this statement. Two implications that jump immediately into my mind are:
1. The accusation made against the “merchants of doubt” is that they talk about uncertainty so as to delay action. So, are we now to infer that the merchants of doubt are climate policy action’s “best friends”?
2. Consider a potential asteroid strike: far greater potential economic impact, and also far greater uncertainty, than climate change. Is the implication that we should be focusing more on a potential asteroid strike than on potentially catastrophic climate change?
Neither Romm nor Krugman discusses these points :)
Insights from Jonathan Gilligan
Over at Collide-a-Scape, Professor Jonathan Gilligan has a very insightful post entitled “Why U.S. Climate Policy is Radioactive”. This post addresses uncertainty and risk, and compares the climate change issue to policy issues surrounding nuclear waste disposal in Yucca Mountain. The whole thread including comments is well worth reading. Some excerpts from the main post:
As Daniel Sarewitz pointed out years ago, in both climate politics and nuclear waste politics, policymakers have tended to “scientize” the issue by acting as though greater scientific certainty would solve problems that were fundamentally political. No advances in earth science, hydrology, materials science, or engineering will do much to reduce our uncertainties about how spent nuclear fuel will behave underground over the course of tens or hundreds of millennia. Neither do I think it likely that advances in climate science will give us great certainty about exactly how bad global warming will be over the coming centuries.
So what are the lessons [from Yucca Mountain that are relevant to climate change]? . . . In both cases, connecting policy action to scientific certainty was likely a bad tactical mistake. In both cases, there is substantial uncertainty about the things we most care about and in fact, in the case of climate change, Martin Weitzman’s Dismal Theorem concludes that calculations of the expected economic cost of climate change are dominated by the mathematical details of the low-probability/catastrophic-consequence tail of the probability distribution. (Weitzman’s theorem is controversial, but the controversy is over the mathematical form he chooses for the tail of the probability distribution.)
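In stylized form, the mechanism behind the Dismal Theorem that Gilligan refers to can be sketched as follows (the notation here is illustrative, not Weitzman’s own):

```latex
% Welfare is evaluated with CRRA utility over consumption c:
U(c) = \frac{c^{1-\eta}}{1-\eta}, \qquad \eta > 1,
\qquad U'(c) = c^{-\eta} \to \infty \quad \text{as } c \to 0.
% If structural uncertainty leaves the distribution f(c) of post-damage
% consumption with a sufficiently fat (polynomial) tail near c = 0,
% the expected marginal utility diverges:
E\!\left[U'(c)\right] \;=\; \int_0^{\infty} c^{-\eta} f(c)\, dc \;=\; +\infty.
```

The willingness to pay to trim the catastrophic tail is then unbounded, which is the sense in which the expected-cost calculation is “dominated by the mathematical details of the low-probability/catastrophic-consequence tail,” and also why the controversy centers on the assumed mathematical form of that tail.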
I actually found Gilligan’s comments in the discussion part of the thread to be the most relevant to the topic at hand on this thread:
For normally distributed risks, the probability falls off fast enough in the tails that you can ignore the really low-probability/high-consequence events. But when the tails of the distribution are fat (i.e., when they fall off a lot more slowly than exp(-x^2), and especially if they fall off only polynomially) then the expected cost of the risk is dominated by the extreme of the low-probability/high-risk tail. First, uncertainty about climate sensitivity is fundamentally asymmetric, both because of the mathematical form of the feedback function and also because there’s more data to constrain the low-sensitivity side of the distribution. Second, there’s also a lot of uncertainty about the economic damage function for a given temperature change. These combine to give good reason to believe that not only is our uncertainty about the cost of climate change very asymmetrically distributed, but that uncertainties about enormously catastrophic damage have fat tails, possibly fat enough to dominate any calculation of expected value. This is similar to the argument Benoit Mandelbrot and Nassim Taleb made about Mandelbrot’s observation that fluctuations in markets for shares, futures, and commodities are not normally distributed but have fat tails: this means that standard risk-management practices (e.g., stress-testing portfolios) will fail to account properly for extremely unlikely events. My argument is that even if there is probably no cliff, there is still enough chance of a cliff that it’s foolish to wander around blindly. . . It all comes down to the mathematical shape of the probability distribution of our ignorance. Or, as Dirty Harry said more succinctly, “Do you feel lucky?”
A comment from Sashka:
“Do you feel lucky?” is a false dilemma. We have a continuum of options along the mitigation-adaptation axis. There is a geoengineering option. So, the short answer is “I feel smart.” The idea to describe our differences in terms of risk preferences is not without merit. Except in this case it’s very hard to define the risk.
A comment from Gilligan:
Something I would like to emphasize in the argument between Bart V. and myself on one side vs. Sashka and kdk33 on the other: There is no right or wrong here. We’re arguing more about our comfort or discomfort with uncertain risks than about science. Bart and I are more precautionary and tend to put more credence in fat tails; Sashka and kdk are more comfortable with uncertainty and more dismissive of the fat tail hypothesis. The shape of the probability distribution for extreme catastrophes is not something that can be empirically verified with any great precision, so there is room for reasonable people to disagree both on the shape of the curve that represents our ignorance and on the proper policy response to it.
Nullius in Verba says:
The Precautionary Principle in the absence of quantified risks is equivalent to Pascal’s Wager. It means that the decision is determined entirely by the scariness of the hypotheses being offered rather than the strength of the evidence. Usually a false dilemma is being offered – two scenarios, one scary, one not, when there are many more scenarios possible (and more likely).
A better approach to uncertain risks is to develop more flexible resources ready to jump the right way when more information becomes available. Be an adaptable generalist. Creating economic prosperity for the poor would therefore seem to be the priority, as it is applicable to many different problems and scenarios, rather than only one.
The analogy is fairly straightforward. The Precautionary Principle as commonly applied to climate change says that even if you’re not fully convinced that it will definitely happen, if you accept that it might happen, the costs are so high (e.g. Ted Turner’s cannibal scenario) that it’s still the only rational choice to act to prevent it. Pascal’s Wager applied to the Christian afterlife mythology says that even if you’re not fully convinced that it will definitely happen, the costs (eternal torment versus eternal bliss) are so high that the only rational choice is to believe. The distinctive features of the argument are that it offers only two alternatives with the putative costs embedded in the hypothesis, and the conclusion arises from the hypothesised costs alone, not the evidence.
Deconstructing the conclusion about Weitzman’s argument
So, back to the original statement:
“So when you take uncertainty into account, it actually leads to the decision that we should take action more quickly.”
Let’s deconstruct the argument behind this statement to try to understand the apparent paradox.
Argument A1 (precautionary principle, PDF):
- Under the precautionary principle, a minimum threshold of plausibility or certainty is required before acting.
- If the assumed PDF is something like a bell-shaped normal distribution, possibly skewed but with a thin tail, the probability of the possible catastrophe lies below the minimum threshold that would trigger action.
- A broader PDF with a fatter tail, motivated by greater uncertainty, would imbue the possible catastrophe with a greater probability of occurrence, above the minimum threshold that triggers action.
- Thus: increased uncertainty provides a greater probability of occurrence of the catastrophe, strengthening the case for action under the precautionary principle.
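The mechanism in this first argument can be illustrated with a toy calculation (all numbers here are hypothetical, chosen only to show the effect): broadening the assumed PDF, with the central estimate held fixed, pushes the probability of exceeding a catastrophe threshold above the precautionary trigger.

```python
import math

def p_exceed_normal(threshold, mean, sd):
    """P(X > threshold) for X ~ Normal(mean, sd), via the complementary error function."""
    z = (threshold - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

CATASTROPHE_C = 6.0  # hypothetical warming (deg C) deemed catastrophic
TRIGGER_P = 0.01     # hypothetical precautionary trigger: act if P(catastrophe) > 1%

# Same central estimate of warming, different uncertainty (spread of the PDF).
narrow = p_exceed_normal(CATASTROPHE_C, mean=3.0, sd=1.0)  # low uncertainty
wide = p_exceed_normal(CATASTROPHE_C, mean=3.0, sd=2.0)    # high uncertainty

print(f"narrow PDF: P(catastrophe) = {narrow:.4f}, act: {narrow > TRIGGER_P}")
print(f"wide PDF:   P(catastrophe) = {wide:.4f}, act: {wide > TRIGGER_P}")
```

With the narrow PDF the exceedance probability sits below the trigger and no action follows; widening the PDF (more uncertainty) lifts it above the trigger, which is exactly the “uncertainty strengthens the case for action” conclusion.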
Argument A2 (robust decision making, possibility distribution):
- Under a robust decision making framework, the plausible worst-case scenario (catastrophe) is included in the decision process without being allowed to completely dominate it.
- The catastrophe scenario is one scenario in a possibility distribution of scenarios.
- The weight of the catastrophic scenario in the decision making framework increases with the likelihood of occurrence of the catastrophic scenario.
- Thus: Decreased uncertainty regarding the likelihood of the catastrophic scenario increases its weight in the decision making process.
Starting from two different decision frameworks (precautionary principle versus robust decision making) and different assumptions about the nature of the uncertainty (PDF versus possibility distribution) results in opposite conclusions as to whether uncertainty weakens or strengthens the case for action. Argument A1 finds that Krugman’s statement is true if you accept the premise that future climate change can be represented by a PDF. Using a possibility distribution rather than a PDF (Argument A2) leads to a more reasonable conclusion regarding the role of uncertainty in the decision making process.
Depending on your decision analytic framework and what you assume about ignorance and uncertainty, you can come to either conclusion: uncertainty increases the need to act, or uncertainty decreases the need to act. My argument is that the level of uncertainty is too great for a PDF of climate outcomes to be useful in the context of the precautionary principle.
I would very much appreciate your take on the arguments presented at the end of this post; I am considering including this in the revisions to my uncertainty monster paper.
Moderation note: this is a technical thread, comments will be moderated for relevance. To discuss specific scientific uncertainties like cloud feedback or whatever, do so on other technical threads. To discuss the politics associated with climate decision making, go to another thread.