by Judith Curry
The word “doubt” has a bad connotation in the climate debate owing to the merchants of doubt meme. Richard Feynman puts the word “doubt” into the appropriate perspective in the context of science:
When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty damn sure of what the result is going to be, he is still in some doubt. We have found it of paramount importance that in order to progress, we must recognize our ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty — some most unsure, some nearly sure, but none absolutely certain.
I became outspoken on the subject of uncertainty following the release of the CRU emails. I was hardly the first “mainstream” climate scientist to emphasize uncertainty. Steve Schneider has been generally regarded as the IPCC’s “uncertainty cop.” However, Steve Schneider’s vision of climate uncertainty was that of statistical uncertainty that could be managed in a subjective Bayesian approach using the judgment of experts.
My vision of climate uncertainty includes greater levels of uncertainty, including scenario uncertainty and ignorance. In the broader context of science and the philosophy of science, my view of uncertainty is not unusual at all. However, climate scientists have clung to Steve Schneider’s vision in an ever-tightening spiral of reducing uncertainty and increasing confidence levels. I can only hope that my writings on this subject are influencing climate scientists. In any event, it seems that the science journalists are paying attention to the issue of uncertainty.
A twist on climate change uncertainty and risk
I have a slightly different perspective [from Bill McKibben]. What we have here is not a failure to communicate and accept the obvious effects of climate change. Instead, it’s a failure to communicate and accept a critical point of how science works, without which scientific literacy is reduced to mere talking points. This is about nuance and uncertainty, and if the American public doesn’t get those things, then we’ll never get climate change.
When scientists study climate they aren’t really studying just one thing. Climate is a complex system, involving multiple natural subsystems and many variables—both “natural” and man-made—that can alter the way those systems work. This is such a complicated subject that we really only developed the computer processing power necessary to start making any sense of it in the mid 1970s. What scientists have learned since then is vitally important stuff. The Earth, as a whole, is warming as humans pump more and more greenhouse gases into the atmosphere. And those rising global temperatures, and rising carbon dioxide concentrations, will affect our lives in a variety of strange, and often surprising, ways. This is the science that should be influencing the way we plan for the future. But it’s not. Not really. And I think the reason why has a lot to do with how science is taught to the vast majority of Americans, the people whose science education really ends along with the end of high school.
In this country, we teach kids that science is a collection of hard facts. We teach them that scientists come up with a hypothesis—an idea that might explain some aspect of how the world works. Scientists then test their hypotheses and find out whether they’re correct or not. If they’re correct, then they become something that children must memorize. That story is true. But it’s also vastly oversimplified. It gives people the impression that every scientific question can be answered with “yes” or “no.” And if it can’t, then the real answer is probably “no.”
That perspective might work okay when you’re sitting in a high school science lab, studying the digestive system of a fetal pig. But it doesn’t work as well in the real world. And it leaves people completely unprepared to understand something like climate change, and how we assess the risks associated with it.
That’s because all risk—and especially the risks associated with complex systems like climate—comes with uncertainty. To a person whose knowledge of science comes from that simplified story we tell school kids, “uncertainty” sounds like saying you’re wrong without having to say that you’re wrong. But that’s not the case. Instead, “uncertainty” is about complexity and randomness, it’s about probability, and it’s about how you attribute the cause of one effect that is really likely to have multiple causes.
This is scientific uncertainty—where the things we know and the things we don’t know collide, and we are left to figure out how to use what we have to make decisions anyway. That process is so confusing that researchers like Gerd Gigerenzer, director of the Max Planck Institute for Human Development in Berlin, actually make their careers studying the psychology behind it. Gigerenzer will be speaking as part of the World Science Festival panel on The Illusion of Certainty: Risk, Probability, and Chance. It’s an important panel. One that gets to the heart of what scientific literacy is all about.
If we want people to understand science, we can’t just give them facts to memorize. Scientific literacy isn’t about being able to win a game of quiz bowl. It’s about understanding how science works, and how science can be used to guide human decision-making. It’s about knowing that we don’t have all the answers. But it’s also about knowing that “we don’t have all the answers” isn’t the same thing as “we don’t know anything.” If we pump people full of facts, but don’t teach them about uncertainty, then we can’t be surprised when they dismiss anything that isn’t 100% certain.
The future of human life depends on how we respond to the risks of climate change. How we respond to those risks depends on how well the general public understands the messy world of real science.
On not being certain about uncertainty
Chris Mooney at the Discover magazine blog has written an article with the title “On not being certain about uncertainty: why you can’t downplay global warming.” Some excerpts:
The scenarios with the most catastrophic outcomes of global warming are low probability outcomes — a fact that explains why the world’s governments in practice treat reducing CO2 emissions as a low priority, despite paying lip service to it….
In one sense, this is obviously true. In another sense, it’s completely off base.
First, uncertainty cuts both ways, so it makes no sense to be confident that change will be on the low end. This is something about which Kerry Emanuel recently testified:
In soliciting advice, we should be highly skeptical of any expert who claims to be certain of the outcome. I include especially those scientists who express great confidence that the outcome will be benign; the evidence before us simply does not warrant such confidence.
But more generally, you can really only make Lind’s argument if 1) you’re paying enough attention to global warming to understand that there’s a real scientific consensus that it’s happening, but 2) you’re not paying enough attention to realize what global warming really means for Planet Earth.
It’s really that simple. We don’t know the timeline, but if we don’t stop it, we know the eventual outcome–and it is intolerable and unacceptable on any timeline. And that’s why any attempt to minimize worry about global warming by citing “uncertainty” about the projections just doesn’t make sense.
This conclusion brings to mind this quote from Through the Looking Glass:
“If it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic!”
Why ethics requires acknowledging links between tornadoes and climate change in spite of uncertainties
Now for something really “different”, check out this essay by Professor Donald Brown of Penn State University. His punch line:
Because scientists are expected to produce scientific knowledge that can be applied to public policy questions, they must be able to describe threats that are not fully proven. From the standpoint of public policy, therefore, scientists should not deny that climate change creates risks of increased damage from tornadoes. A claim that there is no link between climate change and tornadoes is misleading. If someone is concerned about whether to adopt policies reducing the threat of climate change they need to know whether climate change creates risks of damage from tornadoes even if there are open questions about what happens to tornado frequency and intensity in a warming world.
In other words, when science is applied to public policy where there is a reasonable basis to conclude that some human activity is dangerous, science has an important role in communicating any scientifically plausible dangerous risks, not just proven facts.
As long as anyone is asking whether there is a link between climate change and tornado damage because they want to know whether there is reason to limit greenhouse gas emissions, it is therefore ethically problematic to say there is no link.
However, it is also ethically required to acknowledge that increased tornado damage and frequency are not yet proven. When talking about these risks it is important to acknowledge that there is also a scientific basis for doubt about increased tornado intensity and frequency in a warming world. However, if this is said, it is also ethically important to acknowledge that increased damage from other kinds of storms is virtually certain as the planet warms. Furthermore, it is ethically important to acknowledge that tornadoes will appear in places where they would not likely occur in the absence of global warming, even if tornado frequency and intensity decrease, because a changing climate is already affecting tornado propagation.
I’m speechless; I don’t even know what to say about this one.
Ignorance and uncertainty
Yes, there is a blog that I just spotted with the name ‘ignorance and uncertainty’ (I’m adding it to the blog roll; this is a superb blog). The current post is entitled “Communicating about uncertainty in climate change, Part I.” The post is written by Robert Smithson (I can’t figure out who he is by googling; the name is too common and there is no “about” info on the blog). Some excerpts:
I’ll focus on the issues around probability expressions in a subsequent post, but in this one I want to address the issue of communicating “uncertainty” in a broader sense.
Why does it matter? First, the public needs to know that climate change science actually has uncertainties. Otherwise, they could be misled into believing either that scientists have all the answers or that they suffer from unwarranted dogmatism. Likewise, policy makers, decision makers and planners need to know the magnitudes (where possible) and directions of these uncertainties. Thus, the IPCC is to be commended for bringing uncertainties to the fore in its 2007 report, and for attempting to establish standards for communicating them.
Second, the public needs to know what kinds of uncertainties are in the mix. This concern sits at the foundation of the first and second recommendations of the Budescu paper. Their first suggestion is to differentiate between the ambiguous or vague description of an event and the likelihood of its occurrence. The example the authors give is “It is very unlikely that the meridional overturning circulation will undergo a large abrupt transition during the 21st century” (emphasis added). The first italicized phrase expresses probabilistic uncertainty whereas the second embodies a vague description. People may have different interpretations of both phrases. They might disagree on what range of probabilities is referred to by “very unlikely” or on what is meant by a “large abrupt” change. Somewhat more worryingly, they might agree on how likely the “large abrupt” change is while failing to realize that they have different interpretations of that change in mind.
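For reference, the AR4 guidance attaches numeric probability ranges to its likelihood terms. The sketch below is an editorial aside, not part of Smithson’s post; the ranges follow the AR4 uncertainty guidance, and the point of the sketch is that several verbal labels can legitimately apply to the same probability:

```python
# IPCC AR4 likelihood terms and their assessed probability ranges
# (per the AR4 uncertainty guidance; the code itself is illustrative).
LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "exceptionally unlikely": (0.00, 0.01),
}

def consistent_terms(p):
    """Return every likelihood term whose range contains probability p."""
    return [term for term, (lo, hi) in LIKELIHOOD_SCALE.items() if lo <= p <= hi]
```

A 95% probability is both “very likely” and “likely,” so two honest authors can label the same assessment differently, which is exactly the interpretive slack the Budescu paper worries about.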
The crucial point here is that probability and vagueness are distinct kinds of uncertainty (see, e.g., Smithson, 1989). While the IPCC 2007 report is consistently explicit regarding probabilistic expressions, it only intermittently attends to matters of vagueness. For example, in the statement “It is likely that heat waves have become more frequent over most land areas” (IPCC 2007, pg. 30) the term “heat waves” remains undefined and the time-span is unspecified. In contrast, just below that statement is this one: “It is likely that the incidence of extreme high sea level [3] has increased at a broad range of sites worldwide since 1975.” Footnote 3 then goes on to clarify “extreme high sea level” by the following: “Excluding tsunamis, which are not due to climate change. Extreme high sea level depends on average sea level and on regional weather systems. It is defined here as the highest 1% of hourly values of observed sea level at a station for a given reference period.”
The Budescu paper’s second recommendation is to specify the sources of uncertainty, such as whether these arise from disagreement among specialists, absence of data, or imprecise data. Distinguishing between uncertainty arising from disagreement and uncertainty arising from an imprecise but consensual assessment is especially important. In my experience, the former often is presented as if it is the latter. An interval for near-term ocean level increases of 0.2 to 0.8 metres might be the consensus among experts, but it could also represent two opposing camps, one estimating 0.2 metres and the other 0.8.
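Smithson’s point here is easy to demonstrate with a toy example (the numbers below are hypothetical, not from the Budescu paper): two panels of experts can report the identical 0.2 to 0.8 metre interval even though one spread is consensual and the other hides two opposing camps.

```python
# Hypothetical expert estimates of near-term sea level rise (metres).
consensual = [0.2, 0.35, 0.5, 0.65, 0.8]  # imprecise but agreeing spread
two_camps  = [0.2, 0.2, 0.2, 0.8, 0.8]    # two opposing camps

def reported_interval(estimates):
    """The min-max range that would reach policy makers."""
    return (min(estimates), max(estimates))

def spread_shape(estimates):
    """Crude diagnostic: do any estimates fall in the middle third of the range?"""
    lo, hi = reported_interval(estimates)
    width = hi - lo
    middle = [e for e in estimates if lo + width / 3 < e < hi - width / 3]
    return "consensual" if middle else "polarized"
```

Both panels report the interval (0.2, 0.8); only a diagnostic that looks past the endpoints distinguishes imprecise agreement from outright disagreement, which is why reporting the interval alone conceals the kind of uncertainty involved.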
The IPCC guidelines for other kinds of expert assessments do not explicitly refer to disagreement: “Where uncertainty is assessed more quantitatively using expert judgement of the correctness of underlying data, models or analyses, then the following scale of confidence levels is used to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.”
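The quoted confidence scale translates directly into numeric anchors. A minimal lookup sketch follows; note that the nearest-anchor rounding is an editorial assumption, since the guidance gives point anchors rather than interval boundaries:

```python
# Assessed chance of a finding being correct, per the quoted IPCC scale.
CONFIDENCE_ANCHORS = {
    "very high confidence": 0.9,  # at least 9 out of 10
    "high confidence":      0.8,  # about 8 out of 10
    "medium confidence":    0.5,  # about 5 out of 10
    "low confidence":       0.2,  # about 2 out of 10
    "very low confidence":  0.1,  # less than 1 out of 10 (0.1 used as anchor)
}

def confidence_label(chance):
    """Map a numeric chance to the nearest IPCC confidence label.

    Nearest-anchor mapping is an assumption; the guidance does not say
    how to treat chances that fall between the anchors.
    """
    return min(CONFIDENCE_ANCHORS,
               key=lambda label: abs(CONFIDENCE_ANCHORS[label] - chance))
```

Note that this scale quantifies the chance a finding is *correct*, not disagreement among the experts making the assessment, which is precisely the gap Smithson identifies.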
There are understandable motives for concealing or disguising some kinds of uncertainty, especially those that could be used by opponents to bolster their own positions. Chief among these is uncertainty arising from conflict. In a series of experiments Smithson (1999) demonstrated that people regard precise but disagreeing risk messages as more troubling than informatively equivalent imprecise but agreeing messages. Moreover, they regard the message sources as less credible and less trustworthy in the first case than in the second. In short, conflict is a worse kind of uncertainty than ambiguity or vagueness. Smithson (1999) labeled this phenomenon “conflict aversion.” Cabantous (2007) confirmed and extended those results by demonstrating that insurers would charge a higher premium for insurance against mishaps whose risk information was conflictive than if the risk information was merely ambiguous.
Conflict aversion creates a genuine communications dilemma for disagreeing experts. On the one hand, public revelation of their disagreement can result in a loss of credibility or trust in experts on all sides of the dispute. Laypeople have an intuitive heuristic that if the evidence for any hypothesis is uncertain, then equally able experts should have considered the same evidence and agreed that the truth-status of that hypothesis is uncertain. . . On the other hand, concealing disagreements runs the risk of future public disclosure and an even greater erosion of trust (lying experts are regarded as worse than disagreeing ones). The problem of how to communicate uncertainties arising from disagreement and vagueness simultaneously and distinguishably has yet to be solved.