by Judith Curry
In addition to traditional fallacies such as ad hominem, discussions of risk contain logical and argumentative fallacies that are specific to the subject-matter. Ten such fallacies are identified, that can commonly be found in public debates on risk. They are named as follows: the sheer size fallacy, the converse sheer size fallacy, the fallacy of naturalness, the ostrich’s fallacy, the proof-seeking fallacy, the delay fallacy, the technocratic fallacy, the consensus fallacy, the fallacy of pricing, and the infallibility fallacy. – Sven Ove Hansson
Ever since Aristotle, fallacies have had a central role in the study and teaching of logical thinking and sound reasoning. It is not difficult to find examples of traditional fallacies such as ad hominem in any major modern discussion of a controversial issue. Discussions on risk are no exception. In addition, the subject-matter of risk seems to invite fallacies of a more specific kind. The purpose of this short essay is to discuss ten logical and argumentative fallacies that can be found in public debates on risk.
1. The sheer size fallacy
X is accepted.
Y is a smaller risk than X.
________
Y should be accepted.
This is one of the commonest fallacies in the lore of risk. “You will have to accept this chemical exposure since the risk it gives rise to is smaller than the risk of being struck by lightning.” Or: “You must accept this technology, since its risks are smaller than the risk of a meteorite falling on your head.”
The problem with these arguments is, of course, that we do not have a choice between the defended technology and the atmospheric phenomena referred to. Comparisons between risks can only be directly decision-guiding if they refer to objects that are alternatives in one and the same decision. When deciding whether or not to accept a certain pesticide, we need to compare it to other pesticides (or non-pesticide solutions) that can replace it.
Life can never be free of risks. We are forced by circumstances to live with some rather large risks, and we have also chosen to live with other, fairly large risks – typically because of the high value we assign to their associated benefits.
Strictly speaking, it is on most occasions wrong to speak of acceptance of a risk per se. Instead, the accepted object is a package or social alternative that contains the risk, its associated benefits, and possibly other factors that may influence a decision.
JC comment: This fallacy seems to be at the heart of the precautionary principle when applied to a complex problem.
5. The proof-seeking fallacy
There is no scientific proof that X is dangerous.
________
No action should be taken against X.
Science has fairly strict standards of proof. When determining whether or not a scientific hypothesis should be accepted for the time being, the onus of proof falls squarely on its adherents. Similarly, those who claim the existence of an as yet unproven phenomenon have the burden of proof.
In many risk-related issues, standards and burdens of proof have to be different from those used for intrascientific purposes. Consider a case when there are fairly strong indications that a chemical substance may be highly toxic, although the evidence is not (yet) sufficient from a scientific point of view. It would not be wise to continue unprotected exposure to the substance until full scientific proof has been obtained. According to the precautionary principle, we must be prepared to take action in the absence of full scientific proof.
We can borrow terminology from statistics, and distinguish between two types of errors in scientific practice. The first of these consists in concluding that there is a phenomenon or an effect when there is in fact none (type I error, false positive). The second consists in missing an existing phenomenon or effect (type II error, false negative). In science, errors of type I are in general regarded as much more problematic than those of type II. In risk management, type II errors – such as believing a highly toxic substance to be harmless – are often the more serious ones. This is the reason why we must be prepared to accept more type I errors in order to avoid type II errors, i.e. to act in the absence of full proof of harmfulness.
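To make this trade-off concrete, here is a minimal, purely illustrative Python sketch (not from Hansson’s paper; the overlapping distributions, the assumed prevalence of truly harmful substances, and the two thresholds are invented assumptions). It shows how a strict “scientific proof” threshold for acting against a substance yields few type I errors but many type II errors, while a more precautionary threshold reverses the balance.

```python
# Illustrative sketch only: how the evidence threshold for acting against a
# substance trades type I errors (false positives) against type II errors
# (false negatives). All numbers are invented for illustration.
import random

random.seed(0)

def observed_effect(truly_harmful):
    # Noisy measurement: harmful substances tend to show a larger effect,
    # but the distributions overlap, so some errors are unavoidable.
    mean = 1.0 if truly_harmful else 0.0
    return random.gauss(mean, 1.0)

def error_rates(threshold, n=10_000):
    type1 = type2 = 0
    for _ in range(n):
        truly_harmful = random.random() < 0.5   # assumed 50% prevalence
        act = observed_effect(truly_harmful) > threshold
        if act and not truly_harmful:
            type1 += 1          # acted against a harmless substance
        if not act and truly_harmful:
            type2 += 1          # failed to act against a harmful one
    return type1 / n, type2 / n

for label, threshold in [("strict 'scientific proof' standard", 2.0),
                         ("precautionary standard", 0.5)]:
    fp, fn = error_rates(threshold)
    print(f"{label}: type I (false positive) ~ {fp:.2%}, "
          f"type II (false negative) ~ {fn:.2%}")
```

Under these assumptions, the strict threshold produces very few false positives but misses a large share of genuinely harmful substances, whereas the precautionary threshold accepts more false positives in exchange for far fewer missed harms.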
JC comment: The burden of proof and type I, II errors in context of climate change were discussed on the thread Climate Null(?) Hypothesis.
6. The delay fallacy
If we wait we will know more about X.
________
No decision about X should be made now.
In many if not most decisions about risk we lack some of the information that we would like to have. A common reaction to this predicament is to postpone the decision. It does not take much reflection to realize the problematic nature of this reaction. In the period when nothing is done, the problem may get worse. Therefore, it may very well be better to make an early decision on fairly incomplete information than to make a more well-informed decision at a later stage.
It must also be observed that in some cases, scientific uncertainty is recalcitrant and not resolvable through research, at least not in the short or medium run. Many of the technical issues involved in assessing risks are not properly speaking scientific but “transscientific”, i.e. they are “questions which can be asked of science and yet which cannot be answered by science”. A further complication is that new scientific information often gives rise to new scientific uncertainty. New results may cast doubt on previous standpoints, and they may also create new uncertainty by revealing mechanisms or phenomena that were previously unknown.
The search for new knowledge never ends, and there is almost no end to the amount of information that one may wish to have in a risk-related decision. Since the premise of the delay argument (“If we wait we will know more about X”) is true at all stages of a decision process, this argument can almost always be used to prevent risk-reducing actions. Therefore, from the viewpoint of risk reduction, the delay fallacy is one of the most dangerous fallacies of risk.
JC comments: Joe Romm is big on this one, commonly chastising the ‘delayers.’
7. The technocratic fallacy
It is a scientific issue how dangerous X is.
________
Scientists should decide whether or not X is acceptable.
It should be a trivial insight, but it needs to be repeated again and again: Competence to determine the nature and the magnitude of risks is not competence in deciding whether or not risks should be accepted. Decisions on risk must be based both on scientific information and on value judgments that cannot be derived from science.
There seems to be a fairly general tendency to describe issues of risk as “more scientific” than they really are. Wendy Wagner (1995) concluded from her study of the EPA and its external relations that there is a massive “science charade” going on: policy decisions are camouflaged as science. Instead of discussing, on the policy level, how to handle scientific uncertainty, authorities, industry, and environmentalists send forward scientists to argue about technical details that are not the real issue. In this way, the entire decision-making procedure is mischaracterized as much more “scientific” than it can actually be. This mischaracterization, Wagner says, can create obstacles to democratic participation in the decision-making process.
JC comment: This one characterizes the public debate on climate change in spades, where it has been overly scientized.
8. The consensus fallacy
We must ask the experts about X.
________
We must ask the experts for a consensus opinion about X.
The conventional approach to science advising is to search for consensus so far as this is at all possible. Scientific expert committees have a strong tendency to opt for compromises whenever possible. Even if the initial differences of opinion are substantial, discussions are continued until consensus has been reached. It is extremely unusual for minority opinions to be published. Instead, committees of scientists end up with a unanimous – although sometimes watered down – opinion on issues that are controversial in the scientific community.
The search for consensus has many virtues, but in advisory expert committees it has the unfortunate effect of underplaying uncertainties and hiding away alternative scenarios that may otherwise have come up as minority opinions. If there is uncertainty in the interpretation of scientific data, then this uncertainty can often be reflected in a useful way in minority opinions. Therefore, it is wrong to believe that the report of a scientific or technical advisory committee is necessarily more useful if it is a consensus report.
There are also other ways in which scientific uncertainty can be reported. Scientists can (perhaps unanimously, perhaps not) describe scientific uncertainties in ways that are accessible to decision-makers. The Intergovernmental Panel on Climate Change (IPCC) does this in an interesting way: it systematically distinguishes between “what we know with certainty; what we are able to calculate with confidence and what the certainty of such analyses might be; what we predict with current models; and what our judgement will be, based on available data.” (Bolin 1993)
JC comment: Bert Bolin was the ‘good guy’ in setting up the IPCC, but his ideas lost out to John Houghton’s push for consensus. Regarding the problems with scientific consensus-seeking in the context of the climate change debate, see my paper No Consensus on Consensus.
9. The fallacy of pricing
We have to weigh the risks of X against its benefits.
________
We must put a price on the risks of X.
There are many things that we cannot easily value in terms of money. I do not know which I prefer, $8000 or that my child gets a better mark in math.
Similar situations often arise in issues of risk, including those that involve the loss of human lives. We cannot pay unlimited amounts of money to save a life. The sums that we are prepared to pay in a specific situation will depend on the particular circumstances. Again, general-purpose prices are not useful as decision-guides.
To the contrary, such pricing will tend to hide away the fact that these are decisions under conditions that have all the characteristics of moral dilemmas. Our competence as decision-makers is increased if we recognize a moral dilemma when we have one, rather than misrepresenting it as an easily resolvable decision problem.
JC comment: This issue is definitely there in the climate debate, whereby the greens bemoan that the issue of climate change has been taken over by the economists.
10. The infallibility fallacy
Experts and the public do not have the same attitude to X.
________
The public is wrong about X.
Much of the public opposition to risks has been directed at the risks inherent in complex technological systems, which can only be assessed through the combination of knowledge from several disciplines. From the viewpoint of experts, this is often seen primarily as a matter of information or communication: the public has to be informed. However, this is only part of the problem. There is also another aspect that is often neglected: The experts may be wrong.
When there is a wide divergence between the views of experts and those of the public, this is certainly a sign of failure in the social system for division of intellectual labour. However, it does not necessarily follow that this failure is located within the minds of the non-experts who distrust the experts. It cannot be a criterion of rationality that one takes experts for infallible.
JC comment: This fallacy is at the heart of the public debate on climate change, and it is what puts the pause front and center in that debate.
Avoiding the fallacies
A fallacy-free public discussion of a contested social issue is probably an idle dream. Therefore, the task of exposing fallacious reasoning is much like garbage collecting: neither task can ever be completed, since new material to be treated arrives all the time. However, in neither case does the perpetuity of the task make it less urgent. In order to improve the intellectual quality of public discussions on risks it is essential that more academics take part in these discussions, acting as independent intellectuals whose mission is not to advocate a standpoint but to promote science and sound reasoning.
To complicate all this, each of these fallacies also has a converse, i.e. they cut both ways. In any event, these fallacies illustrate the complexity of reasoning about risk.
In trying to apply these fallacies to the public debate on climate change, particularly in the context of a problem and solution space that is irreducibly global, it is not at all straightforward to assess either the ‘urgent action needed’ or the ‘do nothing’ policy as the preferred mode of action.
The opponents of ‘urgent action needed’ might accuse the proponents of the sheer size fallacy, the technocratic fallacy, the consensus fallacy, and the infallibility fallacy.
The proponents of ‘urgent action needed’ might accuse their opponents of the proof-seeking fallacy, the delay fallacy, the fallacy of pricing.
The climate change policy problem arguably illustrates 7 of the 10 fallacies (I didn’t mention the converse sheer size fallacy, the fallacy of naturalness, the ostrich’s fallacy). The sheer number of risk fallacies floating around this problem arguably reflects the wickedness of the climate change risk problem.
Avoiding logical fallacies is relatively straightforward. Finding a balance among the consensus fallacy, the infallibility fallacy, and the proof-seeking fallacy is at the heart of the problem for climate scientists, in terms of portraying the science appropriately. It is unfortunate that John Houghton won the day (over Bert Bolin) in the early years of the IPCC; this entire debate might have taken a very different trajectory if Bert Bolin’s strategy had been adopted.