by Judith Curry
Climate science is sometimes characterized by skeptics as pseudoscience. Here are the arguments for why climate science is not pseudoscience.
From Wikipedia:
Pseudoscience is a claim, belief, or practice which is presented as scientific, but does not adhere to a valid scientific method, lacks supporting evidence or plausibility, cannot be reliably tested, or otherwise lacks scientific status. Pseudoscience is often characterized by the use of vague, exaggerated or unprovable claims, an over-reliance on confirmation rather than rigorous attempts at refutation, a lack of openness to evaluation by other experts, and a general absence of systematic processes to rationally develop theories.
Gary Taubes on pseudoscience
Gary Taubes has a very interesting post entitled Science, Pseudoscience, Nutritional Epidemiology and Meat. Excerpts:
Back in 2007 when I first published Good Calories, Bad Calories I also wrote a cover story in the New York Times Magazine on the problems with observational epidemiology. The article was called “Do We Really Know What Makes Us Healthy?” and I made the argument that even the better epidemiologists in the world consider this stuff closer to a pseudoscience than a real science. I used as a case study the researchers from the Harvard School of Public Health, led by Walter Willett, who runs the Nurses’ Health Study. In doing so, I wanted to point out one of the main reasons why nutritionists and public health authorities have gone off the rails in their advice about what constitutes a healthy diet. The article itself pointed out that every time in the past that these researchers had claimed that an association observed in their observational trials was a causal relationship, and that causal relationship had then been tested in experiment, the experiment had failed to confirm the causal interpretation — i.e., the folks from Harvard got it wrong. Not most times, but every time. No exception. Their batting average circa 2007, at least, was .000.
Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis — force x causes observation y — and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis was right. Peter Medawar, the Nobel Laureate immunologist, described this proving-it’s-wrong step as “the critical or rectifying episode in scientific reasoning.” Here’s Karl Popper saying the same thing: “The method of science is the method of bold conjectures and ingenious and severe attempts to refute them.” The bold conjectures, the hypotheses, making the observations that lead to your conjectures… that’s the easy part. The critical or rectifying episode, which is to say, the ingenious and severe attempts to refute your conjectures, is the hard part. Anyone can make a bold conjecture. (Here’s one: space aliens cause heart disease.) Making the observations and crafting them into a hypothesis is easy. Testing them ingeniously and severely to see if they’re right is the rest of the job — say 99 percent of the job of doing science, of being a scientist.
Well, because this is supposed to be a science, we ask the question whether we can imagine other less newsworthy explanations for the association we’ve observed. What else might cause it? An association by itself contains no causal information. There are an infinite number of associations that are not causally related for every association that is, so the fact of the association itself doesn’t tell us much.
The answer ultimately is that we do experiments. Before we get around to doing the experiments, we must rack our brains to figure out if there are other causal explanations for this association besides the meat-eating one. Another way to think of this is that we’re looking for all the myriad possible ways our methodology and equipment might have fooled us. The first principle of good science, as Richard Feynman liked to say, is that you must not fool yourself because you’re the easiest person to fool. And so before we go public and commit ourselves to believing this association is meaningful and causal, let’s think of all the ways we might be fooled. Once we’ve thought up every possible, reasonable alternative hypothesis (space aliens are out on this account), we can then go about testing them to see which ones survive the tests: our preferred hypothesis (meat-eating causes disease, in this case) or one of the many others we’ve considered.
This is why the best epidemiologists — the ones I quote in the NYT Magazine article — think this nutritional epidemiology business is a pseudoscience at best. Observational studies like the Nurses’ Health Study can come up with the right hypothesis of causality about as often as a stopped clock gives you the right time. It’s bound to happen on occasion, but there’s no way to tell when that is without doing experiments to test all your competing hypotheses. And what makes this all so frustrating is that the Harvard people don’t see the need to look for alternative explanations of the data — for all the possible confounders — and to test them rigorously, which means they don’t actually see the need to do real science.
JC comment: I think the nutritional epidemiology example is a good one: correlation analysis, particularly in the presence of multiple confounding factors and in the absence of physiological mechanisms, arguably qualifies for the pseudoscience mantle, as suggested by Taubes.
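The confounding problem Taubes describes can be made concrete with a small simulation. The sketch below is illustrative only, with made-up numbers: a hypothetical lurking variable (say, "health consciousness") drives both the exposure (meat eating) and the outcome (disease risk), producing a strong raw correlation even though the exposure has no causal effect at all. Conditioning on the confounder via a partial correlation makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder (illustrative assumption, not real data):
# it lowers both meat consumption and disease risk.
z = rng.normal(size=n)
exposure = -0.8 * z + rng.normal(size=n)  # health-conscious people eat less meat
outcome = -0.8 * z + rng.normal(size=n)   # ...and also get sick less often

# Raw association: exposure and outcome look clearly correlated...
raw_corr = np.corrcoef(exposure, outcome)[0, 1]

def residuals(y, x):
    """Residuals of y after regressing out x (simple least squares)."""
    slope = np.dot(x, y) / np.dot(x, x)
    return y - slope * x

# ...but the partial correlation, controlling for the confounder,
# is essentially zero: there was never a direct causal link.
partial_corr = np.corrcoef(residuals(exposure, z),
                           residuals(outcome, z))[0, 1]

print(f"raw correlation:     {raw_corr:.2f}")      # substantial
print(f"partial correlation: {partial_corr:.2f}")  # near zero
```

The catch, as the excerpt notes, is that in a real observational study the confounders are not handed to you by construction; you only get to adjust for the ones you have thought of and measured, which is why experiments remain the decisive test.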
Climate science vs. pseudoscience
So, how should we evaluate the science of climate change in this context?
Any theory of climate change that relies solely on correlation (e.g. with celestial motions) without positing a physical mechanism may qualify for the pseudoscience mantle if it overinterprets the significance of the results. The correlation itself may still be scientifically interesting if it leads to hypotheses about actual physical mechanisms that can account for it.
In the case of mainstream climate science, the physical mechanism for climate change is clearly posited as arising from external forcing: solar, volcanoes, anthropogenic greenhouse gases and aerosols. However, climate scientists have not racked their brains anywhere near hard enough to come up with other causal explanations. The main causal explanation that has been neglected is internal natural variability of the coupled ocean/atmosphere system.
When people say that the hockey stick and paleoclimate analysis of the last 1000 years isn’t an important part of the climate change argument, well it should be. We have been seduced by the relatively flat blade of the hockey stick into thinking that natural internal variability isn’t important. With improved proxies and analysis methods, we may find out that natural internal variability is significantly larger than is indicated by the Mann et al. reconstructions.
Experiments to test our theories and hypotheses are conducted by the Earth itself. For each day and for each annual cycle, we demonstrate that our understanding and climate models are generally correct in simulating the warming and cooling (although getting the diurnal and annual cycle of precipitation correct is much more difficult). Volcanic eruptions serve as another useful experiment; climate models do simulate the expected cooling, but the simulated cooling is often too large for a short duration whereas the observed cooling is shallower and spread over a longer period. Changes in greenhouse gases provide another experiment.
The concern that I have is that insufficient attention is given to developing climate models and conducting climate model experiments to explore the other hypotheses, particularly solar variability and natural internal variability.
However, this concern only implies that climate change science is far from complete in terms of being able to understand and predict climate change on decadal to century time scales. It does not imply in any way that climate science is pseudoscience.