
Error cascade

by Judith Curry

So…how do you tell when a research field is in the grip of an error cascade? The most general indicator I know is consilience failures. Eventually, one of the factoids generated by an error cascade is going to collide with a well-established piece of evidence from another research field that is not subject to the same groupthink.

The blog Armed and Dangerous has a provocative post entitled: Error Cascade: A definition and examples. Some excerpts:

In medical jargon, an “error cascade” is something very specific: a series of escalating errors in diagnosis or treatment, each one amplifying the effect of the previous one. This is a well-established term in the medical literature: this abstract is quite revealing about the context of use.

There’s a slightly different term, information cascade, which is used to describe the propagation of beliefs and attitudes through crowd psychology. Information cascades occur because humans are social animals and tend to follow the behavior of those around them. When the social incentives are right, humans will substitute the judgment of others for their own.
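
(An illustrative aside, not part of the excerpt: the substitution mechanism is easy to see in a toy model. The Python sketch below is a minimal version of the classic sequential-choice cascade model of Bikhchandani, Hirshleifer and Welch (1992); all parameter names and values are invented for illustration.)

```python
import random

def simulate_cascade(n_agents=30, signal_accuracy=0.7, seed=3):
    """Toy sequential-choice cascade (after Bikhchandani et al., 1992).

    The true state is 1. Each agent gets a private signal that is
    correct with probability signal_accuracy, sees all earlier public
    choices, and picks whichever option the combined tally (earlier
    choices plus own signal) favors, breaking ties with the private
    signal. Once an early run of identical choices dominates the
    tally, later agents rationally ignore their own signals: the
    judgment of others is substituted for their own.
    """
    random.seed(seed)
    true_state = 1
    public_choices = []
    for _ in range(n_agents):
        signal = true_state if random.random() < signal_accuracy else 1 - true_state
        votes_for_1 = sum(public_choices) + signal
        votes_for_0 = len(public_choices) + 1 - votes_for_1
        if votes_for_1 != votes_for_0:
            choice = 1 if votes_for_1 > votes_for_0 else 0
        else:
            choice = signal  # tie: fall back on the private signal
        public_choices.append(choice)
    return public_choices

print(simulate_cascade())
# An unlucky run of early wrong signals can lock every later agent
# onto the wrong answer, even though each choice is individually rational.
```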

A useful, related concept is preference falsification, the act of misrepresenting one’s desires or beliefs under perceived social pressures. Preference falsification amplifies information cascades — humans don’t just substitute the judgment of others for their own, they talk themselves into beliefs most around them don’t actually hold but have become socially convinced they should claim to hold!

I use the term “error cascade” with a meaning halfway between the restricted sense of the medical literature and “information cascade”, and I apply it specifically to a kind of bad science, especially bad science recruited in public-policy debates. A scientific error cascade happens when researchers substitute the reports or judgment of more senior and famous researchers for their own, and incorrectly conclude that their own work is erroneous or must be trimmed to fit a “consensus” view.

But it doesn’t stop there. What makes the term “cascade” appropriate is that those errors spawn other errors in the research they affect, which in turn spawn further errors. It’s exactly like a cascade from an incorrect medical diagnosis. The whole field surrounding the original error can become clogged with theory that has accreted around the error and is poorly predictive or over-complexified in order to cope with it.

Numerous examples are given, but let’s cut straight to the climate change chase:

In extreme cases, entire fields of inquiry can go down a rathole for years because almost everyone has preference-falsified almost everyone else into submission to a “scientific consensus” theory that is (a) widely but privately disbelieved, and (b) doesn’t predict or retrodict observed facts at all well. In the worst case, the field will become pathologized — scientific fraud will spread like dry rot among workers overinvested in the “consensus” view and scrambling to prop it up. Yes, anthropogenic global warming, I’m looking at you!

There’s an important difference between the AGW rathole and the others, though. Errors in the mass of the electron, or the human chromosome count, or structural analyses of obscure languages, don’t have political consequences (I chose Chomsky, who is definitely politically active, in part to sharpen this point). AGW theory most certainly does have political consequences; in fact, it becomes clearer by the day that the IPCC assessment reports were fraudulently designed to fit the desired political consequences rather than being based on anything so mundane and unhelpful as observed facts.

When a field of science is co-opted for political ends, the stakes for diverging from the “consensus” point of view become much higher. If politicians have staked their prestige and/or hopes for advancement on being the ones to fix a crisis, they don’t like to hear that “Oops! There is no crisis!” — and where that preference leads, grant money follows. When politics co-opts a field that is in the grip of an error cascade, the effect is to tighten that grip to the strangling point.

Consequently, scientific fields that have become entangled with public-policy debates are far more likely to pathologize — that is, to develop inner circles that collude in actual misconduct and suppression of refuting data rather than innocently perpetuating a mistake.

So…how do you tell when a research field is in the grip of an error cascade? The most general indicator I know is consilience failures. Eventually, one of the factoids generated by an error cascade is going to collide with a well-established piece of evidence from another research field that is not subject to the same groupthink.

Here’s an example: Serious alarm bells rang for me about AGW when the “hockey team” edited the Medieval Warm Period out of existence. I knew about the MWP because I’d read Annalist-style histories that concentrated on things like crop-yield descriptions from primary historical sources, so I knew that in medieval times wine grapes — implying what we’d now call a Mediterranean climate — were grown as far north as southern England and the Lake Mälaren region of Sweden! When the primary historical evidence grossly failed to match the “hockey team’s” paleoclimate reconstructions, it wasn’t hard for me to figure which had to be wrong.

Consilience failures offer a way to spot an error cascade at a relatively early stage, well before the field around it becomes seriously pathologized. At later stages, the disconnect between the observed reality in front of researchers’ noses and the bogus theory may increase enough to cause problems within the field. At that point, the amount of peer pressure required to keep researchers from breaking out of the error cascade increases, and the operation of social control becomes more visible.

You are well into this late stage when anyone invokes “scientific consensus”. Science doesn’t work by consensus, it works by making and confirming predictions. Science is not democratic; there is only one vote, only Mother Nature gets to cast it, and the results are not subject to special pleading. When anyone attempts to end debate by insisting that a majority of scientists believe some specified position, this is the social mechanism of error cascades coming into the open and swinging a wrecking ball at actual scientific method right out where everyone can watch it happening.

The best armor against error cascades is knowing how this failure mode works so you can spot the characteristic behaviors. Talk of “deniers” is another one; that, and the moralistic quasi-religious language that it goes with, is a leading indicator that scientific method has left the building. Sound theory doesn’t have to be buttressed by demonizing its opponents; it demonstrates itself with predictive success.

JC comment: I think the concept of error cascade is interesting and relevant to climate science. I think this particular article goes over the top in essentially dismissing all of AGW as junk science, but his perception is correct that once you start invoking scientific consensus and deniers, you lay yourself open to the charge of junk science.

IMO the error cascade in the IPCC argument starts here: multidecadal and longer modes of natural internal variability are dismissed in the attribution arguments, based upon a flawed ‘detection’ of unusual warming (relative to natural variability). That detection relies on climate model simulations whose natural internal variability on time scales longer than ~20 years is substantially lower (by a factor of 2-3) than observed variability (which is itself uncertain). Dangerous climate-related impacts are then attributed to AGW, which leads to a policy prescription of CO2 mitigation. When people say the hockey stick and millennial climate reconstructions don’t really matter, I strongly disagree, since these data are crucial for empirical support of the detection arguments.
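
To make that comparison concrete, here is a minimal sketch (synthetic stand-in series rather than real data, and a much cruder filter than any actual detection study would use) of the kind of low-frequency variability ratio at issue:

```python
import numpy as np

def multidecadal_std(series, window=21):
    """Std. dev. of the low-frequency component, estimated here with a
    simple centered moving average; real detection studies use more
    careful filtering and must also remove the forced signal."""
    kernel = np.ones(window) / window
    return np.std(np.convolve(series, kernel, mode="valid"))

def red_noise(n, phi, sigma, rng):
    """AR(1) ('red') noise: more persistence (phi) means more
    multidecadal variability."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

rng = np.random.default_rng(0)
# Synthetic stand-ins for ~150 years of annual temperature anomalies:
# an 'observed-like' series with strong persistence and a 'model-like'
# series with weaker low-frequency variability. The numbers are invented.
observed_like = red_noise(150, phi=0.9, sigma=0.1, rng=rng)
model_like = red_noise(150, phi=0.6, sigma=0.1, rng=rng)

ratio = multidecadal_std(observed_like) / multidecadal_std(model_like)
print(f"multidecadal variability ratio (obs-like / model-like): {ratio:.1f}")
```

If a model’s control runs understate this ratio, warming that is actually within the range of natural variability can be ‘detected’ as unusual.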

With regard to this statement: “Eventually, one of the factoids generated by an error cascade is going to collide with a well-established piece of evidence from another research field that is not subject to the same groupthink.” It seems to me that any such challenge from outside the field would most likely come from the solar community.

What interested me particularly is the concept of “consilience failure.” The complexity of the climate system makes consilience failure harder to identify than in some of the other case studies presented in the article. The case for AGW is made by a consilience of evidence argument (here and here), which is basically “multiple lines of evidence.” If one of the lines of evidence turns out to be flawed, then how does this influence the overall argument? I discussed this on an earlier thread Frames and narrative in climate science:

The “doesn’t matter” versus “death knell” interpretations can be explained by the use of two different logics, represented by the jigsaw puzzle analogy and the house of cards analogy. Consider a partially completed jigsaw puzzle, with many pieces in place, some pieces tentatively in place, and some missing pieces. Default reasoning allows you to infer the whole picture from an incomplete puzzle if there is not another picture that is consistent with the puzzle in its current state. Under a monotonic logic, adding new pieces and locking existing pieces into place increases what is known about the picture. For a climate scientist having a complex mental model of interconnected evidence and processes represented by the jigsaw puzzle, the evidence in the North report merely jiggled loose a few puzzle pieces but didn’t change the overall picture. The skeptics, lacking the puzzle frame but focused on the specific evidence of the North report, viewed the evidence as collapsing the house of cards and justifying major belief revision on the subject. Which frame is “correct”? Well, both frames are too simplistic, and each is a heuristic used in the absence of formal logical arguments. The puzzle frame is better suited to the complexity of the problem, but as a mental model it can be subject to many cognitive biases.
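
One way to make the two frames concrete is a toy Bayesian calculation (the numbers and labels below are invented for illustration; real lines of evidence are nothing like this clean). If the lines of evidence are genuinely independent, each contributes a separable Bayes factor and withdrawing one moves the posterior only modestly, which is the jigsaw frame; if several lines secretly share the same flawed step, the factors are not separable and the house of cards frame is closer to the truth:

```python
def posterior(prior_odds, bayes_factors):
    """Combine independent lines of evidence as multiplicative
    Bayes factors on the odds of a hypothesis."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds / (1.0 + odds)

# Invented Bayes factors for hypothetical independent lines of evidence.
lines = {
    "paleo reconstructions": 3.0,
    "instrumental record": 4.0,
    "model fingerprints": 5.0,
    "radiative physics": 10.0,
}

full = posterior(1.0, lines.values())
reduced = posterior(1.0, [bf for name, bf in lines.items()
                          if name != "paleo reconstructions"])
print(f"all lines combined: {full:.3f}")     # jigsaw frame: robust
print(f"one line withdrawn: {reduced:.3f}")  # barely moves, if truly independent
```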

And this takes us full circle back to the points I made in my Reasoning About Climate Uncertainty paper: the ways of combining evidence, the associated uncertainties, and the associated logics become critical in determining how you would even go about falsifying the theory, or inferring anything about the theory from a comparison of model predictions and observations.

In googling around on the topic, I also encountered something called ‘cascade analysis,’ with which I am unfamiliar. Let me know if you are knowledgeable about this and find it to be of relevance.
