by Judith Curry
This list will help non-scientists to interrogate advisers and to grasp the limitations of evidence – William J. Sutherland, David Spiegelhalter and Mark A. Burgman.
Nature has published a very interesting comment, titled Twenty tips for interpreting scientific evidence. Excerpts:
Perhaps we could teach science to politicians? It is an attractive idea, but which busy politician has sufficient time? The research relevant to the topic of the day is interpreted for them by advisers or external advocates.
In this context, we suggest that the immediate priority is to improve policy-makers’ understanding of the imperfect nature of science. The essential skills are to be able to intelligently interrogate experts and advisers, and to understand the quality, limitations and biases of evidence.
To this end, we suggest 20 concepts that should be part of the education of civil servants, politicians, policy advisers and journalists — and anyone else who may have to interact with science or scientists. Politicians with a healthy scepticism of scientific advocates might simply prefer to arm themselves with this critical set of knowledge.
Differences and chance cause variation. The real world varies unpredictably. Science is mostly about discovering what causes the patterns we see. Why is it hotter this decade than last? There are many explanations for such trends, so the main challenge of research is teasing apart the importance of the process of interest from the innumerable other sources of variation.
No measurement is exact. Practically all measurements have some error. If the measurement process were repeated, one might record a different result. In some cases, the measurement error might be large compared with real differences. Results should be presented with a precision that is appropriate for the associated error, to avoid implying an unjustified degree of accuracy.
Bias is rife. Experimental design or measuring devices may produce atypical results in a given direction. Confirmation bias arises when scientists find evidence for a favoured theory and then become insufficiently critical of their own results, or cease searching for contrary evidence.
Bigger is usually better for sample size. The average taken from a large number of observations will usually be more informative than the average taken from a smaller number of observations. That is, as we accumulate evidence, our knowledge improves. This is especially important when studies are clouded by substantial amounts of natural variation and measurement error.
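The sample-size point can be sketched with a quick simulation (the true value, noise level, and sample sizes below are illustrative assumptions of mine, not from the article): averaging more noisy measurements shrinks the spread of the resulting estimate.

```python
import random

random.seed(42)

def mean_of_sample(n, true_mean=10.0, noise=5.0):
    # Draw n noisy measurements around a true value and average them.
    return sum(random.gauss(true_mean, noise) for _ in range(n)) / n

def spread(n, trials=1000):
    # Spread (standard deviation) of the sample mean across repeated experiments.
    means = [mean_of_sample(n) for _ in range(trials)]
    avg = sum(means) / trials
    return (sum((m - avg) ** 2 for m in means) / trials) ** 0.5

small = spread(10)    # typical error of an average of 10 observations
large = spread(1000)  # typical error of an average of 1000 observations
print(small, large)   # the large-sample average varies far less
```

The large-sample estimate is roughly ten times more stable here, which is the sense in which accumulating evidence improves knowledge.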
Correlation does not imply causation. It is tempting to assume that one pattern causes another. However, the correlation might be coincidental, or it might be a result of both patterns being caused by a third factor — a ‘confounding’ or ‘lurking’ variable.
Regression to the mean can mislead. Extreme patterns in data are likely to be, at least in part, anomalies attributable to chance or error.
Extrapolating beyond the data is risky. Patterns found within a given range do not necessarily apply outside that range.
Scientists are human. Scientists have a vested interest in promoting their work, often for status and further research funding, although sometimes for direct financial gain. This can lead to selective reporting of results and, occasionally, exaggeration. Peer review is not infallible: journal editors might favour positive findings and newsworthiness. Multiple, independent sources of evidence and replication are much more convincing.
Feelings influence risk perception. Broadly, risk can be thought of as the likelihood of an event occurring in some time frame, multiplied by the consequences should the event occur. People’s risk perception is influenced disproportionately by many things, including the rarity of the event, how much control they believe they have, the adverseness of the outcomes, and whether the risk is voluntary or not.
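The likelihood-times-consequence definition can be made concrete with a tiny calculation (the probabilities and losses are hypothetical): a rare, dramatic event and a common, mundane one can carry the same expected loss, even though people typically perceive them very differently.

```python
def expected_risk(probability_per_year, consequence):
    # Risk as likelihood multiplied by consequence (the article's rough definition).
    return probability_per_year * consequence

# Hypothetical numbers for illustration only.
rare = expected_risk(1e-5, 1_000_000)  # rare catastrophe, large loss
common = expected_risk(0.1, 100)       # frequent nuisance, small loss
print(rare, common)  # both work out to an expected loss of about 10 per year
```

The formula says the two risks are equal; the psychology of dread, control, and voluntariness is what makes them feel so different.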
Data can be dredged or cherry picked. Evidence can be arranged to support one point of view. The question to ask is: ‘What am I not being told?’
JC comments: I really like the idea behind this article:
What we offer is a simple list of ideas that could help decision-makers to parse how evidence can contribute to a decision, and potentially to avoid undue influence by those with vested interests.
I suspect this article will not be appreciated by scientists who are playing power politics with their expertise, or by advocates promoting scientism with cherry-picked evidence.
I picked 10 of the 20 tips that I thought were of greatest relevance to the climate change debate. So, what do you think of this list? What would you add?