by Judith Curry
One of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with one of our greatest human creations. – Richard Horton
Research integrity has been a recurring theme and concern at Climate Etc. Two new high-profile cases of scientific fraud have recently come to light:
- How the biggest fabricator in science got caught tells the story of medical researcher Yoshitaka Fujii, who fabricated data in 183 papers before statistical analysis exposed him. This is an amazing (not to mention disturbing) story.
- Data apparently faked in study on gay marriage rattles the field tells the story of a graduate student who may have faked the data in a high-profile study published in Science. This case raises the issue of the pressures on graduate students, as well as the challenges faced by their mentors.
Important reflections on the broader issues are provided in an editorial in The Lancet by its editor, Richard Horton; excerpts:
“A lot of what is published is incorrect.” I’m not allowed to say who made this remark because we were asked to observe Chatham House rules. This symposium—on the reproducibility and reliability of biomedical research, held at the Wellcome Trust in London last week—touched on one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with one of our greatest human creations.
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.
Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.
Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.
The good news is that science is beginning to take some of its worst failings very seriously. The bad news is that nobody is ready to take the first step to clean up the system.
Periodically, and perhaps increasingly of late, high-profile research misconduct is exposed, which diminishes public trust in scientists and the research enterprise (though not necessarily in ‘science’ itself). Climate science had its place in this undesirable limelight circa 2010, following Climategate.
Unlike in medical and social science, data fabrication does not seem to be a factor in climate science – there is more than enough scope for cherry-picking data and choosing statistical methods to produce pretty much any ‘desired’ result. So while we may not see actual research misconduct in climate science, bias in climate research is a major problem IMO.
Nevertheless, the incentives for scientists are much the same across all these fields. Graduate students perceive the need for a high-profile publication in order to get a faculty position at one of the prestigious universities. University resources, big salaries, big government grants, peer recognition, media attention and policy influence are the rewards of publishing high-profile papers. Nowhere does there seem to be any incentive to actually get the right answer, to attempt to reproduce or criticize a peer’s paper, or generally to behave with integrity.
Beyond the motivations of careerism, there is also the motivation to produce a result that will support perceived ‘good’ societal objectives – this seems to have been a factor in the gay marriage study, and is also a factor in climate research.
Relying on the personal integrity and ethics of individual researchers clearly isn’t adequate; the problem lies with the institutionalized incentives. I am very pleased to see scientific leaders like Richard Horton beginning to take on this daunting challenge.