Imagine the following Irish Times headline: "UCC professor sentenced to six months in prison for fabricating data behind recent Nature paper on cancer therapy." This might seem a highly unlikely scenario, but it would not be if the views of Dr Richard Smith (New Scientist, September) prevail. Smith, a former editor of the British Medical Journal, argues that scientific misconduct degrades trust in science, causes real-world harm and should be treated as a crime akin to fraud.
Science is rightly proud of its achievements. It has amassed an amazing store of knowledge about how the material world works, and science-based technology runs the modern world. Science enjoys widespread public trust, and massive public resources fund scientific research. However, it is now clear that scientific misconduct is not uncommon. This is a major worry, because it is almost impossible to overstate the harm misconduct will inflict on science unless it is effectively tackled now.
A scientist researches by proposing a hypothesis to explain a phenomenon, making predictions based on the hypothesis and carrying out experiments to see if the predictions are fulfilled. If the predicted results are achieved (a positive result), the hypothesis is supported and further predictions are tested. If predictions continue to prove correct, confidence in the hypothesis grows until eventually it is accepted as the explanation of the phenomenon, and the results are published. If predictions do not pan out (negative results), confidence in the hypothesis is lost and it must be revised or abandoned.
We can trust the honest application of science to generate reliable knowledge. However, science is carried out by scientists, human beings who are prone to the usual temptations and pressures to behave dishonestly.
The most serious scientific misconduct is making up data to support a hypothesis. This happens occasionally, but the most common form of misconduct is ignoring experimental data that do not support a hypothesis. Small "glitches" occasionally produce contrary results, but, with experience, such random noise is recognisable and can safely be ignored. But what if 20 per cent of your results contradict your hypothesis? It is patently unsafe to ignore this, yet many scientists do.
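To see why, here is a rough back-of-the-envelope sketch (my own illustration, using assumed numbers rather than figures from the article): suppose experience tells you that random glitches corrupt about 2 per cent of experimental runs. A simple binomial calculation, written here in Python, shows how improbable 20 contrary results out of 100 runs would then be if glitches were the only cause.

from math import comb

def binom_tail(n, k, p):
    # Probability of k or more contrary results in n runs, if each run
    # independently suffers a random glitch with probability p.
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Assumed, illustrative numbers: 100 runs, 20 contrary results,
# 2 per cent glitch rate.
print(binom_tail(100, 20, 0.02))  # roughly 1e-14

On those assumptions the odds are about one in a hundred million million, which is why a 20 per cent contradiction rate cannot honestly be waved away as noise.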
In a recent large study (Daniele Fanelli, PLoS ONE, May 2009), only 1.97 per cent of scientists admitted to outright falsification of data, but 33.7 per cent admitted to practices such as dropping data based on a "gut feeling" or selectively reporting results that supported their hypotheses. About 70 per cent said they had seen colleagues doing this.
Conflicting data
Why is misconduct so widespread? Robert de Vries discusses the problem in The Conversation (August 2014). Scientists should be able to publish results exactly as they find them, but this is difficult in practice. Scientific research, although mostly funded by the taxpayer, is published in pay-to-read journals run by for-profit companies. These journals, particularly the most prestigious ones, want attention-grabbing articles and have little interest in publishing studies that report negative or conflicting results.
So what do you do, having spent several years studying a problem but getting mixed results? Your prospects for promotion, further research funding, perhaps even renewal of your work contract depend on publishing a paper. You know you’re on to something because 75 per cent of your results support your hypothesis, but 25 per cent do not. Unfortunately, too many scientists succumb to such pressures and ignore the minority conflicting data.
De Vries proposes solving the problem by changing the publishing model so that journals accept any study whose methodology is sound, regardless of its results, as some open-access journals now do. If this model became universal, it would remove the incentive for scientists to hide inconvenient results. I would back it up by introducing random audits of published work for reproducibility, and severe penalties for serious fraud.
Scientific misconduct has very serious consequences. For example, the discredited study that claimed to find a link between MMR vaccination and autism scared many parents, who stopped vaccinating their children. But the most serious overall consequence for science, if misconduct is not effectively eliminated, will be loss of public trust. The public votes huge resources to support science. If this trust is lost, we will all reap the consequences.