
Growing pressure on researchers to have a high research output could mean quality of scientific results decreases. Betsy Herbert explores the problem and reports on possible solutions suggested by a University of Bristol study.

Science is being produced on a larger scale than ever before. Approximately 2.5 million research papers will be published in 2016 – and since global scientific output doubles every nine years, this figure is only set to rise.

While much of this growth is due to the increasing number of people entering research, there is growing concern that today's researchers are publishing too much, too quickly, and that, as a consequence, their results are flawed.

60% of psychology studies are irreproducible and therefore putatively incorrect

A recent study carried out by psychologists from the Universities of Bristol and Exeter implies just this. The research, published in the journal PLOS Biology last month, suggests that up to 60% of psychology studies are irreproducible and therefore putatively incorrect.

It was part of a larger effort known as “The Reproducibility Project: Psychology”, in which 270 scientists from across the world aimed to reproduce the findings of 100 psychology studies and put their validity to the test.

Worryingly, only 39 out of the 100 yielded replicable findings – and while this reflects, to an extent, the inherent variability and uncertainty in scientific conclusions, it is interpreted by some as part of a growing problem of “failed” reproducibility in research.

A ‘hyper-competitive’ culture providing little incentive for replicating findings

In searching for the cause of this phenomenon, many point the finger at the current system of funding and merit, which has bred a ‘hyper-competitive’ culture providing little incentive for replicating findings.

In order to earn respect and prestige, and therefore be more successful in grant applications, a scientist must publish their research in the most respected and prestigious journals – Nature or Science, for example.

However, the most prestigious journals only publish novel, ‘breakthrough’ findings; the research which ‘goes where no one has gone before’. This pressure to publish in leading journals has therefore caused researchers to abandon the (albeit noble and correct) task of carrying out confirmatory studies to check and validate their findings, in favour of pursuing small, exploratory studies which may yield more surprising results.

The psychologists involved in the Reproducibility Project created a mathematical model to emulate how a researcher who is trying to maximise their impact and reputation may behave, and to discover what proportion of time they invest in looking for exciting, novel results rather than confirming previous findings.

They discovered that the best tactic for career progression is to carry out many small-scale experimental studies, identify the ones that produced the most surprising results, and publish these alone, to stand a chance of making it into a high-end journal.

The problem here is that the smaller the sample size, the lower the statistical power of the experiment: a small study misses most real effects while the false-alarm rate stays the same, so small-scale studies are likely to be riddled with false positives and unlikely to reflect genuine effects.
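To make that concrete, here is a minimal back-of-the-envelope simulation (my own sketch, not the model from the PLOS Biology paper). It assumes an arbitrary base rate of true effects, a modest effect size and the conventional 0.05 significance threshold, and counts what share of ‘significant’ results are false alarms at different sample sizes.

```python
# Illustrative simulation only – the base rate, effect size and thresholds
# below are assumptions chosen for illustration, not figures from the study.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def false_positive_share(n_per_group, n_studies=10_000,
                         true_effect_rate=0.1, effect_size=0.5, alpha=0.05):
    """Fraction of 'significant' results that are false positives."""
    true_pos = false_pos = 0
    for _ in range(n_studies):
        has_effect = rng.random() < true_effect_rate
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size if has_effect else 0.0, 1.0, n_per_group)
        _, p = ttest_ind(a, b)
        if p < alpha:
            if has_effect:
                true_pos += 1
            else:
                false_pos += 1
    return false_pos / max(true_pos + false_pos, 1)

for n in (10, 30, 150):
    print(f"n = {n:>3} per group: "
          f"{false_positive_share(n):.0%} of significant results are false positives")
```

With those assumed numbers, most ‘significant’ results from the smallest studies are false alarms, and the share falls steadily as the sample size grows.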

Dr Andrew Higginson of Exeter University, one of the authors of the study, laments “so much money is wasted doing research from which the results can’t be trusted; a significant finding might be just as likely to be a false positive as actually be measuring a real phenomenon.”

So how might these unhealthy habits be overcome? Professor Marcus Munafò of the University of Bristol, the second author of the study, believes a short-term solution lies in the hands of the journal editors and reviewers, insisting “they should be much stricter about good statistical procedures – they should insist on large sample sizes and tougher statistical criteria for deciding whether an effect has been found.”
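As a rough illustration of that second suggestion, the toy simulation above can be re-run with a stricter significance threshold (again, the numbers are assumptions rather than results from the study).

```python
# Same illustrative function as above, with a tougher threshold (alpha = 0.005):
# a smaller share of the "significant" results are now false positives.
for n in (10, 30, 150):
    print(f"n = {n:>3}, alpha = 0.005: "
          f"{false_positive_share(n, alpha=0.005):.0%} false positives among significant results")
```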

Indeed, a number of journals and funding bodies have begun introducing ‘submission checklists’ which require authors to explain their methodology and provide evidence of good scientific practice, including a justification of their chosen sample size.

Ultimately, however, change needs to occur on a grander scale: we must address the manner in which merit is awarded.

“The best thing for scientific progress would be a mixture of medium-sized exploratory studies and large-scale confirmatory studies,” advises Dr Higginson.  “Our work suggests that researchers would be more likely to do this if funding agencies and promotion committees rewarded the asking of important questions and good methodology rather than surprising findings and exciting interpretations.”



