Media Briefings

THE CRISIS OF TRUST IN SCIENCE: Evaluating solutions to the problem of false positives

  • Published Date: March 2015

Discouraging minor statistical sloppiness by scientists will reduce more severe questionable research practices such as outright data manipulation. That is the central conclusion of research by Zacharias Maniadis to be presented at the Royal Economic Society’s annual conference 2015. His study explores the potential effects of proposed policies of increased transparency and monitoring on the reliability of scientific results.

Science has entered a crisis of trust: many empirical results, usually expressed in terms of statistical significance, appear to be surprisingly hard to replicate, eroding public trust in scientific results and, possibly, in scientific methodology more generally.

This has been recognised both in the popular press (a recurrent theme, for example, in The Economist) and in academic circles, leading to a variety of proposals for policy remedies, for instance in the form of increased requirements on the transparency of research design.

It is remarkable, however, that economics has so far been content to remain largely silent on this issue. It is, after all, the dismal science that specialises in the analysis of strategic behaviour and the provision of adequate incentives to implement desirable outcomes (and otherwise is not shy of providing helpful if not always wanted insights to neighbouring fields).

This study aims to take up this challenge and provide a first step in examining the theoretical effects of some of the proposed policies of increased transparency and monitoring on the reliability of scientific results. The idea behind the proposals is that by imposing transparency, researchers will refrain from engaging in questionable practices that tend to make results difficult to interpret and generalise.

The main result is that discouraging slight transgressions, for instance in the form of statistical sloppiness such as unreported multiple testing, will have a knock-on effect and reduce more severe questionable research practices such as outright data manipulation.

The study examines a setting where researchers are intrinsically motivated to conduct research ethically (or, equivalently, to maintain a good reputation), but are also concerned about being published in a world with limited attention, such as a limited number of top journals. The latter is crucial, as it introduces an externality.

The return to questionable research practices for an individual researcher depends on other researchers’ behaviour: the more frequent lighter transgressions are, the higher the expected return to outright manipulation, which guarantees a unique result that receives wide attention and numerous citations.

Therefore a policy that reduces lighter transgressions does not, as might be expected at first glance, lead to substitution into more severe misbehaviour. On the contrary, reducing the incidence of lighter misdemeanours will reduce the competitiveness of the race to publication and thus ease the pressure to engage in questionable practices.
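The externality at work here can be sketched with a toy best-response calculation. This is not the paper's model; the payoff form and all parameters are hypothetical, chosen only to illustrate why reducing lighter transgressions also reduces the incentive for severe manipulation rather than inviting substitution into it.

```python
# Toy sketch (hypothetical parameters, not the paper's model) of the
# externality: the return to severe manipulation rises with the fraction q
# of peers committing lighter transgressions, because manipulation's
# guarantee of a striking, unique result is worth more when the field is
# crowded with inflated findings.

def share_manipulating(q, n=1000):
    """Fraction of n researchers with heterogeneous ethics costs who find
    outright manipulation worthwhile, given the peers' rate q of lighter
    transgressions (illustrative linear payoff)."""
    # Ethics costs spread uniformly on [0, 2); researcher i manipulates
    # only if the competitive return exceeds her personal cost.
    costs = [2 * i / n for i in range(n)]
    gross_return = 1.0 * (1 + q)  # return grows with peers' transgressions
    return sum(1 for c in costs if gross_return > c) / n

# A policy that cuts lighter transgressions (q: 0.6 -> 0.2) also lowers
# the share choosing severe manipulation -- no substitution effect.
high_q_share = share_manipulating(0.6)  # 0.8 in this parameterisation
low_q_share = share_manipulating(0.2)   # 0.6 in this parameterisation
assert low_q_share < high_q_share
```

In this sketch the two forms of misconduct are strategic complements: lowering one lowers the return to the other, which is the direction of the knock-on effect described above.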

Policies that aim to reduce more severe transgressions – for example, requiring replication for publication – will have the opposite effect, increasing the rewards for lighter transgressions and the incidence of these practices.

The overall effect on the reliability of scientific results is ambiguous. It will depend on whether one is concerned more with the quantity of biased results, which tends to increase, or with the size of the bias per study, which is lower for lighter transgressions than for outright manipulation.
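The ambiguity can be made concrete with simple arithmetic. The counts and per-study bias magnitudes below are entirely hypothetical; they only show how a policy that trades fewer severe transgressions for more light ones can move total distortion in either direction.

```python
# Illustrative arithmetic (hypothetical numbers) for the ambiguous net
# effect: total distortion is the count of each transgression type times
# the per-study bias it introduces.

def total_bias(n_light, n_severe, bias_light=0.1, bias_severe=0.5):
    """Aggregate distortion across studies; lighter transgressions bias
    each study less than outright manipulation (magnitudes illustrative)."""
    return n_light * bias_light + n_severe * bias_severe

before = total_bias(n_light=40, n_severe=10)  # 40*0.1 + 10*0.5 = 9.0
after = total_bias(n_light=70, n_severe=2)    # 70*0.1 + 2*0.5  = 8.0
# More biased studies overall (72 vs 50), yet less total distortion here;
# raise bias_light and the ranking reverses.
assert after < before
```

Whether the policy helps on net thus depends on exactly the trade-off the study identifies: the quantity of biased results versus the size of the bias per study.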

ENDS


Zacharias Maniadis, University of Southampton. Email: z.maniadis@soton.ac.uk; mobile: 07475 530271