Solutions

Roadmap

Munafò et al. (2017)

Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021

Abstract

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

Figure 1

Figure 1 from Munafò et al. (2017), https://doi.org/10.1038/s41562-016-0021

Improving methods

Protecting against cognitive biases

Improving methodological training

Implementing independent methodological support

Encouraging collaboration and team science

Improving reporting and dissemination

Promoting study pre-registration (a minimal sketch of the commitment step follows this outline)

Improving the quality of reporting

Improving reproducibility

Promoting transparency and open science

Improving evaluation

Diversifying peer review

Changing incentives
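
To make the pre-registration item above concrete: pre-registration gets its force from committing to hypotheses and an analysis plan before the data exist. Below is a minimal sketch of that commitment step, assuming a plain-text plan file; the file name and workflow are illustrative, and in practice a registry such as the OSF or a Registered Report handles the timestamping.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_plan(path: str) -> str:
    """Hash a pre-registered analysis plan.

    Publishing the digest before data collection (in a registry,
    a signed email, or a public commit) makes any later change
    to the plan detectable.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{stamp}  sha256:{digest}"

# Hypothetical plan file; the printed line is what gets registered.
print(fingerprint_plan("analysis_plan.md"))
```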

Begley (2013)

Begley, C. G. (2013). Six red flags for suspect work. Nature, 497(7450), 433–434. https://doi.org/10.1038/497433a

Were experiments performed blinded?
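
Blinding means that whoever scores the outcomes does not know which condition each sample came from. A minimal sketch of label blinding, with illustrative names and file paths:

```python
import csv
import secrets

def blind_labels(samples: dict[str, str], key_path: str) -> dict[str, str]:
    """Replace condition labels with opaque random codes.

    `samples` maps sample IDs to true conditions ('treatment'/'control').
    The code-to-condition key is written to `key_path`, to be held by a
    third party until the analysis is complete.
    """
    blinded, key = {}, []
    for sample_id, condition in samples.items():
        code = secrets.token_hex(4)  # opaque 8-character code
        blinded[sample_id] = code
        key.append((code, condition))
    with open(key_path, "w", newline="") as f:
        csv.writer(f).writerows([("code", "condition"), *key])
    return blinded

# The analyst sees only the codes until unblinding.
print(blind_labels({"s1": "treatment", "s2": "control"}, "blinding_key.csv"))
```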

Were basic experiments repeated?

Were all the results presented?

Were there positive and negative controls? Often in the non-reproducible, high-profile papers, the crucial control experiments were excluded or mentioned as ‘data not shown’.

Were reagents validated?

Were statistical tests appropriate?
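
An ‘appropriate’ test is one whose assumptions actually hold for the data at hand. A minimal sketch of one such check for two independent groups, assuming SciPy is available; the screening threshold and fallback choice are illustrative rather than a universal rule:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Choose a two-sample test after screening for normality.

    Shapiro-Wilk screens each group; if either looks non-normal,
    fall back to the rank-based Mann-Whitney U test instead of
    assuming a t-test is valid.
    """
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        return "Welch t-test", stats.ttest_ind(a, b, equal_var=False)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)

# Simulated example data with a fixed seed for reproducibility.
rng = np.random.default_rng(2013)
name, result = compare_groups(rng.normal(0, 1, 30), rng.normal(0.5, 1, 30))
print(name, round(result.pvalue, 4))
```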

Why do we repeatedly see these poor-quality papers in basic science? In part, it is because there is no real consequence for investigators or journals. It is also because many busy reviewers (and, disappointingly, even co-authors) do not actually read the papers, because journals favour filling their pages with simple, complete ‘stories’, and because of an apparent failure to recognize authors’ competing interests, beyond direct financial interests, that may interfere with their judgement.

References

Begley, C. G. (2013). Six red flags for suspect work. Nature, 497(7450), 433–434. https://doi.org/10.1038/497433a
Gilmore, R. O., Cole, P. M., Verma, S., van Aken, M. A. G., & Worthman, C. M. (2020). Advancing scientific integrity, transparency, and openness in child development research: Challenges and possible solutions. Child Development Perspectives, 14(1), 9–14. https://doi.org/10.1111/cdep.12360
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., … Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
SRCD. (2019). Policy on scientific integrity, transparency, and openness. Society for Research in Child Development. Retrieved from https://www.srcd.org/policy-scientific-integrity-transparency-and-openness