
Promoting More Rigorous Science

Posted by Center for Open Science on 08 Aug 2016

As scientists, we value transparency and reproducibility. However, we are rewarded for exciting and unexpected results. The present culture of science does not promote the ideal process of science. The result of this disconnect between practice and values is a published literature that does not represent the complete body of evidence, and work that cannot be replicated. John Ioannidis predicted that this clash would produce irreproducible findings (Ioannidis, 2005), and the Open Science Collaboration demonstrated the challenge of reproducing published results (OSC, 2015).

The Center for Open Science (COS) was founded in 2013 to enhance scientific rigor by increasing the transparency of the entire research workflow. If every aspect of the research workflow can be reproduced and connected, then barriers to replicating previous work fall and the ability to critically evaluate work is strengthened. This vision will not be easily achieved. The rewards that lead to present behaviors have to be addressed and changed, and those rewards are created by the complete academic ecosystem: publishers, universities, academic societies, and funding agencies.

Because of this challenge, COS works in three areas. First, we study the extent of the problem and its proposed solutions. Our meta-science work on the Reproducibility Projects in Psychology and Cancer Biology estimates the ability to reproduce findings across a discipline. The work we facilitate in the Many Labs projects replicates single studies across contexts in order to estimate the boundary conditions of any given effect. We also evaluate the effectiveness of our initiatives, for example, the effect of Open Practice Badges on increasing the rates of data and research materials sharing.

Second, we educate and advocate for better practices through community efforts. This not only includes workshops, webinars, and materials on reproducible practices, but also policy guidelines and competitions to encourage uptake. Our advocacy efforts seek to shift the incentives in the research ecosystem in order to reward rigorous and transparent practices over exciting, but irreproducible, findings.

The Transparency and Openness Promotion (TOP) Guidelines provide eight modular standards that journals, publishers, or funding agencies can adopt in order to reward transparency. Each of the eight standards can be adopted at one of three levels of increasing rigor, thus reducing barriers to adoption while providing guidance for future improvement. For example, the standard on data transparency can be adopted at the first level, which requires disclosure of whether or not data are publicly available; at the second level, which requires data sharing (with editorial exceptions permitted for legal or ethical constraints); or at the third level, which adds verification that the data can be used to reproduce the primary findings of a study. While few are ready to commit to the most stringent level, disclosure or sharing requirements can be readily implemented.

The Preregistration Challenge is a competition in which 1,000 researchers will receive $1,000 prizes for publishing the results of their preregistered work. At its heart, it is an education campaign designed to spur adoption of preregistration, in which key analytical decisions are specified before conducting a study. Preregistration makes clear the distinction between hypothesis-testing, confirmatory work and hypothesis-generating, exploratory work. That distinction can be surprisingly easy to blur as researchers dig through a dataset, making many small decisions about exclusion and stopping rules, how to combine variables into predictor or outcome measures, which covariates to include, and more.

One fear about preregistration is that it will tie researchers’ hands too much. Science does rely on serendipitous findings, and there is a risk of missing an unexpected result; Type II errors (false negatives) can hinder our progress. But when confirming an expected finding, it is critical to preserve the utility of a p-value and to minimize false positives. Simply making clear when hypothesis-generating and hypothesis-testing work are being conducted will increase transparency and the strength of our assertions. Preregistration frees a researcher to explore a dataset and find those unexpected results by removing the incentive to present exploratory analyses as confirmatory.
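To make that point concrete, here is a minimal simulation sketch (not part of the original post; it assumes Python with NumPy and SciPy). Two groups are drawn from the same distribution, so there is no true effect, yet an analyst who quietly tries a handful of defensible analysis variants and reports whichever gives the smallest p-value will see far more “significant” results than the nominal 5%. A single pre-specified test keeps the advertised error rate.

```python
# Illustrative simulation of how undisclosed analytic flexibility inflates
# the false-positive rate. All analysis variants are run on null data
# (no true group difference); the "flexible" analyst reports the best one.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_simulations = 5000
n_per_group = 30

preregistered_hits = 0  # only the single pre-specified test reaches p < .05
flexible_hits = 0       # any of the tried variants reaches p < .05

for _ in range(n_simulations):
    # Two outcome measures per participant, identical distributions in both groups.
    a = rng.normal(size=(n_per_group, 2))
    b = rng.normal(size=(n_per_group, 2))

    p_values = []
    # Variant 1: the pre-specified test on the first outcome only.
    p_values.append(stats.ttest_ind(a[:, 0], b[:, 0]).pvalue)
    # Variant 2: switch to the second outcome instead.
    p_values.append(stats.ttest_ind(a[:, 1], b[:, 1]).pvalue)
    # Variant 3: combine the two outcomes into a composite score.
    p_values.append(stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue)
    # Variant 4: exclude "outliers" beyond 2 SD on the first outcome, then retest.
    a_trim = a[np.abs(a[:, 0] - a[:, 0].mean()) < 2 * a[:, 0].std(), 0]
    b_trim = b[np.abs(b[:, 0] - b[:, 0].mean()) < 2 * b[:, 0].std(), 0]
    p_values.append(stats.ttest_ind(a_trim, b_trim).pvalue)

    preregistered_hits += p_values[0] < 0.05
    flexible_hits += min(p_values) < 0.05

print(f"False-positive rate, preregistered analysis: {preregistered_hits / n_simulations:.3f}")
print(f"False-positive rate, flexible analysis:      {flexible_hits / n_simulations:.3f}")
```

In runs of this sketch the preregistered analysis stays near the nominal 5% rate while the flexible analysis roughly doubles or triples it, even with only four variants in play.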

Finally, the third focus of our work is to create infrastructure that enables the changes for which we advocate. The Open Science Framework (OSF) is our flagship product, a free and open-source workflow management tool that enables transparent and reproducible work. The OSF allows researchers to manage complex projects and to collaborate with peers and students. It has built-in features that encourage data sharing with unique, persistent identifiers, and it can be used to register a project at multiple points in the research lifecycle. Registrations are read-only, persistent snapshots that preserve a project’s state at that moment. Preregistrations are snapshots of a project before data collection begins, and may include a complete analysis plan for any confirmatory tests to be completed. Other opportunities to make such a registered “snapshot” could be before submitting for peer review or around the time of publication.
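Because every OSF project has a persistent identifier, its metadata can also be retrieved programmatically. The short sketch below (not from the original post) assumes the OSF’s public JSON API at api.osf.io and its JSON:API response layout; the project identifier shown is hypothetical and would need to be replaced with a real public project GUID.

```python
# Illustrative sketch only: fetch public metadata for an OSF project via the
# OSF's public JSON API (v2). The endpoint layout and field names are
# assumptions based on that API's JSON:API format, and "abc12" is a
# hypothetical project identifier.

import requests

OSF_API = "https://api.osf.io/v2"
project_id = "abc12"  # hypothetical GUID of a public OSF project

response = requests.get(f"{OSF_API}/nodes/{project_id}/")
response.raise_for_status()

attributes = response.json()["data"]["attributes"]
print("Title:      ", attributes.get("title"))
print("Description:", attributes.get("description"))
print("Public:     ", attributes.get("public"))
```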

The OSF can also serve as a hub for research being conducted at an institution. These institutional landing pages can help find collaborators and surface work being conducted more quickly and easily than through traditional outlets. See, for example, how the University of Virginia is using the OSF for these purposes.

Improving the reproducibility of published scientific literature is a complex challenge that can only be undertaken as a community. The Center for Open Science encourages change through our outreach and enables change with the tools that we build.

How can you get involved?

Researchers can take the Preregistration Challenge to distinguish analyses specified prior to seeing the data from those that arose later. You can also use the Open Science Framework to manage your research, work with collaborators, or share your data.

Journal editors can promote these values by becoming signatories of the TOP Guidelines. Editors can also issue Open Practices Badges or conduct peer review before results are known using the Registered Report format.

Our vision is a field in which all parts of the research workflow are transparent. This transparency improves rigor by allowing expert evaluation where it is needed. However, this vision will not be achieved without collective action, so please join us in that work.

David Mellor

David Mellor works at the Center for Open Science in the USA.

[email protected] | @EvoMellor | https://osf.io/qthsf

This work is dedicated to the public domain under the terms specified by Creative Commons CC0 1.0 Universal.

