
Measuring Replicability to Promote Reproducibility in Hydrology
  • James Stagge, The Ohio State University (Corresponding Author)
  • David Rosenberg, Utah State University
  • Adel Abdallah, Utah State University
  • Hadia Akbar, Utah State University
  • Ryan James, Utah State University
  • Nour Atallah, Utah State University

Abstract

There have been numerous calls to promote reproducible research. This growing awareness coincides with major advances in data- and code-sharing technologies. Yet authors, journals, institutions, and funders still need to act to advance more reproducible research. Here, we suggest viewing reproducibility as a continuum that includes 1) availability of the data, models, code, and directions to use these digital artifacts, 2) replication of results, and 3) reproducibility of findings. We present a simple survey tool to assess where a peer-reviewed journal article lies on this continuum. We use the tool to assess 360 articles randomly sampled from the 1,989 articles published in 2017 in six well-regarded hydrology and water resources journals. 49% of sampled articles had some materials available online, but just 5.6% made all of the data, models, code, and directions available. For 1.6% of articles, we generated results that replicated some or all of the published results. Assessments took 5 to 14 minutes per article to determine the availability of digital artifacts and 25 to 86 minutes to replicate results (25th to 75th percentile range). The availability of data, models, code, and directions differed by journal and by each journal's data availability policy. From the 360-article sample, we estimate that 0.6% to 6.8% of all articles published in the six journals in 2017 can be replicated using their published artifacts (95% confidence interval). These results suggest several practices to improve the reproducibility of published research. First, authors should provide directions for using their data, models, and code in addition to the digital artifacts themselves. Second, on submission, journals should use a tool like ours to assess where the submission lies on the reproducibility continuum. Third, journals should adopt policies that require authors to state the intended reproducibility of their work and to place the relevant information in an easy-to-find location in the article. Fourth, journals, institutions, and funders should highlight work whose digital artifacts, results, and findings are available, replicable, and reproducible.
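
As a rough illustration of how a confidence interval on the replicable fraction can be computed, the sketch below applies a Wilson score interval to a simplified, unstratified binomial reading of the sample (assuming roughly 6 of 360 articles replicated, i.e., 1.6%). The counts and method here are assumptions for illustration only; the study's reported 0.6% to 6.8% interval reflects its actual sampling design (for example, stratification across the six journals), so the numbers will not match exactly.

# Illustrative Wilson score interval for a sample proportion.
# Assumption: ~6 of 360 sampled articles replicated (about 1.6%); the
# paper's design-based estimate (0.6%-6.8%) differs from this sketch.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

low, high = wilson_interval(successes=6, n=360)
print(f"Replicable articles: {low:.1%} to {high:.1%}")  # ~0.8% to ~3.6%

The gap between this simple interval and the published 0.6% to 6.8% range shows why the sampling design matters: treating all 360 articles as one undifferentiated draw understates the uncertainty that comes from weighting results by journal.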