Efforts to enhance reproducibility in a human performance research project

Jeffrey A. Drocco*, Kyle Halliday, Benjamin J. Stewart, Sarah H. Sandholtz, Michael D. Morrison, James B. Thissen, Nicholas A. Be, Christopher E. Zwilling, Ramsey R. Wilcox, Steven A. Culpepper, Aron K. Barbey, Crystal J. Jaing

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Ensuring the validity of results from funded programs is a critical concern for agencies that sponsor biological research. In recent years, the open science movement has sought to promote reproducibility by encouraging the sharing not only of finished manuscripts but also of the data and code supporting their findings. While these innovations have lent support to third-party efforts to replicate the calculations underlying key results in the scientific literature, fields of inquiry where privacy considerations or other sensitivities preclude the broad distribution of raw data or analysis code may require a more targeted approach to promote the quality of research output.

Methods: We describe efforts oriented toward this goal that were implemented in one human performance research program, Measuring Biological Aptitude, organized by the Defense Advanced Research Projects Agency's Biological Technologies Office. Our team implemented a four-pronged independent verification and validation (IV&V) strategy comprising 1) a centralized data storage and exchange platform, 2) quality assurance and quality control (QA/QC) of data collection, 3) test and evaluation of performer models, and 4) an archival software and data repository.

Results: Our IV&V plan was carried out with assistance from both the funding agency and the participating teams of researchers. QA/QC of data acquisition aided in process improvement and the flagging of experimental errors. Holdout validation set tests provided an independent gauge of model performance.

Conclusions: In circumstances that do not support a fully open approach to scientific criticism, standing up independent teams to cross-check and validate the results generated by primary investigators can be an important tool to promote the reproducibility of results.
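The holdout validation tests mentioned in the Results can be pictured with a minimal sketch: an independent evaluation team scores performer-submitted predictions against ground truth that was withheld from the performers. The file names, column names, and choice of error metric below are illustrative assumptions, not details taken from the published program.

```python
# Minimal sketch of an independent holdout evaluation (illustrative only).
# Assumes two CSV files: performer predictions and a withheld ground-truth set,
# each keyed by a shared subject identifier.
import pandas as pd
from sklearn.metrics import mean_absolute_error


def score_holdout(predictions_csv: str, holdout_csv: str) -> float:
    """Score performer predictions against a withheld ground-truth set.

    The performer never sees the holdout file; the IV&V team joins the two
    tables on a subject identifier and reports an error metric as the
    independent gauge of model performance.
    """
    preds = pd.read_csv(predictions_csv)   # assumed columns: subject_id, predicted_score
    truth = pd.read_csv(holdout_csv)       # assumed columns: subject_id, observed_score
    merged = truth.merge(preds, on="subject_id", how="inner", validate="one_to_one")
    return mean_absolute_error(merged["observed_score"], merged["predicted_score"])


if __name__ == "__main__":
    mae = score_holdout("performer_predictions.csv", "withheld_holdout.csv")
    print(f"Holdout MAE: {mae:.3f}")
```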

Original language: English
Article number: 1430
Journal: F1000Research
Volume: 12
DOIs:
State: Published - 2023
Externally published: Yes

Keywords

  • data quality
  • evaluation methodology
  • reproducibility of results
  • validation studies
