Reproducibility Journal Club
Use this journal club activity to gain insight into the challenges of improving scientific rigor.
Primary Article:

A call for transparent reporting to optimize the predictive value of preclinical research, Landis et al.

Summary

The authors provide an overview of the issues and suggest ways to improve the reporting of preclinical animal studies in publications and grant applications. Their recommendations address specific deficiencies in the reporting of methodology and data analysis that have been identified as likely sources of reproducibility problems.

Discussion Questions:
  1. List some of the reasons why preclinical data may not be easily reproducible.
  2. Do you agree that poor reporting, often associated with poor experimental design, is a significant issue across the life sciences?
    • If you agree, do you think this is a new problem?
    • If it is a new problem, is it due to something that has changed about the way we do science?
    • What can be done to improve scientific rigor? Training courses on improving study design? Required quality control of studies? Consultation with a statistician on study design and data analysis?
  3. “Core set of reporting standards for rigorous study design” (Box 1)
    • What would you add or delete from these standards?
    • How could they be implemented to minimize additional regulatory burden?
    • What role does peer-review (for grants and publications) play in this?
    • These recommendations are not new ideas. Why are they not already more widely followed and/or reported on?
  4. What can be done to get investigators to pay greater attention to the problems and solutions?
Looking to the Future
  1. What are the implications if we don’t improve scientific rigor?
  2. What measures are needed to evaluate the impact of the suggested changes?
  3. When and how should a “re-evaluation” be performed?
  4. How can a scientific society be part of the solution through its publications, education and training for members, and/or efforts to increase awareness?
Scientific Publications
  1. What steps can journals take to enhance the scientific rigor of published experiments?
  2. Do you know of any current paths for publishing neutral data?
  3. What is the best way to encourage the publication of neutral or negative results?
  4. Does publishing of neutral results affect the reputation of the researchers, i.e., would this affect their chances for subsequent funding?
  5. Is it a bigger deal when a Nature or Science paper can’t be replicated than when a paper in an AJP journal can’t?
Improving Animal Models
  1. To what extent does physiological variation between animal strains and species contribute to replication issues? How should we address this?
  2. Do you agree or disagree with the following statement: “Animal care and use committees and ethical principles such as reduction, replacement, and refinement create pressure on investigators to propose underpowered animal studies.”
Rigorous Statistical Analysis
  1. What factors influence effect size estimates?
  2. What is the difference between sample size and effect size, and sample size and power?
  3. How do we know what is the appropriate sample size? What method do we use?
  4. Can effect size be compared between laboratories?
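To make these relationships concrete, the following sketch shows how effect size, sample size, and power fit together for a simple two-group comparison. It assumes Python with the statsmodels package, and the pilot means and standard deviation are hypothetical placeholders rather than values from any of the articles.

```python
# Sketch: relating effect size, sample size, and power for a two-group study.
# The pilot summary statistics below are hypothetical placeholders.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical pilot values: treated vs. control means and a pooled SD
mean_treated, mean_control = 12.0, 10.0
sd_pooled = 4.0

# Cohen's d: standardized effect size (mean difference / pooled SD)
effect_size = (mean_treated - mean_control) / sd_pooled  # d = 0.5

analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.5 with 80% power at alpha = 0.05
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"Required n per group: {np.ceil(n_per_group):.0f}")   # ~64 per group

# Conversely, the power actually achieved with only 10 animals per group
achieved_power = analysis.solve_power(effect_size=effect_size, nobs1=10,
                                      alpha=0.05, alternative='two-sided')
print(f"Power with n = 10 per group: {achieved_power:.2f}")  # roughly 0.2
```

The same solver can return any one of effect size, sample size, alpha, or power given the other three, which is one way to make an a priori sample-size justification a routine part of study design.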
Other Articles
  • Can animal models of disease reliably inform human studies? Van der Worp et al.
    This article addresses sources of “translational failure” between animal studies and clinical treatments in terms of “internal validity” (problems with study design) and “external validity” (differences between the animal model and the human disease condition).
  • Believe it or not: how much can we rely on published data on potential drug targets? Prinz et al.
    This paper highlights the work of a group of industry scientists who took an a posteriori look at the reproducibility of published preclinical findings and determined that only 20–25% of the studies could be replicated. The authors discuss factors that may contribute to the lack of reproducibility, including competition and bias toward publishing positive results. They conclude that failure to replicate does not necessarily indicate scientific fraud or flawed science, but they nevertheless explore the implications of reproducibility problems for industry and for science as a whole.
  • Publication bias in reports of animal stroke studies leads to major overstatement of efficacy, Sena et al.
    This paper describes publication bias toward positive, statistically significant results in experimental stroke research. Drawing on 16 systematic reviews covering a total of 525 stroke studies, the authors found that the published preclinical stroke literature overstates treatment efficacy, which suggests that hundreds of papers with neutral or negative results remain unpublished (a simulation sketch illustrating this mechanism follows this list).
  • Design, power and interpretation of studies in the standard murine model of ALS, Scott et al.
    The Scott article investigated the replication of data from a familial amyotrophic lateral sclerosis (FALS) mouse model (mice carrying at least 23 copies of the SOD1 FALS transgene). The authors characterized the effects of biological variables and developed an optimized experimental design that minimizes “noise” through optimally powered group sizes. After retesting various compounds to replicate published work, they found no positive effect for any compound (NOTE: one compound that has a clinical effect in humans did not produce a positive effect in the optimized SOD1 FALS mouse model).
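To make the overstatement mechanism described in the Sena and Scott articles concrete, here is a minimal simulation sketch (Python with NumPy and SciPy; all parameters are illustrative and not taken from either paper). It simulates many small, underpowered studies of a modest true effect and “publishes” only the positive, statistically significant ones; the published studies systematically overestimate the true effect even though each one is analyzed correctly.

```python
# Sketch: how publishing only positive, significant results from underpowered
# studies inflates the apparent treatment effect. All parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.3      # true standardized effect (Cohen's d)
n_per_group = 10       # small, underpowered groups
n_studies = 5000       # number of simulated studies

all_effects, published_effects = [], []

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)

    # Observed standardized effect (Cohen's d) for this study
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    d_observed = (treated.mean() - control.mean()) / pooled_sd
    all_effects.append(d_observed)

    # Only positive, significant studies get "published"
    if p_value < 0.05 and d_observed > 0:
        published_effects.append(d_observed)

print(f"True effect:                   {true_effect:.2f}")
print(f"Mean effect, all studies:      {np.mean(all_effects):.2f}")
print(f"Mean effect, 'published' only: {np.mean(published_effects):.2f}")
# The 'published' mean typically lands near 1.0, far above the true 0.3.
```

Because only the studies that happen to observe a large effect clear the significance threshold, averaging the published literature overstates efficacy, which is the pattern the systematic reviews summarized by Sena et al. describe at the scale of the stroke field.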