EPIFORGE 2020 Guidelines

The MIDAS Coordination Center is supporting scientists who have developed a set of guidelines for epidemic forecast reporting, the EPIFORGE checklist. 

These guidelines are being submitted for formal journal publication and consideration by the EQUATOR network thereafter. 

Feedback and questions are welcome! Please see the files below for the paper and supplementary material.
Comments may be sent to questions@midasnetwork.us.

EPIFORGE paper – (297 KB)  

EPIFORGE supplementary material – (83 KB)  

Abstract: The importance of infectious disease epidemic forecasting and prediction research has been underscored by decades of communicable disease outbreaks, including COVID-19. Unlike other fields of medical research, such as clinical trials and systematic reviews, epidemic forecasting and prediction research has no reporting guidelines, despite the utility such guidelines have shown elsewhere. We therefore developed the EPIFORGE checklist, the first known guideline for standardized reporting of epidemic forecasting research. We developed this checklist using a best-practice process for the development of reporting guidelines, involving a Delphi process and broad consultation with an international panel of infectious disease modelers and model end-users. The objective of these guidelines is to improve the consistency, reproducibility, comparability, and quality of epidemic forecast reporting. The guidelines are not designed to advise scientists on how to perform epidemic forecasting and prediction research, but rather to serve as a standard for reporting the critical methodological details of such studies. The guidelines will be submitted to the EQUATOR network and hosted on other dedicated webpages to facilitate feedback and journal endorsement.

Checklist:

 # | Section of manuscript | Checklist item
 1 | Title / Abstract | Study described as forecast or prediction research in at least the title or abstract
 2 | Introduction | Purpose of study and forecasting targets defined
 3 | Methods | Methods fully documented
 4 | Methods | Identify whether the forecast was performed prospectively, in real time, and/or retrospectively
 5 | Methods | Origin of input source data explicitly described, with reference
 6 | Methods | Source data made available, or reasons why this was not possible documented
 7 | Methods | Input data processing procedures described in detail
 8 | Methods | Statement and description of model type, with model assumptions documented with references
 9 | Methods | Model code made available, or reasons why this was not possible documented
10 | Methods | Description of model validation, with justification of approach
11 | Methods | Description of forecast accuracy evaluation method, with justification
12 | Methods | Where possible, compare model results to a benchmark or other comparator model, with justification of comparator choice
13 | Methods | Description of forecast horizon, and justification of its length
14 | Results | Uncertainty of forecasting results presented and explained
15 | Results | Results briefly summarized in lay terms, including a lay interpretation of forecast uncertainty
16 | Results | If results are published as a data object, a time-stamped version number is encouraged
17 | Discussion | Limitations of the forecast described, including limitations specific to data quality and methods
18 | Discussion | If the research is applicable to a specific epidemic, comment on its potential implications and impact for public health action and decision making
19 | Discussion | If the research is applicable to a specific epidemic, comment on how generalizable it may be across populations
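The checklist above can also be treated as structured data, so that a forecasting team can mechanically self-audit which items a report has addressed before submission. The sketch below is a hypothetical illustration, not part of EPIFORGE itself: the item numbers, sections, and shortened labels follow the checklist above, but the `missing_items` helper and the idea of tracking addressed items as a set are assumptions made for this example.

```python
# Hypothetical sketch: the 19 EPIFORGE checklist items as structured data,
# with a helper that reports which items a manuscript has not yet addressed.
# Labels are abbreviated from the full checklist wording above.

EPIFORGE_ITEMS = {
    1:  ("Title / Abstract", "Study described as forecast or prediction research"),
    2:  ("Introduction", "Purpose of study and forecasting targets defined"),
    3:  ("Methods", "Methods fully documented"),
    4:  ("Methods", "Prospective, real-time, and/or retrospective forecast identified"),
    5:  ("Methods", "Origin of input source data described, with reference"),
    6:  ("Methods", "Source data made available, or reasons documented"),
    7:  ("Methods", "Input data processing procedures described in detail"),
    8:  ("Methods", "Model type and assumptions documented with references"),
    9:  ("Methods", "Model code made available, or reasons documented"),
    10: ("Methods", "Model validation described, with justification"),
    11: ("Methods", "Forecast accuracy evaluation method described, with justification"),
    12: ("Methods", "Comparison to benchmark or comparator model, with justification"),
    13: ("Methods", "Forecast horizon described and its length justified"),
    14: ("Results", "Uncertainty of forecasting results presented and explained"),
    15: ("Results", "Results summarized in lay terms, including uncertainty"),
    16: ("Results", "Time-stamped version number for published data objects"),
    17: ("Discussion", "Limitations described, including data quality and methods"),
    18: ("Discussion", "Implications for public health action and decision making"),
    19: ("Discussion", "Generalizability across populations discussed"),
}

def missing_items(addressed):
    """Return the sorted checklist item numbers not yet addressed."""
    return sorted(set(EPIFORGE_ITEMS) - set(addressed))
```

For example, a draft that has only covered items 1 through 3 would get back `missing_items({1, 2, 3})`, i.e. items 4 through 19, pointing the authors at the Methods, Results, and Discussion items still outstanding.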