PLoS One. 2014 Apr 8;9(4):e94130. doi: 10.1371/journal.pone.0094130. eCollection 2014.

Influenza forecasting in human populations: a scoping review.

Author information:

1. Division of Integrated Biosurveillance, Armed Forces Health Surveillance Center, Silver Spring, Maryland, United States of America.
2. Division of Analytic Decision Support, Biomedical Advanced Research and Development Authority, Department of Health and Human Services, Washington, DC, United States of America.
3. Department of Environmental Health Sciences, Mailman School of Public Health, Columbia University, New York, New York, United States of America.
4. Fogarty International Center, National Institutes of Health, Bethesda, Maryland, United States of America.

Abstract

Forecasts of influenza activity in human populations could help guide key preparedness tasks. We conducted a scoping review to characterize these methodological approaches and identify research gaps. Adapting the PRISMA methodology for systematic reviews, we searched PubMed, CINAHL, Project Euclid, and Cochrane Database of Systematic Reviews for publications in English since January 1, 2000 using the terms "influenza AND (forecast* OR predict*)", excluding studies that did not validate forecasts against independent data or incorporate influenza-related surveillance data from the season or pandemic for which the forecasts were applied. We included 35 publications describing population-based (N = 27), medical facility-based (N = 4), and regional or global pandemic spread (N = 4) forecasts. They included areas of North America (N = 15), Europe (N = 14), and/or Asia-Pacific region (N = 4), or had global scope (N = 3). Forecasting models were statistical (N = 18) or epidemiological (N = 17). Five studies used data assimilation methods to update forecasts with new surveillance data. Models used virological (N = 14), syndromic (N = 13), meteorological (N = 6), internet search query (N = 4), and/or other surveillance data as inputs. Forecasting outcomes and validation metrics varied widely. Two studies compared distinct modeling approaches using common data, 2 assessed model calibration, and 1 systematically incorporated expert input. Of the 17 studies using epidemiological models, 8 included sensitivity analysis. This review suggests need for use of good practices in influenza forecasting (e.g., sensitivity analysis); direct comparisons of diverse approaches; assessment of model calibration; integration of subjective expert input; operational research in pilot, real-world applications; and improved mutual understanding among modelers and public health officials.
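The abstract notes that five of the reviewed studies used data assimilation methods to update forecasts as new surveillance data arrived. As a minimal illustration of that idea (not a method from any of the reviewed studies), the sketch below uses hypothetical weekly influenza-like-illness (ILI) counts and a simple AR(1) statistical model that is refit each week, so every new observation is folded into the next one-week-ahead forecast.

```python
def fit_ar1(series):
    """Ordinary least squares fit of x[t] = a + b * x[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var if var else 0.0
    a = my - b * mx
    return a, b

def forecast_next(series):
    """One-week-ahead forecast from the current AR(1) fit."""
    a, b = fit_ar1(series)
    return a + b * series[-1]

# Hypothetical ILI counts; each week the model is refit on all data so far,
# mimicking (in the simplest possible way) updating a forecast with new
# surveillance data.
ili = [120, 150, 190, 240, 310, 400]
history = ili[:4]
for observed in ili[4:]:
    prediction = forecast_next(history)
    history.append(observed)  # assimilate the newly observed week
```

This is far simpler than the epidemiological (e.g., compartmental) models and formal assimilation filters the review discusses; it is only meant to show the forecast-then-update loop against surveillance data.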

PMID: 24714027
PMCID: PMC3979760
DOI: 10.1371/journal.pone.0094130
[Indexed for MEDLINE] Free PMC article.
