We created a synthetic dataset with daily counts of influenza cases and vaccinations, calculated "true" averted cases by applying a reference model to the daily data, aggregated the data by month to simulate the data that would actually be available, and evaluated the month-level data with seven test methods (including the current method). The test methods whose averted-case estimates were closest to the reference model were considered most accurate. To examine performance under varying conditions, we re-evaluated the test methods while varying the synthetic data parameters (timing of vaccination relative to cases, vaccination coverage, infection rate, and vaccine effectiveness) over wide ranges. Finally, we analyzed real data (i.e., collected by surveillance) from 2010 to 2017, comparing the current method used by CDC with the best-performing test methods.
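The evaluation pipeline above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the series shapes, the Poisson placeholders for daily counts, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily series over one season (placeholder distributions;
# the real synthetic dataset is generated by an epidemic model).
days = 365
daily_cases = rng.poisson(lam=80, size=days)   # daily influenza cases
daily_vax = rng.poisson(lam=1500, size=days)   # daily vaccinations

# Aggregate to ~30-day months to mimic the data actually available
# to the month-level test methods.
months = [slice(i, i + 30) for i in range(0, days, 30)]
monthly_cases = np.array([daily_cases[m].sum() for m in months])
monthly_vax = np.array([daily_vax[m].sum() for m in months])

# Each test method produces an averted-case estimate from the monthly
# data; it is scored by relative error against the daily reference model.
def relative_error(estimate, reference):
    return abs(estimate - reference) / reference
```

The key point the sketch captures is that the reference model sees the daily data, while the methods under evaluation see only the monthly aggregates.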
In the synthetic dataset (population of 1 million persons, vaccination uptake 55%, seasonal infection risk without vaccination 12%, vaccine effectiveness 48%), the reference model estimated 28,768 averted cases. The current method underestimated averted cases by 9%, whereas the two best test methods estimated averted cases with <1% error. These two methods also performed well when the synthetic data parameters were varied over wide ranges (≤6.2% error). With the real data, these two methods estimated numbers of averted cases that were a median of 8% higher than those from the currently used method.
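A back-of-the-envelope calculation shows the scale these parameters imply. This static formula is not the paper's dynamic reference model (which accounts for the timing of vaccination relative to cases, so its estimate of 28,768 is somewhat lower); it is only a sanity check on the magnitude.

```python
# Parameters taken directly from the abstract's synthetic dataset.
population = 1_000_000
coverage = 0.55      # vaccination uptake
attack_rate = 0.12   # seasonal infection risk without vaccination
ve = 0.48            # vaccine effectiveness

# Static approximation: expected cases without any vaccination,
# times the fraction of the population protected by vaccination.
cases_no_vax = population * attack_rate        # about 120,000 cases
averted_static = cases_no_vax * coverage * ve  # about 31,680 averted
```

The static figure exceeds the dynamic reference estimate because, in practice, some vaccinations occur too late in the season to avert infection.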
We identified two methods for estimating the number of influenza cases averted by vaccination that are more accurate than the currently used algorithm. These methods will help us better assess the benefits of influenza vaccination.
To evaluate the public health benefit of yearly influenza vaccination, CDC estimates the numbers of influenza cases and hospitalizations averted by vaccine. The available input data on cases and vaccinations are aggregated by month, and the estimation model is intentionally simple, raising concerns about the accuracy of the estimates.