
Quantification of the Tradeoff between Test Sensitivity and Test Frequency in a COVID-19 Epidemic: A Multi-Scale Modeling Approach.

Abstract

Control strategies that employ reverse transcription polymerase chain reaction (RT-PCR) tests for the diagnosis and surveillance of a COVID-19 epidemic are inefficient in fighting the epidemic due to high cost, delays in obtaining results, and the need for specialized personnel and equipment for laboratory processing. Cheaper and faster alternatives, such as antigen and paper-strip tests, have been proposed. They return results rapidly but are less sensitive, with higher viral-load thresholds for detecting virus. To quantify the effects of the tradeoffs between sensitivity, cost, testing frequency, and delay in test return on the overall course of an outbreak, we built a multi-scale immuno-epidemiological model that connects the virus profile of infected individuals with transmission and testing at the population level. We investigated various randomized testing strategies and found that, for fixed testing capacity, lower-sensitivity tests with shorter return delays slightly flatten the daily incidence curve and delay the time to the peak daily incidence. However, compared with RT-PCR testing, they do not always reduce the cumulative case count at half a year into the outbreak. When testing frequency is increased to account for the lower cost of less sensitive tests, we observe a large reduction in cumulative case counts, from 55.4% to as low as 1.22%, half a year into the outbreak. The improvement is preserved even when the testing budget is reduced by one half or one third. Our results predict that surveillance testing that employs low-sensitivity tests at high frequency is an effective tool for epidemic control.
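The frequency-versus-sensitivity tradeoff described in the abstract can be illustrated with a toy compartmental simulation. This is a minimal sketch, not the authors' multi-scale immuno-epidemiological model: here testing simply removes infectious individuals at a rate proportional to the detectable fraction and inversely proportional to the testing interval plus the result-return delay, and all parameter values (transmission rate, recovery rate, detectable fractions, delays) are illustrative assumptions.

```python
def simulate(test_interval_days, detectable_fraction, return_delay_days,
             days=180, dt=0.1, beta=0.3, gamma=0.1, n=1_000_000, i0=100):
    """Toy SIR-with-testing model (illustrative parameters only).

    Infectious individuals are isolated at rate
    detectable_fraction / (test_interval_days + return_delay_days),
    a crude proxy for randomized surveillance testing with delayed results.
    Returns the cumulative number of infections after `days`.
    """
    s, i = (n - i0) / n, i0 / n          # susceptible, infectious fractions
    cum = i0 / n                          # cumulative infected fraction
    test_rate = detectable_fraction / (test_interval_days + return_delay_days)
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt       # new infections this step
        s -= new_inf
        i += new_inf - (gamma + test_rate) * i * dt  # recovery + isolation
        cum += new_inf
    return cum * n

# Hypothetical high-sensitivity test: weekly testing, 2-day return delay,
# detects 90% of the infectious window.
slow_sensitive = simulate(7, 0.90, 2)

# Hypothetical low-sensitivity rapid test: every 2 days, same-day results,
# detects only 70% of the infectious window.
fast_crude = simulate(2, 0.70, 0)
```

Under these assumed parameters the frequent low-sensitivity strategy yields a much higher effective isolation rate (0.70/2 vs. 0.90/9 per day), pushing the effective reproduction number below one, which is the qualitative mechanism behind the abstract's result.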
