The importance of playing the long game when it comes to pandemic surveillance
The COVID-19 pandemic highlighted the importance of gathering information on pathogen characteristics as early as possible when an outbreak strikes. But it also underscored how quickly much of that information becomes outdated as the virus and host populations change, weakening epidemiological analyses based on the early information and, with them, the evidence base for policy decisions. To prepare for future pandemics, more funds and scientific effort must be directed at ways to track these ongoing changes and regularly update estimates of pathogen characteristics.

Moving Target
On multiple occasions, the pandemic demonstrated just how crucial it is to track and respond to new developments (1, 2). For example, within the first two years, the virus became more transmissible, while infections became more and then less likely to result in severe outcomes. At the same time, the population developed immunity from exposure and vaccination, reducing the average severity of infections. If such changes are not accounted for, they undermine projections of likely or possible scenarios for cases, hospitalizations, and deaths.
As the Omicron variant began its global spread in late 2021, surveillance and analysis from South Africa suggested that populations around the world should expect enormous numbers of cases, owing to increased transmissibility and immune escape, alongside evidence of reduced severity per case. Without that information, we would have been taken completely by surprise by this new and very different variant. Even with it, modelers struggled to estimate how these opposing changes would play out in terms of the burden of severe cases, and renewed efforts to generate data on Omicron’s characteristics around the world helped to reduce that uncertainty.
During COVID-19 and other previous pandemics, the characterization of pathogens too often relied on one-off studies to try to hit a moving target. To improve, we should reframe this activity as ongoing surveillance. In addition to gathering the data to estimate key characteristics in the early stages of an outbreak, that means continuing to measure more of these characteristics as the pandemic unfolds. This will require more resources and a change in mindset.
For example, data collection and analysis protocols should be designed to facilitate ongoing surveillance. Protocols that allow pathogen characteristics to be readily compared through time, as well as across regions and pathogen variants, will help researchers and public health officials detect changes sooner and maximize the value of surveillance for public health.
Pathogen Characteristics
But what exactly should be measured and when? Numerous characteristics of a pathogen and its spread can change substantially through the course of a pandemic. Ongoing surveillance of these can be done in three ways: continuous, scheduled, and triggered.
Continuous surveillance collects data in an ongoing way, such as daily recorded hospital admissions of diagnosed cases. Scheduled surveillance collects data according to a predetermined schedule, for example, a series of seroprevalence surveys conducted at set intervals irrespective of the level of epidemic activity at each sampling time point. Triggered surveillance starts, or restarts, data collection in response to defined events: for example, an initial data-collection period is triggered by the emergence of a pandemic pathogen and a second by the identification of a variant with higher transmissibility.
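To make the distinction concrete, here is a minimal sketch in Python of how the three modes differ as rules for when a surveillance stream collects data. All names and values are hypothetical; this illustrates the taxonomy, not a reference implementation.

```python
# Minimal sketch (hypothetical names) of the three surveillance modes:
# continuous, scheduled, and triggered data collection.
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    CONTINUOUS = auto()  # always-on, e.g., daily hospital admissions
    SCHEDULED = auto()   # fixed calendar, e.g., periodic serosurveys
    TRIGGERED = auto()   # started by an event, stood down when done


@dataclass
class SurveillanceStream:
    name: str
    mode: Mode
    interval_days: int = 7        # used only for SCHEDULED
    active_trigger: bool = False  # used only for TRIGGERED

    def collect_today(self, day: int) -> bool:
        """Decide whether this stream collects data on a given day."""
        if self.mode is Mode.CONTINUOUS:
            return True
        if self.mode is Mode.SCHEDULED:
            return day % self.interval_days == 0
        return self.active_trigger  # TRIGGERED: only while a trigger is live


# Example usage: a triggered generation-interval study switched on by the
# detection of a new variant, then stood down after characterization.
gi_study = SurveillanceStream("generation-interval study", Mode.TRIGGERED)
gi_study.active_trigger = True    # e.g., variant with a transmission advantage
print(gi_study.collect_today(day=120))  # True while the trigger is live
gi_study.active_trigger = False   # characterization complete; stand down
print(gi_study.collect_today(day=150))  # False
```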
The preferred schedule will depend on the nature of the characteristic, its public health significance, and how quickly—and by how much—it changes over time. Also relevant is whether repeated bouts of measurement or an “always-on” system is more efficient. Table 1 offers some examples, and we discuss two—clinical severity and the generation interval—in more detail below.
Table 1.
| Pathogen or epidemiological characteristic | Why is this important? | Why it may change through time (PG: pathogen genomic changes; HI: host immunity from vaccination and/or infection) | How it should be measured (C: continuous; S: scheduled; T: triggered) |
|---|---|---|---|
| Basic reproduction number (R0) | Strongly influences how quickly the epidemic grows, when it peaks, its overall magnitude, how long it lasts, and the effectiveness of interventions | PG, changing behavior | T sufficient |
| Clinical severity and symptom profile | Determines impact on population health and healthcare systems | PG, HI | C, S, or T |
| Symptomatic proportion | Determines illness burden and workforce absenteeism, and determinant of “visibility” of an infection to infected individuals, healthcare professionals, and surveillance systems, which impacts disease controllability | PG, HI | C, S, or T |
| Natural history of infection, symptoms, and infectiousness (e.g., generation interval, incubation period, variability in transmissibility per host [i.e., variance of the individual-level basic reproduction number]) | Guides case and contact management strategies (e.g., the isolation period of diagnosed cases, duration of quarantine for contacts, time window available for contact tracing, effectiveness of symptom-prompted strategies) | PG, HI | T sufficient |
| Test sensitivity and specificity (over time since infection) | Determinant of “visibility” of an infection to infected individuals, healthcare professionals, and surveillance systems, which impacts disease controllability | PG, potentially HI | T sufficient |
| Risk factors for infection, transmission, or severity | Required to develop the most targeted and effective public health interventions | Changing behavior, PG, HI | C, S, or T |
We outline reasons why measuring these characteristics is important, examples of why they may change, and the timing of data collection for monitoring. Under the timing of data collection, we indicate whether triggered (T) studies are sufficient for monitoring or whether continuous (C) or scheduled (S) studies should be considered.
The severity of a new infection informs how to respond (3, 4). It is best quantified as the proportion of infected individuals who die or are hospitalized. COVID-19 and prior pandemics have shown that estimating this proportion is more complicated than it might appear (5). Mild or asymptomatic infections are readily missed, giving a falsely low denominator and a falsely high severity estimate. And there is usually a time lag between confirmation of a case and the occurrence and reporting of severe outcomes, which, in a growing epidemic, leads to a falsely low severity estimate if not appropriately addressed.
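To make the second of these biases concrete, the sketch below adjusts the denominator of a naive severity estimate by the probability that a fatal outcome, had it occurred, would already have been observed. The case counts and the confirmation-to-death delay distribution are illustrative assumptions, not the specific estimators of the studies cited in this section.

```python
# Sketch: delay-adjusted severity estimate during epidemic growth.
# Assumes a known confirmation-to-death delay distribution; toy numbers only.
import numpy as np
from scipy.stats import gamma

# Daily confirmed cases and cumulative deaths up to "today" (toy numbers).
daily_cases = np.array([5, 8, 13, 21, 34, 55, 89, 144, 233, 377])
deaths_to_date = 12

# Assumed confirmation-to-death delay: gamma distribution with mean ~14 days.
delay = gamma(a=4.0, scale=3.5)

days_since_confirmation = np.arange(len(daily_cases))[::-1]  # oldest cases first
# For each day's cases, probability a fatal outcome would already be observed.
prob_outcome_observed = delay.cdf(days_since_confirmation)

naive_cfr = deaths_to_date / daily_cases.sum()
adjusted_denominator = np.sum(daily_cases * prob_outcome_observed)
adjusted_cfr = deaths_to_date / adjusted_denominator

print(f"naive estimate:  {naive_cfr:.3f}")
print(f"delay-adjusted:  {adjusted_cfr:.3f}")  # higher, correcting the downward bias
```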
There are statistical methods to address these issues (5), particularly during the growth phase of an epidemic, and they were applied during the COVID-19 pandemic (6). But severity can be estimated more accurately still by continuously collecting more and better data on the total number of infections. Government agencies, academics, and funding bodies in the United Kingdom worked together to show how this can be done, implementing national rolling studies of both infection prevalence and seroprevalence during the COVID-19 pandemic (7, 8).
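Prevalence and seroprevalence surveys need their own correction for imperfect tests. A standard adjustment of this kind, the Rogan–Gleser estimator, is sketched below with assumed sensitivity and specificity values; it is illustrative and not the estimation approach used by the UK studies cited above.

```python
# Sketch: Rogan-Gleser correction of apparent prevalence for an imperfect test.
# Sensitivity and specificity values here are assumptions for illustration.
def true_prevalence(apparent: float, sensitivity: float, specificity: float) -> float:
    """Convert the raw positive fraction into an estimate of true prevalence."""
    est = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)  # clip to the [0, 1] range

positives, sample_size = 212, 10_000
apparent = positives / sample_size  # 2.12% of the random sample test positive
print(true_prevalence(apparent, sensitivity=0.85, specificity=0.995))
```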
Fluctuating severity of infection during COVID-19 also highlighted why continuous monitoring is needed. A series of studies, for example, assessed the risk of hospitalization and, in some cases, the length of stay for successive waves of SARS-CoV-2 variants, starting with Delta, followed by Omicron BA.1, then Omicron BA.2 (9), and continuing with subsequent variants (10).
Experience with these studies, and others (e.g., ref. 11), shows the critical importance of dedicated variant genetic characterization efforts. Serendipitously, it was possible to characterize variants during COVID-19 by the pattern of “S gene target failures” in PCR-based tests. But we are unlikely to be so lucky next time. Public health officials need to plan for sequencing of infections linked to clinical outcomes.
Researchers should also plan studies that distinguish changing host characteristics (such as history of infection or vaccination) from changing viral characteristics (such as immune escape) (12). We must also acknowledge that per-infection severity is harder to measure and less generalizable across populations in a setting of widespread prior infection and vaccination.
Triggered Approach
The generation interval (GI) is the time between an individual becoming infected and infecting another person. This interval differs for each transmission pair, and the distribution of GIs is required to accurately estimate the basic and effective reproduction numbers of a pathogen (13) to inform public health responses. The GI will change over the course of a pandemic, as host behavior and pathogen characteristics change with new variants. But existing surveillance methods do not collect the data best suited to track the GI distribution: the dates of infection of the infector and infectee in each transmission pair. These dates are not usually recorded for the purpose of estimating pathogen characteristics, although they are sometimes available through contact tracing.
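The dependence of reproduction-number estimates on the GI distribution can be made explicit using the relationship in ref. 13, in which R is the reciprocal of the moment-generating function of the GI distribution evaluated at the negative of the epidemic growth rate. The sketch below uses illustrative parameter values to show how, for the same observed growth rate, a shorter GI implies a smaller R.

```python
# Sketch: reproduction number implied by an observed exponential growth rate r
# and a generation-interval (GI) distribution, via R = 1 / M_GI(-r)
# (Wallinga & Lipsitch 2007). Parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def reproduction_number(growth_rate: float, gi_mean: float, gi_sd: float) -> float:
    """R from a daily growth rate and a gamma-distributed generation interval."""
    shape = (gi_mean / gi_sd) ** 2
    scale = gi_sd ** 2 / gi_mean
    gi = gamma(a=shape, scale=scale)
    days = np.arange(0.5, 30.0, 1.0)      # discretize the GI distribution
    weights = gi.pdf(days)
    weights /= weights.sum()
    mgf_at_minus_r = np.sum(weights * np.exp(-growth_rate * days))
    return 1.0 / mgf_at_minus_r

r = 0.10  # epidemic growth rate per day (assumed)
print(reproduction_number(r, gi_mean=6.5, gi_sd=3.5))  # longer GI -> larger R
print(reproduction_number(r, gi_mean=3.5, gi_sd=2.0))  # shorter GI -> smaller R
```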
We suggest that such data should be purposely collected during future outbreaks on a triggered basis (independent of contact tracing measures). The triggers would be the initial discovery of a significant emerging pathogen and, subsequently, the identification of new variants with significant transmission advantages or the observation of a change in the natural history of symptoms or viral load. Data collection would then be stood down in each phase once sufficient characterization of the pathogen or variant had been achieved.
It’s true that early estimates of the GI were updated when new COVID-19 variants emerged. [The GIs of the Delta and Omicron variants of SARS-CoV-2 were estimated to be shorter than those of ancestral strains (14).] However, these estimates were not made from data collected for this purpose or using a standardized data-collection protocol with guidance on considerations such as sample size. Instead, epidemiologists mostly relied on estimates of a different quantity, the serial interval (SI) (15): the time between symptom onsets of an infector–infectee pair. They could work out the SI from contact tracing data because dates of symptom onset are more easily identified than dates of infection.
But while the GI and the SI distributions must have the same mean, the SI has limitations as a proxy for the GI (16), particularly because the SI for a particular infector–infectee pair can be negative, while the GI is always greater than zero. Measuring SI (or GI) distributions from contact tracing data is also not straightforward since the observed distributions will vary temporally as the epidemic progresses (17).
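A small simulation makes the SI–GI relationship tangible: each serial interval equals the generation interval plus the infectee's incubation period minus the infector's, so the two distributions share a mean, but individual serial intervals can be negative and are more dispersed. The distributions below are illustrative assumptions (and, for simplicity, incubation periods are drawn independently of the GI).

```python
# Sketch: simulate why the serial interval (SI) is an imperfect proxy for the
# generation interval (GI). Distribution choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

gi = rng.gamma(shape=3.0, scale=2.0, size=n)               # generation intervals (mean 6 d)
inc_infector = rng.lognormal(mean=1.5, sigma=0.5, size=n)  # incubation periods
inc_infectee = rng.lognormal(mean=1.5, sigma=0.5, size=n)

# SI = time between symptom onsets = GI + infectee incubation - infector incubation
si = gi + inc_infectee - inc_infector

print(f"mean GI: {gi.mean():.2f} d, mean SI: {si.mean():.2f} d")  # nearly identical
print(f"SI std:  {si.std():.2f} d vs GI std: {gi.std():.2f} d")   # SI is more dispersed
print(f"negative SIs: {100 * (si < 0).mean():.1f}%")              # GIs are never negative
```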
Research or Surveillance?
For many endemic diseases, the quantities listed in Table 1 would naturally be the targets of single research studies, performed in one population at one time and intended to inform future analyses. Our core point is that it’s essential in a pandemic to set up systems for ongoing measurement.
This raises a difficulty of definitions, and with it the question of who is responsible. We are asking for more comprehensive surveillance, which is often defined as repeated or continuous measurement. But the measurement of many pathogen characteristics (severity and the GI among them) is typically classed as research, with data produced once and then reused.
In the United States and other places, disease research and disease surveillance are carried out and supported by different communities and funders. We argue that pathogen characterization should be reframed as ongoing disease surveillance and funded accordingly.
The distinction between surveillance and research is contested and defined differently in different places. From an operational perspective, the key point is that data on these quantities will be more informative if the systems that gather them are designed from the start to be used continuously or repeatedly. Such systems will be more expensive, but they are needed only during relatively infrequent events and, in many cases, only when triggered. The essential step is to design, fund, and approve studies at the outset so that they can be repeated or continued, rather than as one-time efforts that have to be restarted from scratch.
The investment would pay off. Well-designed surveillance studies can simultaneously serve multiple purposes. For example, infection prevalence studies in the United Kingdom provided near-real-time insight into the dynamics of SARS-CoV-2 infection by age group (8) and variant (18), as well as data for timely assessments of symptomatology (19), clinical severity (11), risk factors (20), and intervention effectiveness, including components of vaccine effectiveness against transmission (21). Data on infector–infectee transmission pairs can provide information on multiple biological intervals, including the GI, SI, and incubation period (15).
Plans and Practice
As with any data-collection activity, surveillance system design should be guided by data analysis plans. While pandemic plans developed prior to COVID-19 articulated the need and intent to rapidly estimate pathogen characteristics (22, 23), significant barriers were faced in practice. In some cases, data-collection protocols had never been tested in the field or were not designed in tandem with an analysis plan.
One way to overcome these challenges is to set up and run data-collection systems and analyses in miniature (for continuous or periodic systems) or in an exercise (for triggered protocols) during interpandemic periods. Targeting established but poorly characterized pathogens in such exercises could deliver a twofold public health benefit: valuable insight into current infectious disease threats [e.g., seasonal respiratory viruses (24), mpox, or Bordetella pertussis] alongside improved preparedness for future pandemics.
Assessments and approvals covering data governance, ethics, cost-effectiveness, and logistics are important and can take time to complete; they should therefore be put in place now. Because each outbreak also brings challenges that cannot be fully anticipated, improved frameworks are needed to expedite the approval and implementation of surveillance strategies for additional important variables during a pandemic.
We have focused here on measuring the characteristics of a pathogen. But host factors could also be targets for continuous surveillance, such as population behavior, attitudes, and perceptions, given their strong influence on the effectiveness of interventions to reduce transmission. Academic institutions and government agencies introduced systems to measure such quantities at high temporal frequency during the COVID-19 pandemic: weekly social contact surveys, for example, quantified transmission potential (25) and intervention effectiveness (26).
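One common way such contact-survey data feed into analysis, sketched below with toy numbers, is via the social-contact hypothesis: the change in transmission potential between two periods is approximated by the ratio of the dominant eigenvalues of the age-structured contact matrices measured in each period. This is a simplified illustration in the spirit of, but not identical to, the cited analyses.

```python
# Sketch: relative transmission potential from repeated contact-survey matrices,
# via the ratio of dominant eigenvalues (social-contact hypothesis).
# The matrices below are toy numbers, not survey results.
import numpy as np

def dominant_eigenvalue(contact_matrix: np.ndarray) -> float:
    """Spectral radius of an age-structured contact matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(contact_matrix))))

# Mean daily contacts between three age groups (children, adults, elderly).
baseline = np.array([[9.0, 3.0, 1.0],
                     [3.0, 6.0, 2.0],
                     [1.0, 2.0, 3.0]])

# The same survey repeated during a stay-at-home period.
lockdown = np.array([[3.0, 1.5, 0.5],
                     [1.5, 2.5, 1.0],
                     [0.5, 1.0, 2.0]])

reduction = dominant_eigenvalue(lockdown) / dominant_eigenvalue(baseline)
print(f"transmission potential relative to baseline: {reduction:.2f}")
```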
While the need to measure pathogen characteristics over time is clear, estimates generated in one geographic location can, in some instances, inform responses in other locations. During COVID-19, jurisdictions such as the United Kingdom and Israel served as the source of many of the world’s estimates of vaccine effectiveness as a function of variant and time. Quantities that depend more strongly on social structures and processes, including population behavior, may need to be estimated in each jurisdiction. These include the generation interval, the frequency of drug resistance (determined in part by treatment levels), and risk factors for transmission and severity.
The COVID-19 pandemic has reinforced the need for continuous or repeated monitoring of key pathogen characteristics, rather than one-time surveillance activities conducted in the early phase of emergence. The next generation of pandemic plans and surveillance systems should address this need. Ongoing knowledge of pathogen characteristics will enable decision-makers to develop better targeted and more proportionate public health responses through the course of a pandemic.
Acknowledgments
F.M.S. acknowledges James McCaw (University of Melbourne) for many helpful discussions on pandemic surveillance and its role in planning and response.
Author contributions
F.M.S. and M.L. designed research; performed research; and wrote the paper.
Competing interests
The authors declare no competing interest.
References
1. D. B. Jernigan, D. George, M. Lipsitch, Learning from COVID-19 to improve surveillance for emerging threats. Am. J. Public Health 113, 520–522 (2023).
2. F. M. Shearer et al., Opportunities to strengthen respiratory virus surveillance systems in Australia: Lessons learned from the COVID-19 response. Commun. Dis. Intell., https://doi.org/10.33321/cdi.2024.48.47 (2024).
3. M. Lipsitch et al., Improving the evidence base for decision making during a pandemic: The example of 2009 influenza A/H1N1. Biosecur. Bioterror. 9, 89–115 (2011).
4. T. Garske et al., Assessing the severity of the novel influenza A/H1N1 pandemic. BMJ 339, b2840 (2009).
5. M. Lipsitch et al., Potential biases in estimating absolute and relative case-fatality risks during outbreaks. PLoS Negl. Trop. Dis. 9, e0003846 (2015).
6. X. Deng et al., Case fatality risk of the first pandemic wave of coronavirus disease 2019 (COVID-19) in China. Clin. Infect. Dis. 73, e79–e85 (2021).
7. O. Eales et al., Trends in SARS-CoV-2 infection prevalence during England’s roadmap out of lockdown, January to July 2021. PLoS Comput. Biol. 18, e1010724 (2022).
8. K. B. Pouwels et al., Community prevalence of SARS-CoV-2 in England from April to November, 2020: Results from the ONS Coronavirus Infection Survey. Lancet Public Health 6, e30–e38 (2021).
9. J. A. Lewnard et al., Clinical outcomes associated with SARS-CoV-2 Omicron (B.1.1.529) variant and BA.1/BA.1.1 or BA.2 subvariant infection in Southern California. Nat. Med. 28, 1933–1943 (2022).
10. J. A. Lewnard et al., Immune escape and attenuated severity associated with the SARS-CoV-2 BA.2.86/JN.1 lineage. Nat. Commun. 15, 8550 (2024).
11. O. Eales et al., Dynamics of SARS-CoV-2 infection hospitalisation and infection fatality ratios over 23 months in England. PLoS Biol. 21, e3002118 (2023).
12. R. P. Bhattacharyya, W. P. Hanage, Challenges in inferring intrinsic severity of the SARS-CoV-2 Omicron variant. N. Engl. J. Med. 386, e14 (2022).
13. J. Wallinga, M. Lipsitch, How generation intervals shape the relationship between growth rates and reproductive numbers. Proc. R. Soc. B Biol. Sci. 274, 599–604 (2007).
14. Z. J. Madewell et al., Rapid review and meta-analysis of serial intervals for SARS-CoV-2 Delta and Omicron variants. BMC Infect. Dis. 23, 429 (2023).
15. D. Chen et al., Inferring time-varying generation time, serial interval, and incubation period distributions for COVID-19. Nat. Commun. 13, 7727 (2022).
16. S. Lehtinen, P. Ashcroft, S. Bonhoeffer, On the relationship between serial interval, infectiousness profile and generation time. J. R. Soc. Interface 18, 20200756 (2021).
17. D. Champredon, J. Dushoff, Intrinsic and realized generation intervals in infectious-disease transmission. Proc. R. Soc. B Biol. Sci. 282, 20152026 (2015).
18. O. Eales et al., Dynamics of competing SARS-CoV-2 variants during the Omicron epidemic in England. Nat. Commun. 13, 4375 (2022).
19. J. Elliott et al., Predictive symptoms for COVID-19 in the community: REACT-1 study of over 1 million people. PLoS Med. 18, e1003777 (2021).
20. E. Pritchard et al., Monitoring populations at increased risk for SARS-CoV-2 infection in the community using population-level demographic and behavioural surveillance. Lancet Reg. Health Eur. 13, 100282 (2022).
21. K. B. Pouwels et al., Effect of Delta variant on viral burden and vaccine effectiveness against new SARS-CoV-2 infections in the UK. Nat. Med. 27, 2127–2135 (2021).
22. Australian Government Department of Health and Aged Care, Australian Health Management Plan for Pandemic Influenza (2014). https://www1.health.gov.au/internet/main/publishing.nsf/Content/ohp-ahmppi.htm. Accessed 7 July 2024.
23. US Department of Health and Human Services, Pandemic Influenza Plan 2017 Update (2017). https://www.cdc.gov/pandemic-flu/media/pan-flu-report-2017v2.pdf. Accessed 3 April 2025.
24. O. Eales et al., Key challenges for respiratory virus surveillance while transitioning out of acute phase of COVID-19 pandemic. Emerg. Infect. Dis. 30, e230768 (2024).
25. N. Golding et al., A modelling approach to estimate the transmissibility of SARS-CoV-2 during periods of high, low, and zero case incidence. Elife 12, e78089 (2023).
26. C. I. Jarvis et al., Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. BMC Med. 18, 124 (2020).
Notes
Any opinions, findings, conclusions, or recommendations expressed in this work are those of the authors and have not been endorsed by the National Academy of Sciences.