This study was funded by the China Scholarship Council (CSC), Grant #20180625 0023. We also thank CINECA for providing computational resources via the ISCRA C project HiReS (HP10C3YNLC).
COVID-19, caused by the emerging virus SARS-CoV-2, rapidly expanded across the globe, overwhelmed healthcare systems, and has led to just under 4.0 million deaths with the pandemic still underway as of July 2021 [1]. It is just the latest and most widespread in a series of (re)emerging and expanding infectious disease outbreaks, including SARS-CoV in 2003, H1N1 influenza virus in 2009, Ebola virus in 2014 and Zika virus in 2016. Before effective vaccines and specific drug therapies become available at the start of emerging epidemics, non-pharmaceutical interventions such as social distancing, mask-wearing, diagnostic and serological testing, contact tracing, and quarantine are the best available tools to slow epidemics and to mitigate their health toll. Early in the COVID-19 epidemic, when epidemiological information was limited, governments and other decision-makers used models (e.g. [2-5]) to predict the spread of COVID-19 under various non-pharmaceutical interventions and to show the benefits of social distancing for reducing and delaying the epidemic peak (i.e. flattening the curve) in an effort to prevent medical systems from becoming overwhelmed and to buy time for more effective treatments, testing capacity and potential vaccines to become available.
Early models can and should inform policy decisions in an epidemic as they are our primary tool for synthesizing early knowledge of transmission in order to define a plausible range of epidemic outcomes [6]. Such models also quantify trade-offs among proposed intervention scenarios to identify which responses will most efficiently slow exponential growth. During the 2014 Ebola epidemic, for example, early models promoted the use of contact tracing and sanitary burials to reduce transmission [7]. At the beginning of the COVID-19 pandemic, models identified the critical importance of social distancing to slow viral spread [2, 8, 9]. Early models also serve to illuminate difficult-to-observe processes and to test the implications of new information; for example, in February 2020, Hellewell et al. [3] showed that because of SARS-CoV-2 presymptomatic and asymptomatic transmission, contact tracing would be insufficient to curb the spread of the disease and strong social distancing would also be necessary.
Early models must be built rapidly and calibrated to data that are incomplete and of unknown quality, making it difficult to appropriately quantify uncertainty and to assess model accuracy in order to compare policy decisions. With ample time, alternative transmission scenarios can be thoughtfully compared and uncertainty well characterized [10]; however, the need for rapid decisions in the face of exponential epidemic growth makes such efforts infeasible for most early models. Though such real-time model assessment is rare, post-epidemic retrospective analyses of SARS [11-13], H1N1 influenza [14-21], Ebola [22-28] and Zika [29-33] have illustrated that much can be learned from emerging epidemics about the fundamental principles of disease transmission and epidemic modelling (e.g. the limits and utility of model complexity [15, 23, 32], the effects of population and geographic heterogeneity on disease dynamics [20, 21, 27], and the importance of stochastic models to capture uncertainty [13, 25]). However, few of these analyses focus on models developed early in the epidemics. The accuracy and appropriate use of early models is rarely assessed, first because of a lack of emphasis and resources while the epidemic is underway, and later because early models tend to give way to more sophisticated and better-fitting models as more information and data are acquired (e.g. moving from using a single transmission rate during the early COVID-19 lockdowns [34] to continuous-time human movement data [35]). Thus, we know less about the value and limitations of models that are built and applied early in an epidemic when epidemiological information is severely limited. Yet, this early model assessment remains vitally important because such models often inform policy and public opinion in real time, even if they are later revised.
To understand and anticipate problems for future emerging infectious diseases, and to produce models that will be taken up by policymakers, it is critical to reflect upon the value and limitations of early models and to assess their accuracy over time.
The need for rapid model development with incomplete and uncertain data forces modellers to make a series of decisions and assumptions, many of which must be made with relatively little empirical evidence (e.g. about the proportion of infections that are asymptomatic and the transmission potential of asymptomatic infections). All early models of COVID-19 dynamics, for example, were constrained by limited: (1) observations of unmitigated epidemic dynamics from which to inform key epidemiological parameters like R 0 ; (2) information about the impact of preliminary interventions; and (3) availability of testing, which made case data an unreliable indicator of epidemic magnitude and dynamics. Further, because of regional differences in socio-economic conditions and demography paired with difficult-to-observe case importations, disentangling local epidemic dynamics from policy interventions (and estimating the effectiveness of those interventions) proved difficult [36]. Data limitations also generate trade-offs between realistic complexity, parameter identifiability, computational feasibility and accuracy (see [37]), which require models to be designed around a targeted purpose rather than comprehensively describing all aspects of disease dynamics. Early in the COVID-19 pandemic, the need for rapid model deployment resulted in some researchers adopting a minimally complex statistical approach with the aim of producing near-term (e.g. one- to five-month) epidemic forecasts (e.g. [38, 39]). However, because statistical models do not capture the underlying mechanistic transmission process, early COVID-19 statistical models were poorly suited to predict the effects of non-pharmaceutical interventions. On the opposite end of the spectrum, mechanistic, agent-based models sought to more precisely estimate the potential effects of different social distancing policies by incorporating population structures and individual movement (e.g. [2]).
Such models were, however, computationally intensive, contained a large number of parameters, and did not always have publicly available code from the outset, making them infeasible to rapidly fit and simulate in many locations and relatively inaccessible to outside research groups and decision-makers. Similar types of uncertainty and the need for rapid model development also led to the mixed success of early models of SARS [13], H1N1 pandemic influenza [14-17], Ebola [22-28] and Zika [32].
In addition to limited information, unreliable data and model trade-offs, a hurdle in the use of models early in the COVID-19 epidemic was the rapidity and heterogeneity of policy changes (especially in locations where interventions varied at a local level, for example in the USA). These changes, along with temporal variation in human behaviour (which often changed in advance of government interventions [40] ) and extrinsic factors (e.g. seasonality), quickly rendered many models obsolete. Though many models were being continually updated (using, for example, more reliable case data due to expanded testing and cell phone-based mobility data), delays between changes in epidemic drivers and model improvements, their uptake, and public health decisions led to many models being used after the epidemiological environment was no longer reflected in the model's underlying assumptions. Furthermore, in the USA, for example, the fragmented COVID-19 response forced many state and local governments to rely upon a few highly publicized early models, which led to an outsized influence of some early non-peer-reviewed models (e.g. [38] ).
Here, we quantitatively evaluate the successes and failures of an early epidemic model by retrospectively analysing an epidemiological compartment model that we developed in March 2020 for COVID-19 dynamics in Santa Clara County, California during the first wave of the US pandemic. While later development improved upon this model (see figure 1; [42]), here we use the early model as a snapshot in time, analysing it as an artefact rather than improving upon it [43] (royalsocietypublishing.org/journal/rspb, Proc. R. Soc. B 288: 20210811). That is, we intentionally seek to evaluate our model in light of the limited information and rapid decisions used to build it as well as the sparse data used to fit it. We critically gauge the strengths and limitations of this model by evaluating decisions that benefited and hindered model accuracy and potential model mis-specifications, and by quantifying the accuracy of predictions over time to understand why inaccuracy increased.
We began development on the model on 13 March 2020, four days after the first reported death in Santa Clara County and four days before the introduction of the San Francisco Bay Area shelter-in-place orders that applied to this county, which were initially proposed to last three weeks (figure 1). Our first aim was to deploy (within approximately two weeks) a public-facing, user-friendly graphical model (http://covid-measures.stanford.edu/) that would allow users to adjust intervention parameters to help the public and local decision-makers understand the potential for resurgence if restrictions were lifted or relaxed too early and to evaluate viable exit strategies. Like other early COVID-19 epidemic models that were built for a specific purpose, we designed the model with the following considerations in mind. First, because we sought to compare the effects of various non-pharmaceutical interventions, we chose to build a mechanistic epidemiological compartment (susceptible-exposed-infectious-recovered: SEIR) model. Second, given early work showing the potential for asymptomatic transmission [44] and our interest in symptomatic isolation as a potential intervention strategy, we broke up the infected classes into multiple compartments (asymptomatic, pre-symptomatic, mildly symptomatic and severely symptomatic). Because we were also interested in interventions triggered by hospital capacity thresholds, we included model compartments (state variables) for hospitalizations. Third, as we were interested in the implications of both initial, short-term interventions and longer-term exit strategies, we used a time-varying transmission parameter, β t , to encapsulate the impact of non-pharmaceutical interventions on epidemic dynamics and control.
Fourth, as case data were significantly biased at the time due to limits in testing capacity and access, we fit the model to local epidemic dynamics using daily reported COVID-19 deaths in Santa Clara County (which we assumed were more reliably reported than cases). We completed our first analyses on 13 April and posted a preprint on 30 April 2020 (figure 1). In the light of the narrow time window and these considerations, like many other early models we made a series of decisions (electronic supplementary material, table S1), some of which we deemed to be sub-optimal and improved in our later model, which we began developing on 1 May 2020 [42] .
After developing the public-facing website, we originally used this model to estimate key epidemiological metrics and to evaluate the effectiveness of long-term intervention strategies, specifically focusing on Santa Clara County, California as a case study. We estimated the transmission rate under pre-intervention and shelter-in-place conditions, calculated reproduction numbers before and during interventions, explored the impact of long-term intervention strategies, and investigated counterfactuals to understand the impact of early intervention decisions. We now seek to answer the following retrospective questions: what did the model suggest about epidemic metrics, dynamics, and the impact of non-pharmaceutical interventions, and how did these estimates change over time? How accurately did the model predict epidemic dynamics going forward? For how long was the model accurate enough to be useful, and what limited its longer-term accuracy?
We developed a stochastic compartmental model using an SEIR framework. We divided the population into states with respect to SARS-CoV-2 infection: susceptible (S); exposed but not yet infectious (E); infectious and pre-symptomatic (I P ), asymptomatic (I A ), mildly symptomatic (I M ), or severely symptomatic (I S ); hospitalized cases that will recover (H R ) or die (H D ); recovered and immune (R); and dead (D). We assumed an underlying, unobserved process model of SARS-CoV-2 transmission depicted in figure 2. We used a Euler approximation of the continuous-time process with a time step of 4 h. Transitions between states were simulated as binomial or multinomial processes, which treat periods within each state as being geometrically distributed. Given that each period is geometrically distributed with a different rate, transition times through multiple states follow no named distribution but are unimodal (e.g. disease onset-to-death: electronic supplementary material, figure S1). It is also possible to divide each state (e.g. infectious classes) into multiple sub-stages to produce Erlang-distributed periods within stages [45, 46] (a change that we implemented in later iterations of our model [42]); however, here we relied on single compartments for each state for simplicity (electronic supplementary material, table S1). The equations (electronic supplementary material, Eq. S1-Eq. S8) describe in detail the stochastic transitions between states. We assumed that the observed deaths are a Poisson random variable with mean equal to the total new deaths accumulated over the observation period (1 day for this analysis). The transmission parameter, β t , describes the average per capita contact rate, at time t, between susceptible and infectious people multiplied by the per-contact transmission probability.
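The transition scheme above can be sketched in a few lines. The following is an illustrative Python sketch (not the code used in this study, which was written in R with pomp) of a single 4 h Euler step with binomial transitions; it omits the hospitalized and dead compartments and routes all recoveries directly to R, and all parameter names and rates are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_step(state, beta_t, p, dt=4 / 24):
    """One 4 h Euler step of a simplified stochastic SEIR-type model.
    Transitions out of each compartment are binomial draws (split with a
    second binomial where a compartment has two destinations, which is
    equivalent to a multinomial draw), so dwell times are geometric."""
    S, E, Ip, Ia, Im, Is, R = state
    N = state.sum()
    # Force of infection uses the weighted sum of infectious classes.
    I_eff = p["kA"] * Ia + p["kP"] * Ip + p["kM"] * Im + p["kS"] * Is
    pr_SE = 1 - np.exp(-beta_t * I_eff / N * dt)   # S -> E
    pr_E = 1 - np.exp(-p["gamma"] * dt)            # E -> Ia or Ip
    pr_P = 1 - np.exp(-p["lamP"] * dt)             # Ip -> Im or Is
    pr_out = 1 - np.exp(-p["lam"] * dt)            # Ia/Im/Is -> R

    new_E = rng.binomial(S, pr_SE)
    leave_E = rng.binomial(E, pr_E)
    new_Ia = rng.binomial(leave_E, p["alpha"])     # asymptomatic fraction
    leave_P = rng.binomial(Ip, pr_P)
    new_Is = rng.binomial(leave_P, p["mu"])        # severe fraction
    out_A, out_M, out_S = (rng.binomial(x, pr_out) for x in (Ia, Im, Is))
    return np.array([
        S - new_E,
        E + new_E - leave_E,
        Ip + (leave_E - new_Ia) - leave_P,
        Ia + new_Ia - out_A,
        Im + (leave_P - new_Is) - out_M,
        Is + new_Is - out_S,
        R + out_A + out_M + out_S,
    ])
```

Because each individual either leaves a compartment during a time step or does not, with a fixed per-step probability, the number of steps spent in a compartment is geometrically distributed, exactly as described above.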
We defined β t as being directly proportional to the impact of social distancing at time t, which is given by σ t , such that β t = β 0 · σ t , where β 0 is the transmission rate prior to any social distancing (with σ t = 1). The degree to which a social distancing intervention reduces the overall population contact rate is 1 − σ t . While any sequence of time-varying transmission rates can be implemented in this framework, given the limited data to inform estimates of different transmission rates, we model β t in three distinct segments of time that are characterized by different social contact structures:
(1) baseline prior to any interventions (β 0 = β 0 · 1), assumed to hold until at least 29 February; (2) the San Francisco Bay Area 'work-from-home' initiative, which we model as beginning some time between 1 March and 9 March (β WFH = β 0 · σ WFH ); and (3) the San Francisco Bay Area shelter-in-place, which began on 17 March (β SIP = β 0 · σ SIP ). We included σ WFH and the work-from-home start date as two of the parameters we sampled across a plausible parameter range (see electronic supplementary material, table S4) but allowed σ SIP to be estimated by the model (see 'Fitting the Model' below). We did not fit σ WFH due to concerns about identifiability, as σ WFH modifies the transmission rate for only a brief period of time prior to the first observed death in Santa Clara County (figure 1).
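A minimal sketch of this piecewise-constant transmission rate, with a placeholder work-from-home start date inside the sampled 1-9 March window (illustrative, not this study's code):

```python
from datetime import date

def beta(t, beta0, sigma_wfh, sigma_sip,
         wfh_start=date(2020, 3, 5), sip_start=date(2020, 3, 17)):
    """Piecewise-constant transmission rate beta_t = beta0 * sigma_t.
    The default wfh_start is a placeholder within the sampled 1-9 March
    window; sip_start is the actual shelter-in-place start date."""
    if t < wfh_start:
        return beta0                 # baseline: sigma_t = 1
    if t < sip_start:
        return beta0 * sigma_wfh     # work-from-home period (sampled)
    return beta0 * sigma_sip         # shelter-in-place (sigma_SIP fitted)
```

Any other sequence of time-varying rates could be substituted for the three-segment schedule, as noted above.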
By including asymptomatic and pre-symptomatic individuals, we were able to track 'silent spreaders' of the disease, both of which have been shown to contribute to COVID-19 transmission [47] (electronic supplementary material, table S1). Tracking deaths allowed us to compare our simulations to a data source that was likely more reliable than confirmed cases, particularly in the absence of widespread rapid testing and case detection. Mildly symptomatic cases were defined as those people that show symptoms but do not require hospitalization, while we assumed that all severely symptomatic cases would eventually require hospitalization (figure 2). We also assumed that no onward transmission occurred from hospitalized individuals. We further assumed that all individuals not exposed to the virus begin as susceptible to infection, and that all model compartments other than susceptible and exposed began with zero individuals.

Figure 2. Epidemiological model structure. English letters in boxes designate state variables (model compartments), while the Greek letters β, γ, λ and ρ refer to transition rates between states; the Greek letters α, μ and δ designate transition probabilities between states. We assume that the per capita transmission rate (β t ) is directly proportional to the effectiveness of social distancing (σ t ), such that β t = β 0 · σ t , where β 0 is the transmission rate prior to any social distancing (i.e. σ t = 1). The transition rate between the susceptible and exposed states is given by the force of infection (β t · I eff /N), where I eff is the effective number infectious, equal to the weighted sum of the infectious classes (with weights given by the relative infectiousness of each class: I eff = κ A I A + κ P I P + κ M I M + κ S I S ). See electronic supplementary material, tables S3 and S4 for details on the relative infectiousness (κ) parameters.
We fit β 0 , which describes the initial transmission rate in the absence of any interventions; σ SIP , which describes the proportional reduction in β 0 under shelter-in-place; and E 0 , with which we drew the initial number of exposed individuals as Poisson(E 0 ) + 1. Fitting more than these parameters with only a few weeks of daily deaths (our first model iteration was hosted online on 22 March 2020: figure 1) was unrealistic because parameters were not identifiable. To estimate these three parameters, we assumed point estimates for parameters for which there was at least some convergence in estimates in the literature (electronic supplementary material, table S3); most notably, these parameters include the average time individuals spend in infectious states. We use the inverse of durations as average exit rates, but note the possibility that taking the inverse of durations from individual-based studies (e.g. incubation period [49] and time from symptom onset to death [59, 62]) might not scale appropriately for use as rates in a population-level model. For the remaining parameters, we drew 200 samples from a Sobol sequence (a more efficient method than Latin hypercube sampling for covering an input parameter space [64]) across ranges of plausible values (electronic supplementary material, table S4) to form 200 plausible parameter sets. While sampling over all non-fitted parameters is possible, we decided against this strategy in an effort to focus computation time on the areas of greatest uncertainty.
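Sobol sampling of a plausible parameter box can be sketched with SciPy's quasi-Monte Carlo module; the parameter names and ranges below are illustrative stand-ins for the entries of table S4, not the values used in this study:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical plausible ranges for three non-fitted parameters; the
# names and bounds are illustrative, not taken from table S4.
ranges = {
    "kappa_A": (0.2, 1.0),    # relative infectiousness of asymptomatics
    "sigma_WFH": (0.5, 1.0),  # work-from-home contact scaling
    "alpha": (0.2, 0.6),      # proportion of infections asymptomatic
}

sampler = qmc.Sobol(d=len(ranges), scramble=True, seed=1)
unit = sampler.random_base2(m=8)         # 2^8 = 256 low-discrepancy points
lo, hi = zip(*ranges.values())
param_sets = qmc.scale(unit, lo, hi)     # rescale to the plausible ranges
```

Power-of-two sample sizes preserve the balance properties of the Sobol sequence, which is why this sketch draws 256 rather than the 200 sets used here.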
We note that we use λ to refer to the exponential rates at which individuals leave infectious classes and not force of infection as is common. The κ parameters (electronic supplementary material, table S3) scale β t for individual infectious classes, where κ = 1 indicates no scaling. For simplicity and in the absence of better data, we assumed that only asymptomatic transmission had a scaling factor different from one (specifically, κ A < 1; see electronic supplementary material, table S4). For the rates shown in electronic supplementary material, tables S3 and S4, the unimodal distribution for the time from first symptoms to death has a mean of approximately 23.5 days, a median of 20 days, a mode greater than zero and a moderate right-skew (electronic supplementary material, figure S1 ). This median is between the mean value of 17.8 found by Verity et al. [59, 72] and the range of 35-44 days observed in Bi et al. [73] .
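The shape of such stage-summed delay distributions is easy to verify by simulation. In the sketch below, onset-to-death is the sum of two geometrically distributed dwell times whose means (5.5 and 18 days) are placeholders chosen so that the total mean is roughly 23.5 days; they are not the values used in this study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative check that summing geometrically distributed stage
# durations yields a unimodal, right-skewed delay distribution.
n = 200_000
days_symptomatic = rng.geometric(1 / 5.5, size=n)    # severe symptoms
days_hospitalized = rng.geometric(1 / 18.0, size=n)  # hospital to death
onset_to_death = days_symptomatic + days_hospitalized

mean, median = onset_to_death.mean(), np.median(onset_to_death)
# The right skew puts the median below the mean, matching the pattern
# described for figure S1 (median 20 days versus mean ~23.5 days).
```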
Using the pomp (statistical inference for partially observed Markov processes) package [74] (function mif2) in the R programming language [75], we fit β 0 , σ SIP and E 0 to daily deaths for each of the 200 parameter sets using six independent replicate particle filtering runs. For each independent replicate, we perturbed the starting values for fitted parameters among runs using random samples from a lognormal distribution for β 0 (meanlog = log(0.7), sdlog = 0.17) and σ SIP (meanlog = log(0.2), sdlog = 0.2) and a uniform distribution between 0 and 6 for E 0 . Each individual mif2 replicate run used 300 iterations, 1000 particles, a cooling fraction of 0.50, and a random-walk perturbation for all parameters of 0.02 (using the function ivp for E 0 to designate it as an initial value parameter). The optimization was constrained to positive values for E 0 and β 0 and between zero and one for σ SIP . After the filtering steps were completed, for each mif2 replicate run, log-likelihoods were calculated using the function pfilter 10 times with 10 000 particles each to produce means and standard errors of the log-likelihood for each parameter set.
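The log-likelihood that pfilter estimates comes from a bootstrap particle filter. Below is a compact, generic Python sketch of that algorithm, with a toy latent process standing in for the SEIR model; none of this is the code used in this study:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

def bootstrap_loglik(y, n_particles, rinit, sim_step):
    """Bootstrap particle filter log-likelihood estimate, the quantity
    that pomp's pfilter computes.  Each particle is advanced one
    observation period by sim_step, weighted by the Poisson likelihood
    of the observed deaths, and resampled in proportion to its weight."""
    x = rinit(n_particles)
    loglik = 0.0
    for obs in y:
        x, new_deaths = sim_step(x)
        w = poisson.pmf(obs, np.maximum(new_deaths, 1e-12))
        if w.sum() == 0:
            return -np.inf                      # filtering failure
        loglik += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                              # resample particles
    return loglik

# Toy latent process: expected daily deaths follow a noisy geometric walk.
deaths = np.array([2, 3, 2, 4, 5, 6])
def rinit(n):
    return np.full(n, 2.0)
def sim_step(x):
    x = x * rng.lognormal(0.05, 0.1, size=x.shape)
    return x, x
ll = bootstrap_loglik(deaths, 1000, rinit, sim_step)
```

Averaging the particle weights at each observation, then summing the logs, gives an unbiased estimate of the likelihood of the observed series under the state-space model.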
We computed weekly fits from 1 April through 24 June by withholding data reported after the given fit date. We used COVID-19 death data from The New York Times, based on reports from state and local health agencies (available at https://github.com/nytimes/covid-19-data; figure 3b). Daily deaths were calculated from differences in cumulative death reports. Using these data, which are available for all counties in the USA, our model can be used to fit β 0 , σ SIP and E 0 in any county.
We calculated R 0 as the estimated β 0 multiplied by the infectiousness-weighted average duration of infection (as defined by our model structure) for each of the 1200 parameter sets (using all six estimates from each of the mif2 replicate runs). For each of the 1200 parameter sets, we calculated the effective reproduction number R E for each weekly fit by modifying the calculation for R 0 to scale the estimated β 0 by both σ SIP and the estimated median proportion of the population remaining susceptible across 200 simulated epidemics for the given parameter set.
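For this model structure, the R 0 calculation amounts to multiplying β 0 by the infectiousness-weighted expected time spent in each infectious class. A hedged sketch, with all numerical values chosen as placeholders rather than taken from tables S3 and S4:

```python
# Sketch of the R0 calculation implied by the model structure: beta0
# multiplied by the infectiousness-weighted mean time spent infectious.
# All numerical values below are placeholders.
def r0(beta0, alpha, frac_mild, kA, kP, kM, kS, dA, dP, dM, dS):
    """alpha: proportion asymptomatic; frac_mild: proportion of
    symptomatic cases that are mild; d*: mean days in each class;
    k*: relative infectiousness of each class."""
    asymptomatic = alpha * kA * dA
    symptomatic = (1 - alpha) * (
        kP * dP + frac_mild * kM * dM + (1 - frac_mild) * kS * dS
    )
    return beta0 * (asymptomatic + symptomatic)

val = r0(beta0=0.5, alpha=0.3, frac_mild=0.85,
         kA=0.55, kP=1, kM=1, kS=1, dA=7, dP=2, dM=7, dS=5.5)
# val is about 3.65 with these placeholder values, inside the 3-4 range
# the model consistently estimated.
```

Replacing β 0 with β 0 · σ SIP and multiplying by the proportion still susceptible gives the corresponding R E .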
For each fitted model parameter (β 0 , σ SIP and E 0 ), uncertainty comes from two sources: variation in fitted values among replicate mif2 iterations (uncertainty in the value of a fitted parameter conditional on a given parameter set, i.e. a single conceivable state of the world), and variation in the estimated parameter value across the 200 parameter sets (uncertainty in the value of the fitted parameter given uncertainty in the state of the world). We computed likelihood profiles for the three fit parameters over 30 uniformly spaced points (hereafter, fixed points): from 0.2 to 1.2 for β 0 , 0.01 to 0.6 for σ SIP and 1 to 30 for E 0 . For each parameter and each fixed point, we refit the model for each of 200 unique Sobol-sequenced parameter combinations with the same mif2 settings used in other model fitting steps (except with three mif2 replicates rather than six due to computational cost). We identified the maximum log-likelihood for each fixed point among all 600 fits (200 parameter sets, each fit three times with random starting values). We computed likelihood profiles for three fit dates only (1 April, 13 May and 24 June 2020) because of computational costs (18 000 model fits per profile).
While a set of parameters can produce many simulations, only some of these trajectories are conceivable given the observed data. To simulate using only trajectories that are plausible conditional on the observed data, we drew trajectories from the smoothing distribution using the filter.traj and pfilter functions in pomp with 5000 particles for each particle filter. These filtering trajectories can be viewed as weighted samples from the distribution of unobserved state processes given the observed data, where the weights are determined by the likelihoods from the particle filtering [74]. All simulations were run forward in time from filtering trajectories, which constrains forecasts to continue from a present state matched to the observed epidemic dynamics in order to avoid overly large forecast uncertainty. Unless otherwise noted, all simulations used parameter sets within the top two log-likelihood units for each fit date. For all simulated trajectories, we used 25 filtered trajectories and 25 forward simulations from each filtered trajectory, for a total of 625 epidemic forecasts for each of the parameter sets within the top two log-likelihood units for each fit date (the number of which varied by fit date). Unless noted otherwise, the uncertainty bands that we display for all simulations prior to the fit date contain the central 95% range of outcomes across parameter variation among the fits within the top two log-likelihood units and variation among the 25 filtered trajectories. The uncertainty bands after the fit date (forecasts) contain the same parameter variation and stochastic variation and include additional variation from 25 forward simulations for each of the 25 filtered trajectories. The uncertainty bands on future simulations of deaths also contain additional variation from the Poisson observation process.
These uncertainty bands are simultaneously wide, because of the large number of stochastic simulations, and narrow, because we ignored uncertainty in the parameters listed in electronic supplementary material, table S3 as well as uncertainty in the estimated parameters for each fit; they should therefore be interpreted with caution.
To assess model performance, we compared model-forecasted deaths (simulated 14 days forward in time from filtering trajectories, assuming that the existing levels of social distancing were maintained) to observed deaths. Specifically, we quantified model forecast performance for deaths using the 'quadratic score'. The quadratic score is a commonly used strictly proper scoring rule (a forecasting evaluation metric with a unique maximum that is reached by increasing both the accuracy and the concentration of the predictive distribution around the true value) for a predictive model with a discrete (e.g. Poisson) error distribution [76-79]. We calculated the quadratic score for each simulation over the 14 days following the fit date as
QS = (1/N) · Σ_{i=1}^{N} [ 2 · Pr(y i | m i ) − Σ_{k=0}^{∞} Pr(k | m i )² ],

where i indexes the days since the fit date (from 1 to N = 14), y i is the observed number of new daily reported deaths, m i is the daily prediction from the simulation, and Pr(y i | m i ) is the probability mass on y i given the prediction m i . The sum Σ_{k=0}^{∞} Pr(k | m i )² runs over all non-negative integers k and measures the dispersion of probability mass around the prediction m i ; for a Poisson distribution, this sum can be calculated analytically from the closed-form expression, valid for any x ≥ 0: Σ_{k=0}^{∞} Pr(k | x)² = e^{−2x} · I₀(2x), where I₀ is the modified Bessel function of the first kind. For the model predictions m i , we used the underlying new daily deaths from the simulation's trajectory (D new ), rather than the deaths arising from the Poisson observation process, given that the model-predicted distribution of deaths for a given simulation is best characterized by Poisson(D new ). For each fit date, we show the distribution of quadratic scores across simulations from fitted parameter sets within the top two log-likelihood units (one score per simulation; 625 simulations per fitted parameter set, i.e. 25 forecasts for each of 25 filtered trajectories).
We refrained from calculating a quadratic score comparing the model's predicted cases and the reported cases given that we did not model incomplete case detection. Instead, we simply relied on a visual comparison between the trajectory (curvature) of the predicted cases and reported cases to qualitatively assess the model's predictive accuracy. Specifically, we compared predicted new symptomatic infections to the cases reported one week later in order to account for a week of reporting lag.
Our modelling framework allows for different types, intensities and durations of interventions, and thereby illustrates how these interventions impact dynamics and the resulting number of COVID-19 cases and fatalities through time. Here, we use fits generated from deaths reported prior to 22 April to consider three possible interventions that can be implemented at different times during the simulation, simulating each (for a total of 625 simulations) from the parameter set with the best likelihood using data up to 22 April. To quantify the effectiveness of each intervention scenario (e.g. the effectiveness of infected isolation), we estimated cumulative deaths and the number of new symptomatic infections over time from the simulated epidemics for parameter sets within 2 log-likelihood units of the best fit for each intervention.
In addition to making forward projections under different intervention scenarios, models can help compare past actions taken (e.g. public health interventions) to alternative hypothetical scenarios (e.g. alternative types and timings of non-pharmaceutical interventions). Such comparisons can help to highlight which actions were the most helpful and which could have been improved. Early in an epidemic, a counterfactual analysis is particularly useful for helping local policymakers and the public contextualize the impact of early decisions relative to other possible decisions that could have been made. To assess the impact that existing county orders and resources had on the epidemic trajectory, we limited filtering trajectories to the date on which the counterfactual scenarios diverged (i.e. 17 March, when the county shelter-in-place order went into effect), then simulated forward assuming: (1) shelter-in-place orders went into effect on 17 March; (2) shelter-in-place orders went into effect one week later, on 24 March; and (3) testing and isolation of infected individuals began in addition to the shelter-in-place orders on 17 March (in reality, testing remained limited in Santa Clara County and throughout the USA through the end of April [80]). In particular, we assumed that testing and isolation of symptomatic individuals further reduced their infectious contacts by 80% for severely symptomatic individuals and 70% for mildly symptomatic individuals.
We iteratively fit our model each week from 1 April through 24 June 2020 using data on daily reported COVID-19 deaths up to that date. For each fitted model, we first estimated R 0 and R E to investigate how our understanding of epidemic dynamics changed with increasing data availability, then compared the fit of simulations to out-of-sample data to evaluate how model performance changed over time.
The model consistently estimated that R 0 was between 3 and 4 (with a median among all fits of 3.76), though R E varied considerably, especially among the fits throughout April (figure 3). For example, R E jumped from a confident estimate below one in the 15 April fit to an uncertain estimate spanning one in the 22 April fit, after a week of higher deaths was included. After this volatile April period, R E estimates stabilized near 0.69 by mid-to-late May with very narrow confidence intervals (e.g. on 27 May the model estimated a median R E of 0.642 with a 95% CI of 0.571-0.708 among fits within two units of the top log-likelihood) and continued to vary little throughout June. Even as cases increased in June (figure 3), deaths remained low, leading to little change in R E estimates. The estimated impact of social distancing (σ SIP ) and the transmission rate in the absence of non-pharmaceutical interventions (β 0 ) followed a similar pattern of increased confidence over time, while the initially exposed class (E 0 ) was estimated with large uncertainty in all fits (electronic supplementary material, figures S2 top panel and S3). Profiles over the fitted parameters reflect these patterns: the β 0 and σ SIP profiles showed clear peaks, the E 0 profile was flat across the range of values examined, and the σ SIP profile began showing jaggedness for the last fit date (24 June 2020) when the model was no longer suited to the changing epidemiological situation (electronic supplementary material, figure S2 bottom panel). The difficulty in identifying the initial value parameter E 0 is not surprising given that only early time points are expected to inform the initial state in a stochastic process [81]; in the case of this dataset, the time between the initial conditions (the start of the epidemic) and the first observation reached up to 68 days for some parameter sets.
We ran additional diagnostics to understand the effect of the model's difficulty in identifying E 0 (electronic supplementary material, figure S3) on the other focal parameters and quantities of interest. We found that altered mif2 settings (an expanded range of starting values and random walk standard deviations; see electronic supplementary material, figure S3): (1) permitted larger E 0 values, which allowed for overly late, biologically implausible start dates (electronic supplementary material, figure S3A); (2) did not lead to much better convergence (electronic supplementary material, figure S3B); and (3) despite allowing for much larger estimates of E 0 , did not meaningfully impact the values of other fitted parameters (electronic supplementary material, figure S3C).
Despite the uncertainty in initial conditions, the strong agreement among the 10 replicate log-likelihood estimates (as measured by small standard errors among them; electronic supplementary material, figure S4), the convergence of the six replicate mif2 runs in terms of log-likelihood (electronic supplementary material, figure S5), and the good convergence of the estimates for σ SIP and β 0 (electronic supplementary material, figure S6) indicate that much of this uncertainty is due to the inability of our model to differentiate among alternative parameter sets (parameter identifiability issues; electronic supplementary material, figures S7-S10) and not a misuse or failure of the fitting algorithm or log-likelihood calculations.
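A convergence check of the kind described here can be sketched by computing the standard error among replicate log-likelihood estimates for a single parameter set: if it is small, disagreement between parameter sets reflects identifiability rather than Monte Carlo noise. The replicate values below are hypothetical, and this is not the paper's actual diagnostic pipeline.

```python
import math
import statistics

def replicate_se(loglik_replicates):
    """Standard error among replicate particle-filter log-likelihood
    estimates. A small value suggests the likelihood is evaluated
    consistently, so remaining uncertainty reflects parameter
    identifiability rather than Monte Carlo noise."""
    n = len(loglik_replicates)
    return statistics.stdev(loglik_replicates) / math.sqrt(n)

# Hypothetical replicate log-likelihoods for one parameter set.
reps = [-412.3, -412.1, -412.6, -412.4, -412.2,
        -412.5, -412.3, -412.4, -412.2, -412.5]
print(round(replicate_se(reps), 3))  # small SE: replicates agree closely
```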
Near-term forecasts and model performance also varied substantially over time. Simulations based on the model fit to deaths through 1 April show that while the bulk of simulations predicted declining deaths (figure 4, corresponding to the majority of the R E density being below 1, see figure 3), some simulations showed rapidly increasing daily deaths over time (corresponding to an R E > 1). With the 22 April model fit, uncertainty in whether R E was above or below 1 (figure 3) resulted in very large uncertainty in epidemic trajectories (figure 4). Simulations from later model fits (e.g. those from late May through early June) projected a decline in deaths through the end of June; model fits in June consistently suggested epidemic fade-out by the end of the month, assuming that the existing non-pharmaceutical intervention regime remained in place. Weekly model fits (electronic supplementary material, figure S11) show that the model tended to under-predict deaths early in the study period (early to mid April), when a period of low daily deaths led to low estimates of R E (figure 3, gold shaded violin plots). The model then under- and over-predicted deaths in equal measure in the middle of the study period (late April to mid-May), when sufficient data had likely allowed for more accurate estimates of the shelter-in-place effectiveness. Finally, the model under-predicted deaths again at the end of the study period (mid-May to late June), after shelter-in-place orders were relaxed on 4 May (figure 1) and the single estimated value for the effectiveness of social distancing was no longer realistic.
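The qualitative effect of an R E interval spanning one can be shown with a toy geometric projection: sampling R E across such an interval yields final-week infection counts both below and above the starting value, i.e. trajectories that both decline and grow. The functions, uniform sampling assumption and settings below are illustrative, not the paper's simulation machinery.

```python
import random

def project_infections(re, weeks, init=200):
    """Deterministic geometric projection of weekly new infections."""
    return [init * re ** w for w in range(weeks)]

def forecast_spread(re_low, re_high, weeks=10, draws=1000, seed=1):
    """Sample R_E from an assumed-uniform credible interval and report
    the range of final-week infections. An interval spanning one yields
    trajectories that both grow and decline."""
    rng = random.Random(seed)
    finals = [project_infections(rng.uniform(re_low, re_high), weeks)[-1]
              for _ in range(draws)]
    return min(finals), max(finals)

# Interval echoing the 22 April fits described in the text (0.76-1.34).
lo, hi = forecast_spread(0.76, 1.34)
print(lo < 200 < hi)  # some trajectories decline, others grow
```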
royalsocietypublishing.org/journal/rspb Proc. R. Soc. B 288: 20210811
The model qualitatively matched the curvature in the cases reported within the time window used to fit the model for fits through 13 May (figure 5, blue points), although, as expected, the model predicted far more new symptomatic infections than reported cases (figure 5, difference in left and right vertical axes). Estimates of daily cases through the date of fitting produced realistic estimates of the proportion of the population remaining susceptible (electronic supplementary material, figure S12). However, all future projections failed to capture the trajectory of future, out-of-sample reported cases; in most instances the model predicted declining cases despite the increase in cases observed starting in late May. By 3 June, the model began to fail to capture the increasing cases even within the observed time period for which death data were used to fit the model (figure 5, blue points). This inaccuracy in predictions was likely due to both the changing epidemiological environment (figure 1), which made model assumptions unrealistic (i.e. a constant effectiveness of social distancing, σ SIP , and a constant mortality rate), and the fact that changes in epidemic dynamics become apparent in cases before deaths.
We originally designed and fitted our model (and accompanying interactive website) in part to communicate to local policymakers and the public the impact of the early social distancing interventions in Santa Clara County and the importance of continuing strong non-pharmaceutical interventions for saving lives and preventing an epidemic resurgence. To achieve these goals, we used counterfactual analyses to compare what transpired to alternative unrealized scenarios and to forecast the epidemic under alternative future scenarios with different non-pharmaceutical interventions. We revisit these analyses here, in brief, to illustrate this use of our model. We estimated that a second peak would have been inevitable if all non-pharmaceutical interventions had ended when shelter-in-place was lifted on 1 June 2020, as illustrated here for the single best-fitting parameter set (figure 6, red lines). Across all parameter sets within 2 log-likelihood units of the MLE and stochastic epidemic simulations, we estimated that in this scenario Santa Clara County would have had a median of 6140 deaths (95% CI: 546-19 494) and a peak number of daily new symptomatic infections of 33 193 (95% CI: 12 536-58 259) occurring on 5 July 2020 (95% CI: 26 June 2020-18 July 2020).
Maintaining shelter-in-place until 1 June, followed by less stringent social distancing (50% of baseline contacts) combined with strong symptomatic case isolation (removing an additional 80% and 70% of contacts from severe and mild symptomatic infections, respectively), would have allowed for higher background contact rates (e.g. more businesses reopening) and yet fewer deaths, as predicted by our single best model fit (figure 6). For a range of possible combinations of symptomatic case isolation efficiencies and background social distancing (electronic supplementary material, figure S13), we found an overlap in confidence intervals for deaths, but higher median estimated deaths at the weakest levels of social distancing in the general population. For reference, the median number of estimated deaths under maintained shelter-in-place is shown by the horizontal black line, with 80% and 95% CIs in dashed and dotted lines, respectively (electronic supplementary material, figure S13). These confidence intervals span a wide range because our estimated R E values as of 22 April ranged from 0.76 to 1.34, which led to some simulated epidemics growing and others declining through time.
We proposed that, without widespread testing availability before the end of shelter-in-place, a hypothetical alternative strategy would have been adaptive triggering, in which social distancing orders are intensified and relaxed as hospitalizations exceed and fall below critical thresholds, respectively. However, because the estimated R E for Santa Clara County was approximately one (and the confidence interval included one on 22 April), a strategy that periodically reduces the strength of social distancing may have led to an overall increase in cases that would not be reversed when shelter-in-place was reinstated. We found that an adaptive triggering strategy that alternates between social distancing levels that reduce transmission to 20% and 80% of baseline could be effective in keeping cases and deaths low (electronic supplementary material, figure S14). This method would have kept the epidemic within the capacity of the healthcare system, but would have resulted in prolonged cycles of epidemic resurgence and control, continuing until herd immunity was reached through recovery of infected individuals or vaccination.
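The cyclical behaviour of adaptive triggering can be sketched with a toy weekly-generation model in which the contact level toggles between two values whenever new infections cross upper and lower triggers. Everything here (thresholds, time step, parameter values, triggering on infections rather than hospitalizations) is an illustrative assumption, not the paper's analysis.

```python
def adaptive_trigger_sim(r0=3.76, weeks=80, pop=1_928_000,
                         tight=0.20, relaxed=0.80,
                         upper=200, lower=50, init=100):
    """Toy weekly-generation model of adaptive triggering: contacts drop
    to 20% of baseline when new infections exceed an upper trigger and
    return to 80% of baseline when they fall below a lower trigger."""
    s, infected, contact = pop - init, float(init), relaxed
    series = []
    for _ in range(weeks):
        if infected > upper:
            contact = tight        # tighten restrictions
        elif infected < lower:
            contact = relaxed      # relax restrictions
        infected = infected * r0 * contact * s / pop
        s -= infected
        series.append(infected)
    return series

traj = adaptive_trigger_sim()
print(max(traj) < 5_000)  # cycles of resurgence and control stay bounded
```

Because R 0 × 0.20 is below one and R 0 × 0.80 is above one, the trajectory oscillates indefinitely rather than fading out, mirroring the prolonged cycles described in the text.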
In simulations of counterfactual scenarios, we found that an additional 57 (95% CI: 10-143) lives would have been lost if shelter-in-place orders had been delayed by even a week, and 26 (95% CI: 3-51) deaths could have been averted if testing and isolation of symptomatic individuals had been available from the start of shelter-in-place (electronic supplementary material, figure S15).
During an unfolding pandemic, modelling is an essential tool for tactical decision-making, strategic planning and communicating qualitative scenarios [17]. Many early COVID-19 models played a critical role in highlighting the importance of social distancing to governments and to the public (e.g. [2-5]). Models like our own helped communicate to the public that 'flattening the curve' and slowing transmission was not a short-term endeavour, but also that early and sustained interventions had major benefits for local public health. Despite their importance, it is often unclear how quickly and for what reasons early models become obsolete, given that retrospective analyses are not usually conducted on the first iterations of models. Here, we presented an example retrospective analysis of our early COVID-19 model to ask the following questions:
(1) What did the model suggest about epidemic metrics, dynamics, and the impact of non-pharmaceutical interventions, and how did these estimates change over time? (2) How accurately did the model predict epidemic dynamics going forward? (3) For how long was the model accurate enough to be useful, and what limited its longer-term accuracy?
Our model stably estimated R 0 between April and June with an overall median of 3.76 (figure 3), comparable to values estimated elsewhere (e.g. [82]), and identified that R E declined substantially after the shelter-in-place order was enacted, to near or below one. However, predicted future epidemic dynamics were highly uncertain given that many model parameters had large uncertainty (especially for the 1 April model fit, see electronic supplementary material, figures S2 and S7), estimated credible intervals on R E spanned one in model fits until early-to-mid-May (figure 3), and R E estimates were variable from week to week when data were sparse (figure 3). For example, the inclusion of a week with higher deaths (most of which occurred in long-term care facilities [83], a distinction that is not captured in our model, which assumes a homogeneous population) led estimated R E to jump to span one on 22 April (figure 3). The volatility and uncertainty in R E estimates highlight the difficulty of inferring epidemic metrics from deaths alone, which are a noisy and lagged indicator of the underlying epidemic dynamics. Despite these limitations, early models like ours play a critical role in estimating coarse epidemiological metrics and dynamics (e.g. R 0 and R E ), which are fundamental for quantifying and comparing the efficacy of various intervention scenarios and predicting the future course of the epidemic.
Our model initially gained accuracy in predicting epidemic dynamics as additional data increased parameter identifiability (April to mid-May 2020; figure 4, bottom panel) but then began to decline in performance as the model assumptions became too simplistic to capture the changing epidemiological context (late May to June 2020). Out-of-sample predictions of deaths suffered from high uncertainty during the early period (e.g. on 22 April), while predictions near the end of the study window (e.g. on 3 June) became overly confident that the epidemic would die out. Because of limited and variable testing capacity, we relied on a visual comparison of the curvature in model predictions of new symptomatic infections to observed daily cases (lagged one week for a plausible reporting lag). Model fits through 13 May qualitatively matched the curvature in the reported cases within the time window used to fit the model, but by 3 June the model failed to capture even the increasing cases within the observed time period (figure 5, blue points). Thus, our model illustrates that trajectories of cases can be captured by relatively simple mechanistic models based only on reported deaths, but only for a limited period of time until epidemiological conditions change.
(c) Limitations to long-term accuracy (Q3)
Predictions deteriorated as the epidemiological environment (figure 1) began to deviate further from our model assumptions. We fit a simple step function for the impact of non-pharmaceutical interventions with the aim of balancing identifiability in the face of limited data and the accuracy of early predictions. However, this assumption restricted the utility of our model once Santa Clara County began to relax social distancing orders (figure 1). Additionally, the relationship between cases and deaths fundamentally changed between the first and second waves. COVID-19 deaths remained relatively flat through July and August even while reported cases surged above 300 per day by the third week in July [83]. This pattern occurred in many places in the US during the summer resurgence [84] and may reflect some combination of differences in personal protective behaviours and social distancing adherence across disease severity risk groups, resulting in a larger share of cases occurring in people less vulnerable to severe disease and death. Improved standards of care, safety protocols and the availability of personal protective equipment in long-term care facilities, as well as increased testing to improve the detection of asymptomatic or mildly symptomatic cases, all likely played a role as well.

Figure 6. Maintaining non-pharmaceutical interventions after 1 June, when early shelter-in-place restrictions were relaxed, is critical for preventing a devastating resurgence. Maintaining shelter-in-place at the σ SIP value predicted by the model (here σ SIP = 0.40) (gold) or test-and-isolate (σ SIP = 0.50 plus an additional 80% effectiveness of test and isolate for severe infections and 70% for mild infections) (blue) strategies over long periods are necessary to prevent a major epidemic resurgence (red) following the end of the initial shelter-in-place order on 1 June.
Generally, real-time models face opposing forces: additional time means additional data to aid in parameter estimation; however, as more time passes, the epidemiological environment deviates further from that which the early model was built to address, eventually reducing model accuracy [17, 19]. Our case study and others (e.g. [85]) clearly demonstrate these opposing forces. Rapidly changing US policy forced many COVID-19 models to consider continuous-time estimates of movement (e.g. [42, 86-89]) instead of constant intervention strengths. Yet even these more detailed models faced a changing epidemiological context, which included the implementation of mask mandates and other behavioural changes (e.g. [90, 91]), seasonality modifying viral kinetics and behavioural contact patterns, holidays altering mixing and travel patterns, and emerging virus variants with new epidemiological properties. In general, changes such as these cause unexpected variation in disease dynamics across time and space, limiting the accuracy of long-term forecasts [13, 17, 19, 20] and potentially reducing the accuracy of our model, which relied partially on fixed parameters derived from early outbreaks in other locations (e.g. China [51, 57, 61, 66], Italy [44] and Singapore [50]). While simplifying assumptions are necessary in early and real-time models, those assumptions must be frequently re-evaluated to ensure continuing accuracy [11, 19, 22, 24, 27, 32, 33]. We made a series of simplifying assumptions for our early model in an attempt to overcome a lack of data (electronic supplementary material, table S1). While we relaxed some of these in a follow-up model [42], several alternative technical decisions could have improved our model from the beginning.
For example, we used single compartments for each state rather than subdividing the infectious states into multiple compartments, resulting in geometrically distributed transition periods instead of more realistic Erlang-distributed periods [45, 46]. Though it saved us only one parameter, we also assumed a Poisson observation process rather than the more flexible negative binomial observation process, which is often estimable without much difficulty (e.g. [25, 72]). Finally, we did not consider uncertainty in the estimated parameter values, for example by using importance sampling [92].
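The difference between these two dwell-time assumptions can be demonstrated directly: splitting a stage into k sequential exponential substages, each with rate k/mean, yields an Erlang(k) distribution with the same mean but a k-fold smaller variance than a single exponential stage. The simulation below is an independent illustration of this standard result, not the paper's model code.

```python
import random

def dwell_time(mean_days, stages, rng):
    """Time to pass through `stages` sequential exponential substages,
    each with rate stages/mean_days, i.e. Erlang(stages, stages/mean).
    stages=1 recovers the single-compartment exponential case."""
    rate = stages / mean_days
    return sum(rng.expovariate(rate) for _ in range(stages))

rng = random.Random(42)
n = 20_000
expo = [dwell_time(5.0, 1, rng) for _ in range(n)]  # one compartment
erl = [dwell_time(5.0, 4, rng) for _ in range(n)]   # four substages

mean_e = sum(expo) / n
mean_k = sum(erl) / n
var_e = sum((x - mean_e) ** 2 for x in expo) / n
var_k = sum((x - mean_k) ** 2 for x in erl) / n
print(abs(mean_e - mean_k) < 0.2, var_k < var_e)  # same mean, less spread
```

The tighter Erlang distribution concentrates transition times around the mean, which is why subdivided compartments produce more realistic generation intervals than single compartments.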
Our case study highlights key lessons for the practice of early, real-time modelling of emerging epidemics. In our analysis, we focused on quantifying near-term forecast accuracy, a common strategy for evaluating model performance [85, 93], which could be used more frequently from the outset to monitor changes in model performance over time. We also advocate tracking variation in parameters and signs of declining parameter identifiability. Together, systematically applying these approaches will provide earlier warning signs of model inaccuracies and help to ensure that hidden mis-specifications are identified and corrected promptly. Early models will inevitably be imperfect reflections of reality constrained by limited data; using them responsibly requires considering and communicating uncertainty, a benefit of stochastic frameworks like that employed here [25]. Our model struggled to fit a relatively complex structure with a short time series of available data, particularly early in the epidemic, which resulted in some implausible epidemic dynamics (for example, unrealistically rapid depletion of susceptible individuals for some parameter sets from 1 April fits; electronic supplementary material, figure S12). This further highlights the difficulty of fitting complex models early in an epidemic and reinforces the importance of thorough evaluation of early predictions in order to avoid making biologically implausible claims. Our model was also particularly limited in its ability to estimate the initial number of infected individuals (electronic supplementary material, figures S2 and S3). In light of our and others' struggles with estimating E 0 for COVID-19 and other emerging infectious diseases [24, 29, 65], surveillance programmes (including genomic surveillance [94]) may assist with both disease control and modelling in the early stages of an epidemic.
Regularly updated, centralized databases for parameter estimates (e.g. [95]) and relevant time series data on epidemiology and mobility (e.g. [41, 96, 97]) are major assets for modelling in emerging epidemics. These tools facilitate more rapid development of early models and streamline comparisons among models [16]. Changes to any model, especially those addressing flaws in previous versions, should be clearly communicated and shared with adequate documentation to ensure that outdated versions of the model are not used and do not guide others' model development [17]. However, especially in the early stages of a pandemic, a relatively common and simple mechanistic modelling framework with a long and robust history, as outlined here, may provide quicker and more reliable insight into disease dynamics than developing new model structures from scratch. For example, SIR-type models can do surprisingly well at predicting epidemic metrics even with limited data [98-100]. Furthermore, simple transmission functions can be an effective alternative to more complicated functional forms, as illustrated here for the period of strong social distancing between approximately mid-March and mid-May 2020 (figure 1). Simple mechanistic models, unlike phenomenological models (e.g. statistical curve fits), also allow for scenario analysis through alterations in inputs and parameters, which enables longer-term forecasts comparing alternative interventions.
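A minimal SIR model with a step-function transmission rate, of the general kind described here, fits in a few lines. All parameter values below are illustrative assumptions rather than the paper's fitted estimates, and the discrete daily time step is a simplification.

```python
def sir_step(days=200, pop=1_928_000, init=100,
             beta0=0.4, gamma=1 / 7, sip_start=30, sigma_sip=0.8):
    """Minimal discrete-time SIR in which transmission drops by a fixed
    factor (a step function) once shelter-in-place begins."""
    s, i, r = pop - init, float(init), 0.0
    incidence = []
    for day in range(days):
        beta = beta0 * (1 - sigma_sip) if day >= sip_start else beta0
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        incidence.append(new_inf)
    return incidence

curve = sir_step()
# Incidence grows before the order takes effect and declines after it.
print(curve[29] > curve[0] and curve[199] < curve[35])
```

Because the intervention enters as a single multiplicative step, scenario analysis amounts to changing `sigma_sip` or `sip_start` and re-running, which is exactly the kind of input alteration that phenomenological curve fits cannot support.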
Given the similarity between our model and others developed concurrently (e.g. [91]), we recommend developing infrastructure to facilitate collaboration, rapid communication, and workflows to minimize duplication of effort, facilitate troubleshooting, and aggregate and analyse projections across sets of models [101]. Further, we identified human behavioural changes as a key source of inaccuracy in our model predictions, suggesting the importance of collaboration between disease modellers and behavioural scientists, as well as guidelines for the proper incorporation of mobility data [88]. Greater engagement between policymakers and scientists, particularly to clarify the types and timings of interventions being considered, the importance of key modelling decisions, and the differences between early models considered in policymaking (e.g. the strengths and weaknesses of both phenomenological and mechanistic models), would ensure that a model is appropriately designed (e.g. distinguishing between symptomatic and asymptomatic infections to capture the dynamical implications of testing and isolating only symptomatic infections) and applied to relevant scenarios. Despite all that has been learned about the impact of non-pharmaceutical interventions such as social distancing and mask wearing, and the approval of effective clinical therapies and vaccines, the US experienced two additional major epidemic waves within a year that each dwarfed the one in the early epidemic and control period we studied here. Given the order-of-magnitude difference in deaths and the over three orders-of-magnitude difference in cases observed between the spring 2020 and winter 2021 periods [83], it may be tempting to conclude that non-pharmaceutical interventions and public health orders did not work, or were too economically and socially costly to justify their use. However, this is a dangerous conclusion.
Mechanistic models like those we present here make it clear that, however imperfect, these interventions saved large numbers of lives: as of 1 July 2021, Santa Clara County had seen 2201 total deaths [83], a terrible toll, but one that is only 30% of our median prediction for an unmitigated epidemic. As of July 2021, vaccination coverage among adults in Santa Clara County has reached approximately 75%, which, combined with the anticipated eligibility of younger children [102], portends an end to the epidemic locally in the coming months. Even during the peak of the winter surge, the county saw just over 700 concurrent hospitalizations, far short of our median estimated value of 12 975 (95% CI: 760-28 927) that could have occurred without control measures in place. Although the US COVID-19 response clearly could have been better at controlling transmission, illness, and death, mechanistic models make it clear that the situation also could have been much worse without the control measures that remained in place, which were at least in part motivated by early models. Moving forward with COVID-19 and in future epidemics, models that incorporate changes in contact behaviour, population immunity derived from natural infection and vaccination, population heterogeneity in behaviour and immunity, and changes in immunity over time due to natural waning and emerging immune-evading variants will be critical for determining how to safely transition between initial and long-term interventions.
Data accessibility. Data used in this study are available at: https://github.com/nytimes/covid-19-data. Code used to produce the results in this study is available at: https://github.com/marissachilds/COVID19_early_model. Output from model fits and simulations is available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.cvdncjt4t [103].
Several micronutrients have been investigated as potential therapies for preventing or treating COVID-19. The aim of this review is to discuss specific micronutrients, namely melatonin, zinc, selenium, vitamin C and vitamin D, and the evidence behind their efficacy in treating or preventing COVID-19. Micronutrients as a potential therapy for COVID-19 have gained attention recently due to their anti-inflammatory properties, cost effectiveness, availability to the public and relative safety.
Discussion:
SARS-CoV-2 is a non-segmented, enveloped, positive-sense RNA virus which targets the nasal and bronchial epithelium of host cells [6]. SARS-CoV-2 enters these cells via its spike protein by attaching to angiotensin-converting enzyme 2 [ACE2] receptors, whereby it replicates, forming new viral structures which are released from the host cell to continue this process [7, 8]. Since nasal epithelial and bronchial cells have a high expression of ACE2 receptors, SARS-CoV-2 targets these cells and begins the replication cycle there. In most cases, there is a local immune response via interferon-beta [IFN-β] and C-X-C motif chemokine ligand 10 [CXCL-10] which results in upper respiratory tract infection and produces symptoms such as cough, fever and rhinorrhea [8]. This local immune response usually resolves the infection; however, in severe instances, the infection progresses and the virus infects type II alveolar cells. Infection of these cells propagates and destroys them, which results in diffuse alveolar damage [9]. During this phase, multiple interleukins and cytokines such as IL-1, IL-2, IL-6, IL-8, IL-10, IFN-γ, CCL3, IP-10 and TNF-α are released, inducing a "cytokine storm" [10]. The "cytokine storm" attracts neutrophils, CD4+ helper and CD8+ cytotoxic T cells, which fight the infection but in doing so produce a constant inflammatory state. The inflammatory response promotes apoptosis and necrosis of the surrounding tissue, which then induces further inflammation. This damages both type I and II alveolar cells and, as a result, increases the permeability of blood vessels and subsequently causes acute respiratory distress syndrome (ARDS) [9, 10].
Given the pathophysiology of COVID-19, selecting micronutrients that inhibit any of these pathways would be beneficial for treating or preventing a COVID-19 infection.
Melatonin is produced by the pineal gland and released at nighttime to regulate the sleep-wake cycle and blood pressure [11]. Melatonin also has a role in the immune system, as it has both anti-inflammatory and immunomodulatory properties [11].
Regarding its anti-inflammatory properties, melatonin has been shown to downregulate pro-inflammatory cytokines such as IL-2, IL-6, IL-12 and TNF-α [12, 13]. These pro-inflammatory mediators, especially NF-κB, are all increased in acute lung injury and in ARDS, which occur during infection with COVID-19 [13]. As an immunomodulator, melatonin increases the proliferation of both T and B lymphocytes [14].
A meta-analysis of 22 trials of varying melatonin doses showed that melatonin reduced TNF-α and IL-6 levels [15]. Because the doses varied and no effective dose was established, melatonin's utility as a treatment for COVID-19 cannot be extrapolated from this result. Regarding its safety, melatonin is a safe supplement, even at doses of 1 gram a day [13]. As a preventative treatment for COVID-19, a recent observational study of 26,779 individuals by Zhou et al. showed a 28% reduced likelihood of a positive SARS-CoV-2 laboratory test in all combined populations when adjusting for age, sex, race and other comorbidities [16]. Although this is encouraging for melatonin as a preventative measure for COVID-19, there were several limitations to the study, as cited by the authors.
One limitation was that their dataset did not include individuals who were asymptomatic or had minimal symptoms, and therefore melatonin use in this group could not be evaluated [16]. Based on these findings, further studies must be done to evaluate the efficacy of melatonin as a preventative treatment; however, the initial results are encouraging.
Zinc is a trace element that is required for the functioning of enzymes, transcription factors, cellular signaling and, most pertinent to this topic, immune functioning [17]. Zinc has a role in immune function by acting as an anti-inflammatory agent as well as a signaling molecule in the immune system. As an anti-inflammatory, zinc reduces the production of IL-6 as well as IL-1, which are pro-inflammatory cytokines [18]. Zinc affects the immune system as an important signaling molecule in the production of IL-2, IFN-γ and IL-12, which stimulate CD8+ T cells [19]. In contrast, patients with zinc deficiency characteristically have a poor immune response due to a lack of activation of both T helper cells and CD8+ T cells, as well as a blunted production of IFN-γ, owing to the absence of zinc [20]. Zinc does have antiviral properties, as in vitro studies have shown that zinc plays a role in inhibiting RNA-dependent RNA polymerase in influenza, RSV and the SARS coronavirus [21, 22]. These studies were limited to zinc coupled with ionophores and were done in vitro, so one cannot extrapolate whether the findings apply to SARS-CoV-2. The use of zinc as a therapy for COVID-19
is an ongoing area of research, and currently only clinical trials are underway for its use against COVID-19 [23]. What has been found, however, is that patients with COVID-19 had poorer outcomes, as measured by longer hospital stays and increased mortality, if they had hypozincemia [24, 25]; as shown by Jothimani et al., the odds ratio of developing a complication from COVID-19 was 5.54 in zinc-deficient patients [25]. As a preventative treatment, zinc does increase mucociliary clearance by increasing the ciliary beat frequency of the respiratory epithelium [26, 27]. This could prevent infection, as viruses such as SARS-CoV-2 damage the ciliary epithelium, which can potentiate viral entry and secondary bacterial infections [26]. Zinc is also responsible for preserving tissue barriers, especially in lung parenchyma, which prevents viral entry [26]. Although both properties hypothetically show the potential of zinc as an adjunct for prevention, the doses of zinc at which these effects occur have not been established. Interestingly, the intake of too much zinc (300 mg/day for 6 weeks) has been shown to impair neutrophil and lymphocyte function [28]. Excessive zinc intake can therefore blunt a local immune response and make one more susceptible to infections.
Selenium is a trace mineral required for the formation of selenoproteins and selenite, which act as anti-inflammatories and antioxidants, prevent thrombosis and defend against viral infections [29, 30]. The mechanism by which selenium exerts its antiviral effect is inhibition of the enzyme protein disulfide isomerase, which is responsible for viral glycoprotein attachment, thereby preventing the virus from entering its host cell [30]. Selenium also decreases IL-1 and IL-6, which are pro-inflammatory cytokines [33]. Moreover, it was reported that Enshi, a city of Hubei, China, which had the highest selenium concentration, had the fastest recovery rate from COVID-19, triple that of the rest of the cities sampled [33]. A similar study done in Germany by Moghaddam et al. also showed that low blood selenium levels were associated with higher mortality in COVID-19 patients [34]. These studies showed only correlation, not causation, so further investigation would be needed. Selenium does play an important role in the immune system; however, no data are available on its use as an adjunct for the treatment or prevention of COVID-19.
Vitamin C, or ascorbic acid, is a water-soluble essential vitamin that may only be obtained through consumption of nutrient-rich foods [35]. Vitamin C protects cells against oxidative damage by scavenging reactive oxygen species (ROS) and is an essential cofactor for enzymes required in the production of cortisol, vasopressin, and catecholamines [36]. Vitamin C is present in high intracellular concentrations in leukocytes but is rapidly depleted during infection, causing a shift in the ratio of antioxidant defenses to oxidant generation. This increase in oxidant generation causes proinflammatory cytokine release and initiation of the inflammatory cascade [35].
Low levels of vitamin C are prevalent among patients with chronic comorbidities, including diabetes, chronic obstructive pulmonary disease (COPD), and hypertension, all of which are predictors of mortality in COVID-19 infection [37] . Low levels of vitamin C are also widely prevalent among critically ill patients, patients with acute respiratory infections, and patients with acute respiratory distress syndrome [35, 36] .
Supplementation with high-dose intravenous vitamin C has been demonstrated to be safe and feasible while significantly reducing vasopressor support, limiting organ injury, decreasing the duration of mechanical ventilation, and decreasing ICU stay [36] .
In severe cases of COVID-19, a rapid increase in cytokines causes neutrophil sequestration in lung tissues. This "cytokine storm" damages the alveolar capillaries and is the underlying mechanism in acute respiratory distress syndrome (ARDS) [36] . The proinflammatory cytokines TNF-alpha and IL-1 rapidly increase during acute infection with SARS-CoV-2 and promote increased secretion of IL-6 and IL-8, thereby facilitating the ongoing pro-inflammatory state in COVID-19 infection.
Vitamin C is known to counteract the increase in TNF-α and to increase the anti-inflammatory cytokine IL-10, which provides negative feedback on IL-6 [35] .
The CITRIS-ALI trial demonstrated that intravenous vitamin C supplementation of 50 mg/kg every 6 hours for 96 hours did not significantly alter qSOFA scores or levels of inflammatory markers compared to placebo in patients with sepsis and ARDS, but it did significantly improve 28-day all-cause mortality [38] . A recently published, multicenter, double-blinded, randomized controlled clinical trial demonstrated that 12 g of vitamin C infused intravenously twice daily for seven days in patients with severe COVID-19 infection admitted to the ICU did not significantly improve invasive mechanical ventilation-free days, qSOFA scores, or inflammatory markers at 28 days.
Consistent with the CITRIS-ALI trial, this study did demonstrate a significant improvement in ICU mortality in the high-dose vitamin C group [36] . These two trials suggest that supplementation with high-dose intravenous vitamin C may provide some benefit in severely ill, vitamin C-deficient patients [35] . However, adverse events in the setting of high-dose vitamin C supplementation in the absence of deficiency justify caution against advising this as a preventive strategy. Most concerning is the association of high doses of vitamin C with the formation of kidney stones, especially among patients with high baseline oxalate levels [39] .
Vitamin D

Vitamin D is a fat-soluble nutrient that acts as a steroid hormone precursor.
Ultraviolet B (UVB) radiation exposure of the epidermis transforms 7-dehydrocholesterol to cholecalciferol, the circulating precursor of vitamin D [35] . Vitamin D is also present in foods and supplements as ergocalciferol and cholecalciferol, both of which undergo further metabolism in the liver and then in the kidneys to the active form 1,25-dihydroxyvitamin D [35, 39] . Risk factors for vitamin D deficiency include age, smoking, obesity, hypertension, and diabetes [40] . Epidemiological observations of influenza A epidemics have implicated vitamin D as the driver of the "seasonal stimulus" hypothesis, suggesting that peak influenza cases in the winter months coincide with reduced sun exposure [35] . Vitamin D also supports pulmonary defenses; in their absence or dysfunction, the risks of infection and pulmonary edema are increased [35] . Vitamin D may decrease or prevent cytokine storm in patients with COVID-19 through immunomodulatory effects on the production of pro-inflammatory cytokines [40] .
Pro-inflammatory cytokines are produced by T helper type 1 cells, and vitamin D reduces this cell response by increasing the T helper type 2 cell response, which produces anti-inflammatory cytokines [35] .
Low vitamin D levels have been consistently associated with acute respiratory infections in observational studies, and the risk of ARDS may also be increased in this setting [39] . Vitamin D deficiency has been consistently demonstrated in COVID-19 patients and is associated with poorer outcomes [41] . Supplementation with vitamin D has been suggested to reduce mortality in COVID-19, but trial evidence has been inconsistent to date [39] . While large-scale randomized controlled trials are still needed to support specific recommendations, expert opinion supports avoiding vitamin D deficiency and supplementing in accordance with government guidelines in the setting of deficiency and COVID-19 infection [42] .
The COVID-19 pandemic is posing severe challenges to healthcare systems, leading to shortages of personal protective equipment (PPE) that leave frontline healthcare workers (HCWs) at grave risk. Among the lessons learned during the outbreaks of Severe Acute Respiratory Syndrome coronavirus (SARS-CoV) in 2003 and Middle East Respiratory Syndrome coronavirus (MERS-CoV) in 2012 were the need for stringent infection control in healthcare settings, clear criteria for isolation and quarantine measures, and continued evaluation of the effectiveness of PPE in infection prevention.1,2 The World Health Organisation (WHO) has warned of the risk of the global supply of PPE rapidly depleting,3 and hospitals have faced severe shortages of PPE,4 leaving healthcare workers dangerously ill-prepared to care for patients with COVID-19.5 Studies have shown that N95 respirators (N95s) offer good protection against viral respiratory pathogens, especially during aerosol-generating procedures or when a patient's COVID-19 status is positive or unknown.6-8 Among the various forms of PPE, N95s are of critical importance for confronting the SARS-CoV-2 pandemic, so strategies to optimise their usage are vital. These should focus on conserving N95 use while providing adequate protection for HCWs working with COVID-19 patients. HCWs may need to have contact with confirmed or suspected patients and their surroundings within inpatient facilities, often over an entire work shift. Extended use, or even reuse, of N95s has therefore been suggested, with recommendations in place to use respirators for up to 4 hours if N95s are in short supply.9-11 A study that screened 27 countries or regions revealed that only 5 countries (19%) allowed extended use; 2 countries (7%) mentioned reuse; and 3 countries (11%) recommended both strategies for rationing N95 respirators.12
Several ways to decontaminate N95s for reuse have also been well studied.13 The potential extended use of N95s requires serious consideration, as the demand for respirators will increase in response to rising case counts, especially among those with severe disease who require intensive care and prolonged hospitalization. One in five confirmed cases in a hospital setting in China were reported to have severe symptoms requiring long hospital stays,14 while a quarter of those required admission to the intensive care unit (ICU) due to complications of acute respiratory distress syndrome, arrhythmia and shock, which significantly increase hospital stay.15 The same study observed a median hospital stay of 10 days (IQR 7-14) among those discharged. Multiple N95s are consequently required per patient to avoid transmission between HCWs and patients.
In Singapore, a city of ~5.6M people in the Malay Archipelago, 51,197 cases have been reported as of 2020-07-28.16 This number is expected to rise in the coming months, making it paramount to project the number of N95s required to ensure adequate provisioning. Here we estimate the number of N95s required over the course of the epidemic under three strategies: single-use, extended-use and prolonged-use. By estimating demand, we aim to aid planning efforts by assessing what preparations are required over the coming months for the healthcare system to operate with the maximum possible protection for its workers. Although described for the Singapore setting, our results hold for planners in similarly sized cities elsewhere.
We developed a Susceptible-Exposed-Infectious-Recovered (SEIR) model to estimate the number of COVID-19 cases present in healthcare facilities and the necessary N95 use under three policies: single-use, extended-use and prolonged-use. For planning purposes, we developed three variants of the SEIR model describing relative outbreak sizes, labelled mild, moderate and severe.
In the SEIR model, we classified the outcomes of COVID-19 infection into six levels relating to the need for hospitalization or ICU admission and the risk of onward transmission (Figure 1). These were: level 1 for asymptomatic infections; level 2a for mild symptomatic cases that remain undiagnosed at healthcare facilities; level 2b for mild symptomatic ambulatory cases which are identified; level 3 for those requiring hospitalisation but not ICU care for their recovery; level 4 for those entering the ICU on admission; and level 5 for those among levels 3 and 4 who die. We combined levels 3-5 as severe and levels 1-2 as mild. We also combined levels 2b-4 as known and levels 1-2a as cryptic, i.e. cases which are not documented by the healthcare system.
We varied the parameters from the mild to the severe scenario with respect to: (i) the proportion of cases seeking medical attention (30%, 50% and 70%, respectively); (ii) the proportion of cases requiring hospitalisation (10%, 20% and 30%, respectively); (iii) the proportion of hospitalised cases admitted to the ICU (20%, 30% and 40%, respectively); and (iv) the case fatality rate (4.2%, 1.8% and 0.43%).17
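The outbreak dynamics described above can be sketched as a simple deterministic SEIR system. The sketch below is a minimal illustration, not the paper's calibrated model: the parameter values (R0, incubation and infectious periods, population size) are illustrative assumptions for a Singapore-sized population.

```python
# Minimal deterministic SEIR sketch; Euler-stepped with a 1-day time step.
# All parameter values are illustrative assumptions, not the paper's calibration.

def simulate_seir(n=5_600_000, r0=2.5, incubation_days=5.2,
                  infectious_days=7.0, days=365, seed=1):
    """Return the daily number of newly infectious cases."""
    beta = r0 / infectious_days      # per-capita transmission rate
    sigma = 1.0 / incubation_days    # rate of leaving the exposed class (E -> I)
    gamma = 1.0 / infectious_days    # recovery rate (I -> R)
    s, e, i, r = n - seed, 0.0, float(seed), 0.0
    new_cases = []
    for _ in range(days):
        inf = beta * s * i / n       # S -> E flow
        prog = sigma * e             # E -> I flow
        rec = gamma * i              # I -> R flow
        s, e, i, r = s - inf, e + inf - prog, i + prog - rec, r + rec
        new_cases.append(prog)
    return new_cases

# Fraction of the population ever becoming infectious over one year.
attack_rate = sum(simulate_seir()) / 5_600_000
```

From the daily incidence series, hospital and ICU census curves follow by applying the scenario proportions (medical attention, hospitalisation, ICU admission) and length-of-stay assumptions.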
Three N95 use policy scenarios were suggested based on the frequency of these inpatient visits (Supplementary Figures 2 and 3).
Suspected cases are divided into negative and positive cases based on two consecutive coronavirus RT-PCR tests. Negative cases include those with mild symptoms that remain undiagnosed at healthcare facilities (level 2a) and suspected but negative cases. Positive cases include mild symptomatic ambulatory cases (level 2b), those requiring hospitalisation (level 3) and ICU cases (level 4). All suspected cases, quarantined in isolation wards, were assumed to wait three days in the isolation ward until laboratory confirmation. After three days, mild symptomatic ambulatory cases (level 2b) and those with mild symptoms that remain undiagnosed (level 2a) are discharged from the isolation ward.
To assess the effects of case misclassification, six ratios of suspected-but-negative cases to positive cases were used, ranging from 0 to 10 (0, 0.5, 1, 2, 5, 10). A zero ratio (1:0) corresponds to no case misclassification; conversely, a ratio of 10 indicates significant uncertainty in the preliminary diagnosis upon admission, where a substantial proportion of patients are initially identified as COVID-19 positive but later confirmed to be negative. A separate set of analyses was conducted in which rapid testing (equivalent to a zero ratio) was implemented 2 to 5 months after the epidemic start, to estimate its potential effect on N95 usage.
A total of 4.5 million infections (80% of the total population) were projected to occur in the severe scenario and 2.6 million (45.6%) in the moderate or mild scenario over the span of a year. Hospitalized cases peaked at approximately 6.5 months (day 194) and 11 months (day 324) after the first case. At the peak, 65,000 hospitalised cases were estimated in the severe epidemic, 15,000 in the moderate and 5000 in the mild scenario (Figure 2A). The peak in ICU admissions occurred 3-4 days later than the peak in hospital admissions (days 197 and 328). At this time, the number of patients in the ICU amounted to 57,000, 9000 and 2000, respectively (Figure 2B).
HCWs working in an ICU will use 72, 39, or 6 N95s per case per day under the single-use, extended-use and prolonged-use policies, respectively. Workers in the respiratory isolation ward will use 29, 20, or 4 per case per day under these respective policies. Full details are presented in Supplementary Tables 2 and 3.
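Using the per-case-per-day rates above, daily N95 demand for a given hospital census follows directly. The census figures in the usage example below are illustrative assumptions, not the paper's scenario outputs.

```python
# Daily N95 demand by policy, using the per-case-per-day rates quoted above.

N95_PER_CASE_PER_DAY = {
    "single-use":    {"icu": 72, "isolation_ward": 29},
    "extended-use":  {"icu": 39, "isolation_ward": 20},
    "prolonged-use": {"icu": 6,  "isolation_ward": 4},
}

def daily_n95_demand(icu_census, ward_census, policy):
    """Total N95s consumed per day for a given patient census and policy."""
    rates = N95_PER_CASE_PER_DAY[policy]
    return icu_census * rates["icu"] + ward_census * rates["isolation_ward"]

# Illustrative census: 9,000 ICU and 15,000 isolation-ward patients.
for policy in N95_PER_CASE_PER_DAY:
    print(policy, daily_n95_demand(9_000, 15_000, policy))
```

Summing this quantity over the simulated daily census curves yields the cumulative consumption figures reported for each scenario and policy.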
In the moderate scenario with a confirmed-to-suspected case ratio of 1:0, the number of required N95s was 117.1 million, 71.6 million and 12.8 million under the three policies; reductions of 38.9% and 89.1% in final consumption were observed when the extended-use and prolonged-use policies were implemented (Figure 3D and Supplementary Table 4). In the severe scenario, final consumption amounted to 512.8 million, 304 million, and 52.4 million for each policy; the extended- and prolonged-use policies created savings of 40.7% and 89.8% (Figure 3A and Supplementary Table 4). In the mild scenario, a total of 34.3 million, 21.9 million and 4.1 million N95s were estimated to be required for the three policies; the reductions with extended and prolonged use were 36.2% and 88.1% compared to the single-use policy (Figure 3G and Supplementary Table 4).
Under the mild scenario, the peak month of N95 consumption occurs at 11 months, with 10.3 million sets consumed during the peak month under the single-use policy, and reductions of 35.9% for extended use and 87.4% for prolonged use (Supplementary Table 5). This peak occurs later than in the severe scenario (Supplementary Table 5), where peak-month N95 consumption reached 188.1 million.
Compared to the single-use policy, the extended-use policy reduces N95 consumption by 40.6% and the prolonged-use policy by 89.7% (Supplementary Table 5 ).
We found that N95 consumption was sensitive to the ratio of confirmed to suspected cases across the epidemic scenarios. With a ratio of 1:2, 313.4 million, 207 million and 39.9 million N95s were estimated for use under each policy within the moderate scenario. Consumption increased roughly threefold in the severe epidemic scenario and was roughly halved in the mild scenario (Figure 3B, E and H). With a high misclassification rate at a ratio of 1:10, N95 consumption amounted to 1.10 billion, 748.4 million, and 148.2 million in the moderate scenario across the three policies; this was substantially reduced, by 31.8% and 86.5%, under the extended- and prolonged-use policies (Figure 3F and Supplementary Table 4).
In the severe scenario, 2.84 billion, 1.98 billion, and 386 million N95s were estimated to be used, representing 30.4% and 86.4% reductions under the extended- and prolonged-use policies (Figure 3C and Supplementary Table 4). In the mild scenario, N95 consumption was estimated at 623.2 million, 428 million and 85.3 million, with final consumption reduced by 31.3% and 86.3% under the extended- and prolonged-use policies (Figure 3I and Supplementary Table 4).
The peak month of N95 use in the moderate scenario was at 11 months, when N95 demand was 317.5 million under single use, 216.3 million under extended use and 42.8 million under prolonged use. In the severe scenario, with the peak month occurring at seven months, N95 consumption increased to a high of 1.14 billion and 767.6 million under the single-use and extended-use policies (Supplementary Table 5).
We also considered N95 usage in the three epidemic scenarios under different policies with the introduction of rapid testing kits. Rapid diagnostic kits can reduce N95 demand, although N95s are still required. In the severe scenario, a 5-month delay in switching to rapid diagnostic testing results in the usage of 620.6 million N95s with single use, 413.1 million with extended use and 80.1 million with prolonged use. In the same epidemic scenario, if the rapid diagnostic is released within 2 months of the outbreak, N95 usage reduces substantially to 348.1 million, 225.1 million and 42.5 million, respectively (Figure 4A). In the mild and moderate epidemic scenarios, the peaks of N95 use are reduced and delayed (Figure 3 and Supplementary Figure 4). In the moderate scenario, after a 5-month delay in switching to rapid diagnostic testing, 90.7 million, 60.8 million and 11.9 million N95s are required; a further reduction is observed if testing is introduced at 2 months, with 83.0 million, 55.4 million and 10.8 million N95s in demand (Figure 4B). Under the mild epidemic scenario, 5-month and 2-month delays in the introduction of rapid diagnostic testing result in N95 usage of 33.3 million and 28.6 million for single use, 22.4 million and 19.2 million for extended use, and 4.4 million and 3.8 million for prolonged use (Figure 4C).
Optimal N95 use is required for national and global supplies to be sustained, ensuring HCW safety and allowing for proper patient care across all healthcare facilities. We projected N95 usage across a wide range of scenarios; for a moderate epidemic scenario, our projected consumption for the three policies with no misclassification of disease at hospital entry (ratio 1:0) was 117.1 million, 71.6 million and 12.8 million, reductions of 38.9% and 89.1% when the extended- and prolonged-use policies were implemented. These considerable reductions can extend the lifetime of N95 stocks over time and allow for a wider distribution between regions or countries to reduce the stress on demand. When more case misclassification occurs, at a ratio of 1:10, N95 use increased to 1.10 billion, 748.4 million, and 148.2 million, which would be considerably more challenging to obtain and distribute effectively. The introduction at any point of the epidemic of a rapid diagnostic kit that can immediately ascertain whether an individual is SARS-CoV-2 positive is therefore advantageous and should be implemented as soon as possible, provided test accuracy is high. Our approach can be utilised elsewhere when considering the implementation of such extended-use policies. This is especially paramount at this time as the WHO, despite already shipping more than 900,000 surgical masks, 62,000 N95s and 34,000 face shields with other PPE to 133 countries to supplement supplies for those at contingency or crisis capacities, has stated that overall global supplies are rapidly depleting.3 This is further complicated by the complexity of the PPE supply chain landscape. Numerous national PPE stockpiling systems exist, such as the US Strategic National Stockpile, Canada's National Emergency Strategic Stockpile, Australia's National Medical Stockpile and Taiwan's 3-tier stockpile.19
These often require multiple private contractors and vendors to balance procurement and deployment to meet demand, and require prioritization in allocating resources to different hospitals and healthcare centres under the ongoing supply strain. Manufacturers, particularly in China, are also struggling to ramp up production due to the travel restrictions and quarantining procedures in place, which requires other strategies, such as extended use, to be considered.
During the last pandemic (influenza A(H1N1) 2009), Hashikura and Kizu recommended stockpiling a minimum of 8 weeks of PPE supply, with 4 sets utilised per day by high-risk workers, 2 sets for medium- and low-risk groups, 2 surgical masks for each worker and inpatient, and 1 for every suspected case.20 Recommendations from the Ministry of Health, Singapore, are more risk-averse, aiming to maintain a 3- to 6-month stockpile, with individual capacities set for each medical institution.21 The overall stockpile capacity is a function of the renewal rate among workers, the protective robustness of the equipment and the corresponding costs. Based on our findings, the extended-use policy could be utilised within wards where multiple infected patients are being visited simultaneously, before supply chains are stressed, provided infection risk can be mitigated. Should infection become widespread with significant patient load at healthcare facilities, prolonged use may have to be adopted to prevent supply shortages. Where possible, it is recommended that gowns and N95 use be prioritized for high-risk aerosol-generating procedures, including endotracheal intubation and bronchoscopy.11 With 42% of 52 ICU patients requiring invasive mechanical ventilation and 13.5% of this group acquiring the infection in hospital, considerable transmission risk exists.22,23 This risk is further exacerbated by environmental contamination from patients.24 Singapore is currently at the orange level of its Disease Outbreak Response System Condition, which reflects the high risk of case importation and ongoing outbreaks within foreign worker dormitories. Although PPE, including N95s, forms only one component of protective procedures, it remains crucial alongside limiting the number of encounters, the implementation of negative-pressure isolation rooms, physical barriers and the exclusion of non-COVID-19 patients by testing.
Communication with the public remains the first line of defence, with recommendations on appropriate mask usage and prioritization of supplies for HCWs. Public engagement will not be sufficient, however, as infection events will continue to occur; countries facing even greater short- and long-term shortages of N95s could therefore consider the policy findings here for implementation.25 As with all modelling studies, many assumptions were made. Firstly, the SEIR model did not account for heterogeneities in infection or symptomatic rates arising from factors such as age structure. We also did not consider the effects of ongoing control measures, primarily social distancing, on the number of infections through time; Singapore's community outbreak has been kept relatively small, making estimation of the controls' effects very challenging. Secondly, parameters such as the ratio of confirmed to suspected cases remain largely unknown. Thirdly, the number of N95s used assumes that all healthcare workers adhere to the guidelines, not accounting for accidents or personal preferences, which may inflate the requirements. It is furthermore unknown whether reuse will increase the risk of infection among HCWs due to mask degradation. Fourthly, hospital procedures may change such that fewer workers stay longer in N95s within the respiratory isolation wards so as to avert N95 use. Lastly, the proportions allocated to each level here may change according to specifications set out by policymakers should resources become more limited.
Depending on the epidemic size, an extended-use policy should be considered where short-term supply chains are strained but planning measures are in place to ensure long-term availability. Where severe shortages are expected from a severe epidemic, prolonged use can be proposed as a necessary policy to significantly extend supply. Should infection become widespread with significant patient load at healthcare facilities, extensive reuse, via the extended-use and prolonged-use policies, may also have to be utilised to prevent supply shortages.
PPE, personal protective equipment; HCW, healthcare worker; WHO, World Health Organisation; SEIR model, Susceptible-Exposed-Infectious-Recovered model; ICU, intensive care unit.
Risk Management and Healthcare Policy 2020:13
Tight-fitting respirator facemasks such as N95 or FFP3 masks are considered to be the gold-standard respiratory protective equipment (RPE) for healthcare workers (HCWs) working in aerosol-generating procedure (AGP) environments involving COVID-19.1
Optimal use of facemasks depends on a tight seal with the wearer's skin, assessed via RPE fit testing. Prior to working in AGP environments, HCWs must undergo and pass a mandatory RPE fit test, conducted either as a qualitative fit test (QFT) or a quantitative fit test (QNFT). QFT is based on subjective assessment of facemask seal competency through detection of a test agent, usually sensed as a bitter or sweet taste, or a smell, by the wearer, whereas QNFT gives an objective measure of face fit by providing a numerical measure of seal competency.
The fit test is conducted by a certified fit tester, and passing it depends on the type of respirator mask tested.
In the event of a shortage of successfully fit-tested facemasks, HCWs must undergo repeat fit testing with other types of available facemasks.
Evidence suggests that facial hair reduces the efficacy of tight-fitting respirator facemasks, with protection worsening as facial hair lengthens [2] . Conventional fit testing in the presence of facial hair has been shown to have a high test-failure rate [3, 4] . Hence, in line with the available evidence, facemask manufacturers' fit-testing guidance recommends that wearers be clean-shaven to enable a good seal of the FFP3 mask over the face, covering the nose and the mouth.
Individuals unable to shave due to personal or religious reasons are recommended to utilise alternatives such as Powered Air Purifying Respirators (PAPR) [2] . However, these alternatives are expensive, limited in supply and cumbersome to use [5].
They do not allow for the fitting of surgical loupes; dentists are unable to perform all the procedures in their repertoire; re-deployment to non-AGP areas incurs a loss of skill-sets and a need for retraining; and junior doctors have their training impacted, with concerns about inadequate and prolonged training.
Therefore, this alternative PPE option may not be ideal for individuals affected by the above-mentioned factors.