# Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

Edited by Martin A. Nowak, Harvard University, Cambridge, MA, and accepted by Editorial Board Member Rakesh K. Jain June 15, 2017 (received for review February 27, 2017)

## Significance

A successful cancer therapy induces a strong antitumor response while causing minimal side effects. The heterogeneous nature of cancer observed across different regions of the primary tumor, across metastatic sites, across time, and across patients makes designing such a successful therapy challenging. Both standard of care and finely tailored treatment protocols run the risk of not exhibiting a robust antitumor response in the face of these uncertainties. Here we introduce a platform for exploring this robustness question using treatment response data from a sample population. Our method integrates these experimental data with statistical and mathematical techniques, allowing us to quantify therapeutic robustness. Using this approach, we identified a robust therapeutic protocol that combines oncolytic viruses with an immunotherapeutic vaccine.

## Abstract

Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (*i*) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (*ii*) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists.

Heterogeneity is a defining feature of cancer (1, 2). Interpatient heterogeneity manifests clinically in variable disease progression and treatment response between patients with the same diagnosis, whereas intrapatient heterogeneity describes variations that exist between tumor cells in a single patient. Intrapatient heterogeneity can be broken down further into intratumor heterogeneity, intrametastatic heterogeneity, intermetastatic heterogeneity, and temporal heterogeneity (2, 3). Intratumor heterogeneity is evident through the presence of multiple genetic subclones within a primary tumor (2), which have even been shown to exist in spatially distinct regions of the primary tumor (4). Intrametastatic heterogeneity is similar to intratumor heterogeneity, but describes heterogeneity within a single metastatic lesion instead of within the primary tumor (2). Intermetastatic heterogeneity, on the other hand, describes variations in subclones between different metastases in the same patient (2). Finally, temporal heterogeneity is defined as changes that take place in the tumor over time, whether they are a result of genomic instability, natural selection, non-Darwinian evolution, or selective pressures imposed by treatment (4–6). Note that, in each case, heterogeneity need not be genetic but may also be epigenetic, phenotypic, or microenvironmental (2, 7, 8).

For decades, cancer patients have been treated using standard of care, meaning they receive the best known treatment that has been deemed as efficacious and safe in epidemiological studies (9). However, in the face of such interpatient and intrapatient heterogeneity, standard of care fails to induce a strong antitumor response in some patients, and often loses its efficacy with time (7, 9). The notion of personalized medicine has emerged as an alternative to standard therapy, and holds the promise of improving clinical care by using patient-specific data to tailor treatment protocols (5, 7, 9–11). The potential benefits of this approach are multifaceted, and include the expectation of a strong patient response with a minimal toxicity profile (5, 9, 10).

As heterogeneity presents challenges for standard of care therapy, it also poses challenges for personalized therapy. If a therapeutic protocol is individually tailored based on measurements from one region of a patient’s tumor at a particular pretreatment time point, it is nontrivial to determine how regions of the primary tumor distinct from the biopsy region, not to mention metastatic lesions, will respond to that protocol. Given that both standard of care and personalized cancer treatment protocols must confront the challenge of high levels of variability between and within patients, the design of robust therapeutic protocols is of the utmost importance. A robust treatment should result in the same qualitative response despite a reasonable level of uncertainty (12). In other words, it would be expected to exhibit an antitumor response across a large fraction of patients, and would be more likely to be effective across the range of spatial and temporal variations observed within an individual patient. In this work, we propose to study treatment robustness through a technique that we call Virtual Expansion of Populations for Analyzing Robustness of Therapies (VEPART, for short). VEPART provides a platform for assessing the robustness of treatment protocols by coupling experimental data with statistical techniques and mathematical modeling.

As a methodology, VEPART is distinct from important efforts to analyze dynamical system robustness to perturbations, whether they be perturbations in parameter values, initial conditions, or the functional forms used in the mathematical model (12). Of these forms of robustness for dynamical systems, the parameter robustness problem is the most well studied; see, for instance, refs. 12–16. Parameter robustness in a mathematical model has important consequences, as it supports the claim that conclusions drawn from the model can be trusted in a real-world setting where noise is inevitable. Although not as well understood, others have considered the questions of robustness to change in initial conditions (17, 18), and to change in the dynamical functions (19–21). These considerations are also of paramount importance when assessing the reliability of model predictions.

Comparatively, VEPART is a data-driven approach for studying robustness to an external control, such as a treatment protocol. The starting ingredients for the VEPART method are time course data for multiple individuals in a sample population, together with a parametric mathematical model that has been validated by fitting to aggregate data from this population. VEPART proceeds by amplifying the sample population using nonparametric statistical techniques, eventually resulting in the generation of a large number of “virtual populations.” These virtual populations, each defined by a parameterization of the mathematical model, can be thought of as statistically plausible subsets of the full patient population.

Virtual populations have been used by others, particularly in the field of quantitative systems pharmacology (16, 22–30). Overwhelmingly (see, for instance, refs. 16, 23, 27, and 28), these virtual populations have been used to tackle the previously mentioned challenge of uncertainty in model parameters. From this perspective, patients in a virtual population can be used to understand how differences in disease manifestation occur, and to propose mechanisms for variable drug response. As an example, virtual populations have been used to demonstrate the clinical importance of accounting for the correlated expression of different metabolic enzymes (26), and to provide mechanistic explanations for variations in response to a drug for rheumatoid arthritis (27).

In some cases, virtual populations have been exploited to make therapeutic design recommendations. For instance, Wang et al. (24) describe the use of virtual populations to select a dose for a phase 2b clinical trial, and Valitalo et al. (30) used virtual populations to propose a revised dosing guideline for the use of two antibiotics in neonates. Another particularly relevant example is the work of Kansal and Trimmer (22), which discusses how virtual populations (designed through weighting of virtual patients) can be used to optimize a target outcome across virtual populations. No details on how to practically achieve such a goal are given in ref. 22, although a case study is mentioned for using virtual populations to select clinical trial endpoints and to optimize clinical trial design.

These virtual population methods require dozens to thousands of samples to create virtual populations that are representative of the target population (23, 26, 28–30). On the other hand, the VEPART procedure proposed herein works with very small amounts of data, and “expands” the available data in an effort to model the heterogeneity expected in the full population. Not only can VEPART create virtual populations despite limited amounts of data, it can also handle the situation in which the data come from different patients and experimental trials. This ability is in contrast to many existing virtual population approaches in which all variables measured come from the same set of individuals.

In this paper, we describe how VEPART also goes beyond the standard use of virtual populations for studying parametric uncertainty, and instead utilizes these virtual populations to assist in designing therapeutic protocols. In particular, we will systematically detail how the VEPART procedure (summarized in Fig. 1) moves from a small amount of data to a virtual population pool to a therapeutic robustness analysis. This process requires the identification of the optimal protocol (possibly among a list of clinically viable options) for each of the virtual populations, followed by an analysis of responses to the optimal protocol(s) across the virtual populations.

## Case Study: Immunoenhanced Oncolytic Viruses with Dendritic Cell Vaccines

As a case study, we have applied this methodology to the experimental data of Huang et al. (31) on a mouse model of melanoma treated with oncolytic viruses (OVs) and dendritic cell (DC) vaccines. OVs are standard viruses genetically engineered to selectively replicate in cancer cells. Upon integration into a tumor cell, OVs replicate and, as part of their normal life cycle, lyse the cell (32). Lysis results in the release of more OVs that can infect additional tumor cells, spreading an infection throughout the tumor that, in theory, results in tumor regression while sparing normal cells. These OVs can be further genetically enhanced to act as a vector for delivering therapeutic genes to the tumor site (32). In ref. 31, an oncolytic virus, the adenovirus (Ad) in particular, is engineered to deliver genes that boost the immune system’s ability to identify, target, and kill cancer cells. The transgenes of interest in this study are 4-1BB ligand (4-1BBL) and interleukin (IL)-12; 4-1BBL is a costimulatory molecule expressed on antigen presenting cells. Binding of 4-1BBL to its receptor, 4-1BB, promotes the development and expansion of type-1 T helper cells and cytolytic effector T cells (31). IL-12 is a cytokine that strongly promotes the differentiation of naïve CD4^{+} T cells to type-1 T helper cells (33).

Huang et al. (31) have demonstrated that oncolytic adenoviruses can successfully infect cancer cells, locally deliver therapeutic genes, and cause tumor regression in their mouse model (Fig. 2*A*). Further, the therapy has been shown to have minimal side effects, as lysis and high expression of therapeutic genes were restricted to cancer cells (31). The efficacy of this treatment protocol has been shown to be enhanced by the use of DC vaccines (Fig. 2*B*). DCs are antigen-presenting cells that stimulate naïve T cells and generate memory T cells (31). Huang et al. (31) harvested these DCs from the bone marrow of mice, pulsed them ex vivo with tumor-associated antigens, and delivered them intratumorally to melanoma-bearing mice. This treatment exhibited antitumor activity in isolation, and potent antitumor effects in combination with oncolytic adenoviruses carrying 4-1BBL and IL-12 (31), which we will refer to as Ad/4-1BBL/IL-12 for brevity (Fig. 2*B*).

We have previously developed a mathematical model that describes well the data from Huang et al. (31) in all experimentally explored situations (Fig. 2): treatment with only OVs, OVs enhanced with one or more immunostimulatory molecules (4-1BBL, IL-12, or both), DC vaccines, and DC vaccines coupled with Ad/4-1BBL/IL-12 (34, 35). In this prior work, we used the best-fit parameters to address the question of the optimal treatment ordering when administering three doses of Ad/4-1BBL/IL-12 with three doses of the DC vaccine. We predicted that exactly one strategy, front loading the cytokine-bearing OVs (OV–OV–OV–DC–DC–DC), resulted in tumor eradication (35). However, upon further exploration, we found that the model displayed some unintuitive responses to altering the dosing order. For instance, the ordering of the protocols from maximal to minimal tumor response displayed no discernible pattern (35). A striking example of this phenomenon was when the optimal protocol became the worst-case scenario by simply moving the DC dose on the last day of treatment to day one (DC–OV–OV–OV–DC–DC). This extreme sensitivity to dosing order could be explained by our finding that the doses of Ad/4-1BBL/IL-12 and DCs used in the experiments of Huang et al. (31) were near a bifurcation point in our mathematical model. As a result, slightly altering the dose or sequence drastically changed the efficacy of the protocol (35).

Herein, we aim to gain insight into the unintuitive behavior observed in our single-population optimization study by performing a robustness analysis using VEPART. Specifically, we generated a large number of bootstrap replicates by sampling with replacement the experimental data in Huang et al. (31). As detailed in *Computational Methods*, pseudorandom sampling of the posterior distributions for the fit parameters approximated from these bootstrap replicates was used to generate 1,000 virtual populations. The procedure for finding the optimal dosing sequence (among a subset of protocols) was repeated for each virtual population. This process allows us to determine the probability that different protocols will successfully result in tumor eradication, and further allows us to compute the likelihood that a particular sequencing of drugs will be optimal across the virtual populations. We undertook this analysis in three different regions of dosing space: (*i*) at the experimentally used dose (31), (*ii*) at a 50% higher dose of Ad/4-1BBL/IL-12 but a 50% lower dose of DC than used in ref. 31, and (*iii*) at a 50% lower dose of Ad/4-1BBL/IL-12 but a 50% higher dose of DC than used in ref. 31.
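
The virtual-expansion step described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' code: the per-mouse tumor-volume time courses, the sample size, and the measurement grid are all hypothetical, and the model-fitting step that turns each bootstrap replicate into a parameterized virtual population is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample population: tumor-volume time courses for 8 mice
# (rows = mice, columns = measurement days). Values are illustrative only.
sample = rng.lognormal(mean=4.0, sigma=0.3, size=(8, 10))

def bootstrap_replicates(data, n_reps=1000, rng=rng):
    """Resample individuals with replacement to form bootstrap replicates,
    returning the aggregate (mean) time course of each replicate."""
    n = data.shape[0]
    idx = rng.integers(0, n, size=(n_reps, n))
    return data[idx].mean(axis=1)          # shape: (n_reps, n_timepoints)

aggregates = bootstrap_replicates(sample)

# In VEPART, each aggregate curve would next be fit to the ODE model; the
# fitted parameters approximate posterior distributions, from which virtual
# populations are drawn. Here we show only the resampling step.
print(aggregates.shape)  # (1000, 10)
```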

This robustness analysis uncovered something unexpected: Every scheduling variant of the experimentally used treatment protocol that involves administering three doses of Ad/4-1BBL/IL-12 and DCs is fragile (nonrobust). Specifically, the protocol determined to be optimal at the experimentally used dose (OV–OV–OV–DC–DC–DC) only results in tumor eradication in 30% of the virtual populations. All suboptimal protocols are even less effective. The regime of higher OV and lower DC dose also proved to be fragile, as the optimal protocol leads to tumor eradication in only 43% of the virtual populations. By contrast, a robust optimal protocol exists when treating with higher doses of DCs and lower doses of OVs than those used by Huang et al. (31). In this dosing regime, the protocol of front loading the DCs (DC–DC–DC–OV–OV–OV) not only proved to be optimal in all virtual populations but also resulted in tumor eradication in more than 84% of those virtual populations. This application serves as proof of concept that computational robustness studies, underpinned by experimental data, can aid in identifying treatment protocols predicted to have strong antitumor properties despite interpatient (and intrapatient) heterogeneity.

## Results

The full system of six differential equations described in Eqs. **1**–**6** includes 14 parameters. Previously, we found that 7 of the 14 parameters (viral production rate, infected cell lysis rate, viral decay rate, naïve and cytotoxic T-cell decay rates, base T-cell killing rate, and T-cell differentiation rate) could be reasonably approximated from the literature (Table S1), whereas the other 7 could not. Therefore, it is these 7 parameters that were fit to give the best-fit solution curves to the experimental data, as shown in Fig. 2.
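
Fitting a subset of parameters while holding the rest at literature values can be illustrated with a stand-in model. The logistic growth law, the fixed carrying capacity `K_FIXED`, and the synthetic data below are all hypothetical stand-ins; the actual model is the six-equation system of Eqs. **1**–**6** with 7 fixed and 7 fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Stand-in model: logistic tumor growth dV/dt = r*V*(1 - V/K), where
# K plays the role of a literature-derived (fixed) parameter and
# r plays the role of a parameter that must be fit to data.
K_FIXED = 1e4  # assumed fixed "from the literature" for this sketch

def simulate(r, t_eval, v0=50.0):
    sol = solve_ivp(lambda t, v: r * v * (1 - v / K_FIXED),
                    (t_eval[0], t_eval[-1]), [v0], t_eval=t_eval)
    return sol.y[0]

# Synthetic "experimental" aggregate data generated at r = 0.4 with noise
t = np.linspace(0, 20, 15)
data = simulate(0.4, t) * np.exp(np.random.default_rng(1).normal(0, 0.02, t.size))

# Fit only the free parameter r, holding K fixed, by least squares
fit = least_squares(lambda p: simulate(p[0], t) - data, x0=[0.1],
                    bounds=([1e-3], [2.0]))
print(fit.x[0])  # recovered growth rate, close to 0.4
```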

The best-fit parameters for the full model (Ad/4-1BBL/IL-12 + DCs), determined using the hierarchical scheme detailed in *Computational Methods*, are found in Table S2 in *SI Results*. For reference, Table S2 also compares the best-fit parameters found herein to those previously found using a different algorithm and a slightly different metric of fit (35). Finally, Table S2 includes the 95% credible interval for each fit parameter, as calculated from the approximated posterior distributions (Fig. S1). These 95% credible intervals are compared to the results of a local sensitivity analysis, as shown in Figs. S2 and S3 of *SI Results*.

### Robustness at Intermediate OV and DC Dose.

Here we focus on identifying and classifying the response and robustness to 20 treatment protocols (all possible combinations of Ad/4-1BBL/IL-12 with DCs given at three doses apiece, separated by 1 d) at the experimentally used doses of Ad/4-1BBL/IL-12 and DCs (31).

To this end, we generate 1,000 virtual populations by pseudorandomly sampling the posterior distributions on the fit parameters (Fig. S1), as described in *Computational Methods*. These virtual populations are alternative parameterizations to the model in Eqs. **1**–**6** that, when pooled together, in theory capture the underlying heterogeneity in the full patient population (16). The 20 treatment orderings considered in ref. 35 were administered to each of these virtual populations, allowing us to determine a “population-level” response to drug ordering at the intermediate OV/DC dose (Fig. 3).
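
The 20 orderings arise combinatorially: they are the C(6, 3) = 20 ways of assigning three OV doses and three DC doses to six consecutive treatment days. A minimal sketch enumerates them (the day count and dose labels follow the protocols described in the text; the helper name is our own):

```python
from itertools import combinations

# Enumerate all orderings of three OV doses and three DC doses over six
# consecutive treatment days: choose which 3 of the 6 days receive OV.
def all_protocols(n_days=6, n_ov=3):
    protocols = []
    for ov_days in combinations(range(n_days), n_ov):
        protocols.append(tuple("OV" if d in ov_days else "DC"
                               for d in range(n_days)))
    return protocols

protocols = all_protocols()
print(len(protocols))           # 20 = C(6, 3)
print("-".join(protocols[0]))   # OV-OV-OV-DC-DC-DC
print("-".join(protocols[-1]))  # DC-DC-DC-OV-OV-OV
```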

Fig. 3*A* demonstrates the extent to which the virtual populations respond differently to the same treatment protocol. For a fixed ordering of the drugs (specified on the vertical axis of Fig. 3*A*), the rank achieved varies widely from one virtual population to another; indeed, 90% of the protocols can rank in both the top five and the bottom five positions, depending on the virtual population considered (Fig. 3*A*).

Another indication of the nonrobust (fragile) response across virtual populations at this dose is that, on average, each of the 20 treatment protocols achieves over 17 different rankings. As an example, the previously found optimal protocol of OV–OV–OV–DC–DC–DC achieves every rank between 1 and 20, with the exception of rank 13 (Fig. 3*A*). This protocol is most likely to rank in the top position, as it ranks in position 1 for 72.2% of the virtual populations (Fig. 3*A*). However, the next most likely ranking for this protocol is in the last position, as it ranks in position 20 for 13.8% of the virtual populations (Fig. 3*A*). The fact that the same protocol is the best case for some virtual populations and yet is the worst case for others strongly suggests the originally identified optimal protocol is not robust.

Because there was minimal support for a given treatment protocol to achieve a consistent ranking across the virtual populations, we then sought to address a simpler question: Do certain protocols result in tumor eradication in a large fraction of the virtual populations, independent of where they appear on the rank-ordered list of schedules? Consistent with the lack of a robust response at this dose, we found no protocols that lead to eradication in a majority of the virtual populations. Instead, we observed that the treatment that ranked as the top protocol most frequently (OV–OV–OV–DC–DC–DC) only results in tumor eradication in 30% of the virtual populations (Fig. 3*B*). All other protocols have a lower likelihood of tumor eradication, lending further evidence to the claim that the tumor eradication response in this dosing regime is fragile.
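
The population-level summaries used in this analysis (the eradication fraction per protocol, and the rank each protocol attains in each virtual population) can be computed as follows. The final tumor volumes and the eradication threshold in this sketch are synthetic placeholders, not outputs of the actual model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical final tumor volumes: rows = 1000 virtual populations,
# columns = 20 treatment protocols (illustrative random data only).
final_volumes = rng.lognormal(mean=2.0, sigma=1.5, size=(1000, 20))

ERADICATION_THRESHOLD = 1.0  # assumed cutoff for "tumor eradicated"

# Fraction of virtual populations eradicated by each protocol
eradication_fraction = (final_volumes < ERADICATION_THRESHOLD).mean(axis=0)

# Rank of each protocol within each virtual population
# (1 = best, i.e. smallest final tumor volume)
ranks = final_volumes.argsort(axis=1).argsort(axis=1) + 1

# How many distinct ranks does each protocol achieve across populations?
distinct_ranks = [np.unique(ranks[:, j]).size for j in range(20)]
print(eradication_fraction.shape, ranks.shape)
```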

Digging more deeply into the binary classification of treatment protocols, we found that treatment response is more closely tied to the virtual population under consideration than to the treatment protocol itself (order in which the doses are given). By sorting our data by virtual population (instead of treatment protocol, as done in Fig. 3), we find that just over 76% of the virtual populations have the same binary response (tumor eradication or tumor escape) to all 20 treatment protocols tested. In particular, our simulations reveal that tumor escape will occur independent of the treatment protocol used in just over 66% of the virtual populations. Further, tumor eradication will occur independent of the treatment protocol in 10% of the virtual populations. Because the majority of virtual populations have a binary response (eradication or escape) independent of the dosing order, we learn that the parameters in the virtual population, and not the treatment order itself, are really driving the predictions in the intermediate OV/DC region of dosing space.

To conclude, in this region of dosing space, treatment response can vary significantly across virtual populations. None of the treatment protocols, including the optimal one of front loading the OVs, robustly result in tumor eradication across the virtual populations. The situation only gets messier when we consider the frequency with which each protocol achieves different rankings (from top protocol to worst protocol). Finally, the observation that treatment response is driven by the parameters that describe a virtual population, rather than the dosing order itself, further suggests that optimality predictions at this dosage may be of limited value. Because the population-level eradication response is fragile, we conclude that administering three doses of Ad/4-1BBL/IL-12 with three doses of DCs at this intermediate dose cannot be recommended as a robust treatment strategy.

### Robustness at Higher OV and Lower DC Dose.

Because the intermediate OV/DC dose was found to be fragile, we next sought to analyze the robustness of the 20 protocols at a 50% higher OV dose and a 50% lower DC dose, as detailed in *Computational Methods*.

Similar to what was observed in the intermediate OV/DC dose, no treatment protocols result in a robust eradication response across the virtual populations (Fig. 4*A*). Although the protocol of front loading OVs consistently ranks as the top treatment across all virtual populations (Fig. 4*B*), the likelihood of tumor eradication using this protocol is only 43%. This outperforms the same protocol (which was also optimal) in the intermediate OV/DC dosing regime by 13%, but it still leaves over half of the virtual populations unsuccessfully treated. Further, all other protocols have a lower likelihood of eradication (with OV–OV–DC–OV–DC–DC being the only other one to cross the 33% threshold), suggesting that the tumor eradication response in this dosing regime is fragile.

That said, some responses in this dosing regime are robust, just not when it comes to the desired outcome of tumor eradication. Sixty percent of the protocols result in tumor escape in more than 90% of the virtual populations, and front loading the DCs results in treatment failure in 100% of the virtual populations (Fig. 4*A*). Further, unlike in the intermediate OV/DC regime, there is little variation in the rank each treatment protocol can achieve across the virtual populations (Fig. 4*B*). This robust ranking can be explained by the fact that tumor response at this dosage is driven by the protocol itself, and not by the particular parameters in a virtual population. So, although several robust treatment protocols have been identified in this dosing regime, it is the undesirable tumor escape response that is robust. Therefore, in the high OV/low DC case, we predict that no treatment protocol will have robust antitumor activity in the face of interpatient and intrapatient heterogeneity.

### Robustness at Lower OV and Higher DC Dose.

We next sought to explore whether a robust tumor eradication response exists for any of the 20 protocols when treating at a 50% lower OV dose and a 50% higher DC dose.

We began by exploring whether any dosing protocols result in the eradication of a large fraction of the virtual populations. We find that 30% of the treatment protocols lead to tumor eradication in at least half of the virtual populations (Fig. 5*A*). This finding stands in stark contrast to both the intermediate OV/DC dose and the high OV/low DC dose. In those cases, not one protocol could result in tumor eradication in half of the virtual populations (Figs. 3*B* and 4*A*).

As our intention is to find protocols that result in a robust eradication response across virtual populations, we next narrowed our focus to protocols that cause tumor eradication in 70% or more of the virtual populations. Three protocols were found that satisfy this criterion, and they share one common feature: The first two doses are always DCs, meaning the protocol is of the form DC–DC–X–X–X–X. Further, the earlier the third DC dose was given in the treatment protocol, the better the protocol performed across the virtual populations. Front loading with DCs in the DC–DC–DC–OV–OV–OV protocol led to tumor eradication in 84.2% of the virtual populations (Fig. 5*A*). Moving the third DC dose to later in the schedule decreases the likelihood of tumor eradication by the protocol: A DC dose at day 4 results in 76.9% eradication, one at day 5 results in 70.2% eradication, and one at day 6 results in 61.9% eradication.

Not only do we see a robust eradication response for protocols of the form DC–DC–X–X–X–X in this dosing regime, we also find that the ranking of the protocols across the virtual populations is remarkably well preserved (Fig. 5*B*). The most striking example is that the schedule of front loading with DCs ranks as the top protocol in all of the virtual populations. As can be calculated from the data in Fig. 5*B*, each treatment protocol, on average, will only rank in 4 of the 20 positions, and, overwhelmingly, the achievable ranks are consecutive (e.g., DC–DC–OV–OV–DC–OV most commonly ranks at position 3 but can also rank at positions 2 and 4). This finding is in stark contrast to the intermediate OV/DC dosing regime, in which each protocol ranks in 17 of 20 positions, on average. The lack of variation in the rank a treatment protocol can achieve across the virtual populations in the case of low OV/high DC further supports our claim that treatment protocols of the form DC–DC–X–X–X–X have a robust tumor eradication response in this dosing regime.

Although of less interest, a pattern also emerges among those protocols that administer Ad/4-1BBL/IL-12 on day 1. All of these protocols result in a robust escape response, with escape occurring in two thirds or more of the virtual populations (Fig. 5*A*). We observe that protocols that give OVs on the first 2 d perform very poorly, resulting in tumor eradication in less than 14% of the virtual populations (Fig. 5*A*). Further, with the exception of one protocol (OV–DC–DC–DC–OV–OV), all protocols that administer an OV on day 1 have a lower probability of tumor eradication than protocols that administer a DC on day 1. To understand why this response occurs, note that, in this regime, the dose of OV is 50% below the dose deemed to have a strong tumor-killing effect with a reasonable toxicity profile in the experimental system. Starting a treatment protocol with a relatively ineffective dose of OV increases the likelihood of tumor escape. On the other hand, the optimal protocol administers high doses of DCs early in treatment, and these high doses can compensate for the relatively low dose of OV administered. Although protocols that front-load OVs would be avoided due to limited antitumor efficacy, this finding further illustrates the robust behavior of several of the treatment protocols in this region of dosing space.

## SI Results

### Analysis of Best-Fit Parameters: 95% Credible Intervals.

To better assess parameter variability across the full population, in the first step of VEPART, we use bootstrapping (63) to form 1,000 new samples (“bootstrap replicates”) that are of the same size as the original sample population. The aggregate data in each of these bootstrap replicates are fit to the mathematical model in Eqs. **1**–**6**, giving a parametric description of each simulated cohort. Note that this process was repeated for each data set in Fig. 2, allowing us to approximate the posterior distributions on the parameters (Fig. S1) while fitting to a minimal subset of the parameters. From these posterior distributions (Fig. S1), we then calculate the 95% credible interval for each of the fit parameters (Table S2).

We find that the fit parameters can be divided into two classes. In the first class, we have the parameter values with relatively tight 95% credible intervals, meaning points in that interval do not vary by even an order of magnitude from the best-fit parameter value determined by fitting to the original dataset. The second class includes those parameters for which the 95% credible interval contains points that vary by orders of magnitude in comparison with the best-fit parameter value.
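
Given posterior samples for a parameter (e.g., one best-fit value per bootstrap replicate), the equal-tailed 95% credible interval and the "tight vs. not tight" classification reduce to percentile computations. The sample distribution below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior samples for one fit parameter, e.g. obtained by
# refitting the model to each of 1,000 bootstrap replicates.
samples = rng.lognormal(mean=0.0, sigma=0.4, size=1000)

# Equal-tailed 95% credible interval from the 2.5th and 97.5th percentiles
lo, hi = np.percentile(samples, [2.5, 97.5])

# "Tight" in the sense used here: the interval endpoints stay within one
# order of magnitude of the point-estimate (best-fit) value.
best_fit = np.median(samples)
tight = (hi / best_fit < 10) and (best_fit / lo < 10)
print(lo, hi, tight)
```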

The fit parameters with tight credible intervals include the net tumor growth rate (Table S2).

The other class of parameters can be classified as not having a tight credible interval. These parameters include the 4-1BBL-induced T-cell recruitment rate (Table S2).

A possible interpretation of these results is that, for parameters with a tight credible interval (those that are tightly regulated in the population), the model is sensitive to the value of these parameters. Similarly, we could expect that the model is less sensitive to parameters with nontight credible intervals. These hypotheses have been confirmed through a local sensitivity analysis, as detailed below.

### Analyzing Best-Fit Parameters: Local Sensitivity Analysis.

To explore the hypothesis that model fit to the data is sensitive to the parameters with tight credible intervals, we performed a local sensitivity analysis on the corresponding model to identify all points within parameter space that give a fit within 10% of the optimal. The results for the parameters with tight credible intervals are shown in Fig. S2. This analysis confirms our hypotheses: Model fit is quite sensitive to parameters with tight credible intervals (Fig. S2 *A*–*D*).

Taken together, the 95% credible intervals and the sensitivity analysis imply that we can have high confidence in the optimal values of the parameters with tight credible intervals.

As we did with the parameters with a tight credible interval, we also performed a local sensitivity analysis on the parameters without a tight credible interval. These three parameters happen to be simultaneously fit in the Ad/4-1BBL/IL-12 model (model 3). Therefore, this analysis will allow us to determine not only sensitivity to each of these parameters but also correlations between the parameters (Fig. S3). Correlations between these parameters would not be surprising, as each of them impacts the killing of cancer cells by cytotoxic T cells. Further, correlations between the parameters could partially explain why the 95% credible interval is so large for these parameters, as one parameter could compensate for another and still give a good fit to the data.

Among these three parameters, we find that the model is least sensitive to one (Fig. S3 *A*) and, conversely, more sensitive to the other two.

## Discussion

In this work, we introduced the VEPART platform for assessing the robustness of treatment protocols. VEPART implementation begins with an experimental dataset describing the temporal evolution of some variable (for instance, tumor volume) in response to treatment in a sample population. A mathematical model (for instance, a system of differential equations) is then developed and fit to the experimental data. Once a reasonable mathematical model is attained, nonparametric statistical techniques are used to amplify the sample population and generate a large number of virtual populations. One benefit of this approach is that a large sample size, which can be difficult to attain experimentally, is not required to create these virtual populations. At this point, treatment optimization can be performed for each virtual population.

Although optimization is restricted to a set of clinically realizable protocols in this work, once the virtual populations have been created, classical tools from optimal control theory could also be applied (36, 37). An excellent overview of applying optimal control theory to cancer chemotherapy planning can be found in ref. 38. The use of optimal control in the context of the VEPART method would allow a more general optimal protocol to be identified, where “optimal” could be defined in many ways; for instance, one could seek to minimize tumor volume at an endpoint, minimize tumor volume over a time horizon, minimize drug concentration, minimize toxicity, minimize some weighted average of the above, etc. (37–45). Further, this optimization can be performed subject to various types of constraints; for instance, toxicity could be introduced as a constraint on the amount of drug that can be administered (37, 46). These approaches can be used to find the optimal schedule for a single therapeutic agent, or for a mixture of drugs as done in refs. 43 and 47. Further, from any such optimal control problem, one could perform a sensitivity or elasticity analysis (similar to the one performed herein; see *SI Results*) to quantify how parametric changes influence the quality of an optimal treatment protocol, as in ref. 48.

A variety of solution methods exist for solving these optimal control problems (38). Analytical expressions for the optimal control can be obtained for simple models (see, e.g., ref. 49), and, when the model is too complex to obtain an analytic expression, approximation techniques can be used (see, for instance, ref. 43). When the optimal solution is computationally intractable, heuristic optimization algorithms, including genetic algorithms and simulated annealing, can be used, as done in refs. 42, 47, and 50. Independent of how the optimization is carried out, an analysis of the optimal protocols across virtual populations using the VEPART method allows for the quantification of protocol robustness.

### Related Work: Robust Cancer Therapies.

In the cancer literature, examples of analogous efforts for designing robust treatment protocols can be found in radiotherapy (51–54). The challenges in designing a robust radiation treatment protocol can largely be attributed to geometric uncertainty and interpatient variability (54). Several studies have addressed the challenges posed by geometric uncertainty, including factors such as organ motion, variations in treatment setup, patient positioning errors, and fluctuations in machine output (54). For instance, Liu et al. (52) used “worst-case robust optimization” to identify intensity modulated proton therapy (IMPT) plans that are robust to uncertainties in the beamlet range and equipment setup. For the same beam arrangement, they computed a different dose distribution accounting for these uncertainties and compared this to the optimal dose distribution. They applied this approach to the lung, skull base, and prostate, and found that, compared with IMPT plans optimized using conventional (nonrobust) methods, their method resulted in radiotherapy plans that are less sensitive to beamlet range and setup uncertainties (52). In a series of papers, robust optimization was used for planning intensity modulated radiation therapy for lung cancer in the face of uncertainty caused by breathing (51, 55, 56). This work introduces a data-driven model of uncertainty to describe variations in breathing motion and allows the authors to determine the trade-off between ensuring the tumor receives sufficient radiation and minimizing the dose to normal tissue (51, 55, 56).

On the other hand, the work by Leder and colleagues (54) was the first to consider the challenge that interpatient variability poses to designing a robust radiotherapeutic protocol. They were particularly concerned about the potential risk that the optimal protocol (determined using the linear–quadratic model) may be weakly efficacious for some individuals, or may cause high toxicity to at-risk organs for others. To address this robustness concern, the authors used a stochastic optimization scheme in which the unknown parameters are assumed to be random variables with known distributions. The goal is to choose a dosing schedule (total dose, number of fractions, dose per fraction, and treatment duration) for which the objective function attains a high level with a given probability, and the imposed constraints are also satisfied with a given probability. The authors found some important differences between the optimal schedule that accounts for parameter uncertainty and the optimal schedule found using the nominal parameter values in the absence of uncertainty (54). Our work is motivated by a robustness question similar to that considered by Leder and colleagues in ref. 54, although not restricted to radiotherapy.

### Case Study: Immunoenhanced OVs with DC Vaccines.

As a proof of concept, we applied VEPART to explore the robustness of treatment protocols that combine three doses of immunostimulatory OVs with three DC injections; to accomplish this, we worked with the experimental data from Huang et al. (31). The nature of this dataset previously led us to use a hierarchical method for fitting the parameter values (*Computational Methods*), which posed a challenge for directly fitting parameters for each of the virtual populations. To overcome this challenge, we began by bootstrapping the experimental data and using the best-fit parameters in each bootstrap replicate to approximate the posterior probability distribution for the parameters in our mathematical model (Eqs. **1**–**6**). Pseudorandom sampling of these distributions, as detailed in *Computational Methods*, allowed for the creation of our 1,000 virtual populations.

In this case study, 20 treatment protocols (all possible orderings of Ad/4-1BBL/IL-12 and DCs given at three doses apiece, separated by 1 d) were ranked across the virtual populations in three regions of dosing space: (*i*) intermediate and experimentally used OV/DC dose, (*ii*) high OV/low DC, and (*iii*) high DC/low OV. VEPART revealed that the first two dosing regimes we considered are fragile. Specifically, the protocol of administering Ad/4-1BBL/IL-12 on the first 3 d, followed by a sequence of DCs, was optimal for both the intermediate dose of OVs and DCs and the high OV/low DC dose. However, in neither case did this protocol cause a robust eradication response: The protocol caused tumor eradication in only 30% of the virtual populations at the intermediate dose, and in 43% of the virtual populations at the high OV/low DC dose. As a consequence of VEPART’s nonrobust prediction, it is not expected that any ordering of three OVs and three DCs at these two doses will exhibit a strong antitumor response in the face of interpatient and intrapatient heterogeneity.

On the other hand, in the high DC/low OV region of dosing space, a robust optimal protocol was identified. This protocol administers DCs on the first 3 d, and follows up with a sequence of Ad/4-1BBL/IL-12 on the last 3 d. Not only did this protocol rank as optimal in all virtual populations, it also resulted in tumor eradication in 84% of those virtual populations. Therefore, VEPART predicts that, in the high DC/low OV region of dosing space, the treatment that front-loads DCs will be effective across different individuals in a population, and will be more robust to spatial and temporal variations within an individual.

In future work, the use of approximate optimal control techniques or heuristic optimization algorithms (38, 42) can extend the current implementation of VEPART. In particular, we can optimize over a wider range of protocols (for instance, considering a variable number of OV and DC doses, and/or allowing variable spacing between doses), and over a more complete range of dosing space. This extended analysis of our system could result in finding robust protocols beyond treating with DC–DC–DC–OV–OV–OV in the high DC/low OV region of dosing space.

This case study is intended to illustrate how the VEPART method can be used to assess therapeutic robustness. By combining experimental data with mathematical modeling and statistical analyses, the VEPART approach can contribute to tackling a significant problem clinicians face when treating cancer using standard of care or an individualized treatment protocol: high levels of interpatient and intrapatient variability.

## Computational Methods

In this section, we introduce our mathematical model, which includes immunostimulatory OVs and DC vaccines. Although this model has been successfully fit to the experimental data of Yun and colleagues (31, 35), we introduce an alternative approach to fitting the data that results in small but significant improvements in the model fits. To further study the relationship between the model and its parameters, in this section, we also describe two key steps of our proposed VEPART approach: (*i*) We detail the bootstrapping procedure that produces posterior distributions and credible intervals for each of the fit parameters, and (*ii*) we explain how these posterior distributions are used to generate virtual populations that provide the foundation of our robustness analysis.

### Mathematical Model.

In this study, the data-validated model from Wares et al. (35) was used. The model is represented by the initial value problem given in Eqs. **1**–**6**; full details are provided in *SI Computational Methods*.

This model was hierarchically developed to fit the increasingly complex data in Fig. 2. Although more details are provided in *SI Computational Methods*, here we briefly note the different hierarchical stages of the model:

*i*) Model 0 is a model of untreated tumor growth in the absence of any virions, T cells, or DCs.

*ii*) Model 1 is a model of tumor growth and treatment with oncolytic virotherapy that does not carry any immunostimulatory transgenes. T cells and DCs are still not considered.

*iii*) Model 2a is a model of tumor growth and treatment with the immunostimulatory oncolytic virotherapeutic Ad/4-1BBL. The immunostimulation results in the inclusion of tumor-targeted T cells, but naïve T cells and DCs are still not considered.

*iv*) Model 2b is a model of tumor growth and treatment with the immunostimulatory oncolytic virotherapeutic Ad/IL-12. The immunostimulation results in the inclusion of tumor-targeted T cells, including the naïve T cells, but DCs are still not considered.

*v*) Model 3 is a model of tumor growth and treatment with the immunostimulatory oncolytic virotherapeutic Ad/4-1BBL/IL-12. The immunostimulation results in the inclusion of tumor-targeted T cells, including the naïve T cells, but DCs are still not considered.

*vi*) Model 4 is a model of tumor growth and treatment with Ad/4-1BBL/IL-12 and DC vaccines. At this point, the model includes all equations and all terms within each equation, including the tumor-targeted T cells, the naïve T cells, and the DCs. A subcase of model 4 in which all virus-related parameters are set to zero gives us the case of treating with only the DC vaccine.

### Fitting Model to Experimental Data.

Just as the structure of the model was hierarchically developed, the parameter fitting scheme was also hierarchical in nature (34, 35). As illustrated in Fig. S4, parameters that could not be well estimated from the literature were fit at each hierarchical model stage, and some subset of the fit parameters is inherited by the subsequent models in the hierarchical scheme. Details of this hierarchical fitting scheme can be found in *SI Computational Methods*. The initial condition for the number of uninfected tumor cells was always refit at each model stage, as each model was fit to a different dataset. All other initial conditions are identically zero, as the administration of treatment is captured through time-varying source terms in Eqs. **1**–**6**.

The parameters in models 1 through 4 were previously fit using the Levenberg–Marquardt algorithm as implemented by MATLAB’s lsqnonlin command (35). Like most parameter fitting schemes, the Levenberg–Marquardt algorithm can get stuck at local minima, and the algorithm’s performance deteriorates as the dimension of parameter space grows. To have more control over parameter fitting, and to attempt to avoid getting trapped at local minima, here we instead use simulated annealing for model fitting.

Goodness of model fit to the experimental data is sought by minimizing the objective function in Eq. **7**. In Eq. **7**, the fractional term is a dimensionless measure of the error: The numerator gives the sum of the squared errors between the model predictions and the (average) experimental data, and the denominator is the variance in the experimental data. By dividing the sum of the squared errors by the variance, we impose the condition that, at time points where the variance is small, a better fit to the average volume is required than at time points where the variance is large, in accordance with the principle of maximum likelihood estimation (57).

Because volume measurements made using calipers are imprecise for smaller tumor sizes [instrumentation error in measuring length and width is independent of tumor volume (58, 59)], the dimensionless term in Eq. **7** is weighted by the average tumor volume at each measurement day. Further details on Eq. **7** can be found in *SI Computational Methods* and Table S3 therein.
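A hypothetical sketch of a variance-normalized, volume-weighted objective in the spirit of Eq. **7** is shown below (the exact form is given in the SI; the function name and toy data are illustrative): at each time point, the squared error between model and mean data is divided by the data variance, and the result is weighted by the mean volume so that imprecisely measured small tumors count for less.

```python
import numpy as np

def objective(model_volumes, data_mean, data_var):
    """Illustrative goodness-of-fit: variance-normalized squared error,
    weighted by the average tumor volume at each time point. This is a
    sketch of the idea behind Eq. 7, not the paper's exact formula."""
    sq_err = (np.asarray(model_volumes, dtype=float) - data_mean) ** 2
    return float(np.sum(data_mean * sq_err / data_var))

# Toy data over three time points: the second fit misses at a low-variance
# time point and is penalized far more than the first, which makes a
# same-sized miss where the variance is large.
mean = np.array([10.0, 50.0, 200.0])
var = np.array([100.0, 4.0, 100.0])
fit_a = objective([10.0, 50.0, 190.0], mean, var)  # error where variance is large
fit_b = objective([10.0, 40.0, 200.0], mean, var)  # same-size error, small variance
```

A perfect fit scores exactly zero, and tightly measured time points dominate the penalty, which is the maximum-likelihood intuition described in the text.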

### Analyzing Best-Fit Parameters.

For each dataset (control, Ad only, etc.), the bootstrapping procedure described above was used to approximate the posterior distribution and 95% credible interval of each fit parameter.

A local sensitivity analysis was also performed to give an alternative mechanism for analyzing the fit parameters. Details on this are found in *SI Computational Methods*.

### Ranking and Robustness of Treatment Protocols.

Previous work determined that, among the 20 possible orderings of three OV injections and three DC injections at the experimentally used doses, the optimal protocol was the one that front-loads the OV (that is, OV–OV–OV–DC–DC–DC) (35). However, further analysis revealed that the dose used in the experimental work was near a bifurcation point in the system, and small changes in the drug dose could result in a different optimal solution (35).

This finding motivated us to extend our analysis to investigate treatment robustness, as such an analysis gives more information about therapeutic efficacy than a single-population optimization study. To explore robustness, the data obtained from the bootstrap replicates were used to construct virtual populations, similar to the nonparametric approach described in ref. 29. In particular, 1,000 virtual populations were created by pseudorandomly sampling the six fit parameters from their bootstrap-derived posterior distributions.

Note that an implicit assumption in creating virtual populations in this way is that parameters fit at different stages of the hierarchical fitting process are minimally correlated. This assumption is grounded in our fitting methodology, in which parameters fit at each step of the hierarchy are treated as independent biological phenomena. For instance, the tumor growth rate is fit to the untreated (control) data before any treatment-related parameters are introduced.
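The virtual-population step can be sketched as follows; the toy bootstrap fits and all array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_virtual_populations(bootstrap_fits, n_pops=1000):
    """Sketch of the virtual-population step: `bootstrap_fits` holds one
    best-fit parameter vector per bootstrap replicate (rows), so each column
    approximates one parameter's posterior. Sampling each column
    independently (per the minimal-correlation assumption in the text)
    yields one virtual population per row of the result."""
    n_reps, n_params = bootstrap_fits.shape
    picks = rng.integers(0, n_reps, size=(n_pops, n_params))
    # Integer-array indexing: element [i, j] is bootstrap_fits[picks[i, j], j].
    return bootstrap_fits[picks, np.arange(n_params)]

# Toy stand-in for the six fit parameters across 50 bootstrap replicates.
fits = rng.lognormal(mean=0.0, sigma=0.5, size=(50, 6))
pops = make_virtual_populations(fits, n_pops=1000)
```

Each row of `pops` is one virtual population's parameter set, drawn marginally from the bootstrap-derived posteriors; any correlation structure between parameters is deliberately not imposed, matching the independence assumption stated above.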

Therapeutic robustness was assessed by ranking the 20 protocols of interest for each of the virtual populations that were generated. Within a virtual population, a treatment was ranked using a two-component criterion. First, for tumors that were eradicated within 30 d (volume drops below a fixed threshold in mm^{3}), the time until eradication is used to measure the effectiveness of the protocol, with faster eradication times considered superior to slower ones. Second, for tumors that were not eradicated within 30 d, the volume after 30 d was used to measure the effectiveness of the treatment, with smaller volumes considered superior to larger ones. By analyzing the response to the 20 protocols across the 1,000 virtual populations, the robustness of each protocol can be studied.
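The two-component criterion just described can be encoded as a lexicographic sort key; the dictionary field names below are illustrative, not from the paper.

```python
def protocol_rank_key(outcome):
    """Two-tier ranking used to compare protocols within a virtual
    population: eradicating protocols (sorted by eradication time) always
    outrank non-eradicating ones (sorted by 30-d volume). `outcome` is a
    dict with 'eradicated' (bool) and either 'erad_time' (days) or
    'final_volume' (mm^3); these names are illustrative placeholders."""
    if outcome["eradicated"]:
        return (0, outcome["erad_time"])    # tier 0: faster eradication wins
    return (1, outcome["final_volume"])     # tier 1: smaller final volume wins

# Three hypothetical protocol outcomes in one virtual population.
outcomes = [
    {"eradicated": False, "final_volume": 120.0},
    {"eradicated": True, "erad_time": 12.0},
    {"eradicated": True, "erad_time": 25.0},
]
ranked = sorted(range(3), key=lambda i: protocol_rank_key(outcomes[i]))
print(ranked)  # -> [1, 2, 0]: fast eradication first, non-eradication last
```

Tupled keys make the tiering explicit: Python compares the first component before ever consulting the second, so no non-eradicating protocol can outrank an eradicating one.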

Given that our previous work suggested that the experimentally used dose does not have a robust optimal solution, we performed this robustness study in three different regions of dosing space: (*i*) at the experimentally used OV and DC doses, (*ii*) at a 50% higher dose of OV but a 50% lower dose of DC, and (*iii*) at a 50% lower dose of OV but a 50% higher dose of DC.

## SI Computational Methods

### Hierarchical Model Development.

The hierarchical nature of the experimental data (increasing from simple to more complex) allowed the mathematical model presented in Eqs. **1**–**6** to be developed hierarchically (34, 35). Therefore, it is ideal to explain the model terms, and the parameter fitting, from the perspective of how each piece of the model builds off the simpler pieces.

#### Model 0: Untreated tumor growth.

In the absence of any treatment, the model reduces to a single population of uninfected tumor cells. In Eq. **1**, it is assumed that these cells grow exponentially at a net rate fit to the untreated tumor data (Fig. 2 *A* and *B*, black and white diamond curves). All other parameters in Eq. **1** are set to zero in the absence of treatment, making model 0 a simple exponential growth model for uninfected tumor cells.

#### Model 1: Ad only.

In the absence of immunostimulation, the oncolytic adenovirus is capable of infecting uninfected cells, eventually resulting in infected cell lysis and the release of free virions into the tissue space. To expand the untreated tumor growth model to include the action of the OVs, we include the free virion population in Eq. **3**. These virions infect uninfected cells at a constant rate. In Eq. **2**, it is assumed that the infected cells lyse at a constant rate, with each lysed cell releasing a fixed number of virions (Eq. **3**), and that free virions decay exponentially (Eq. **3**). Therefore, model 1 includes only Eqs. **1**–**3**, without the terms that consider the T-cell population; this is because we assume there are no T cells in the tumor tissue in the absence of immunostimulation, an observation supported by the experimental data in Huang et al. (31). Model 1 has been developed to describe the red square data shown in Fig. 2*A*.
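A generic virus–tumor system of this shape can be sketched as three ODEs. The symbols, parameter values, and initial conditions below are illustrative placeholders, not the fitted parameters of Eqs. **1**–**3**; the sketch only mirrors the structure described above (growth, infection, lysis with a viral burst, virion decay).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rates: net growth r, infectivity beta, lysis rate delta,
# burst size alpha (virions per lysed cell), virion decay omega.
r, beta, delta, alpha, omega = 0.3, 1e-5, 0.5, 10.0, 1.0

def model1(t, y):
    U, I, V = y                          # uninfected cells, infected cells, free virions
    dU = r * U - beta * U * V            # exponential growth minus infection
    dI = beta * U * V - delta * I        # infection minus infected-cell lysis
    dV = alpha * delta * I - omega * V   # burst of alpha virions per lysed cell, then decay
    return [dU, dI, dV]

# One OV injection at t = 0, modeled here as the initial virion load.
sol = solve_ivp(model1, (0.0, 10.0), [1e4, 0.0, 1e3], rtol=1e-8, atol=1e-8)
```

Repeated injections, as in the experimental protocols, would enter as time-varying source terms rather than initial conditions, consistent with the fitting description in the main text.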

#### Model 2a: Ad/4-1BBL.

In model 2a, we expand upon model 1 to include the impact that the 4-1BBL transgene has on tumor killing. Experiments demonstrate that 4-1BBL increases T-cell infiltration at the tumor site and increases the cytotoxicity of the T cells (31), and therefore we aim to include these phenomena in model 2a. However, instead of directly modeling the immunostimulatory effects of 4-1BBL, we use the fact that this gene is transcribed and translated inside of infected cells, and therefore the infected cell population can be used as a proxy for modeling the actions of 4-1BBL (34, 35). We thus model 4-1BBL-induced T-cell production as proportional to the infected cell population, with a production/recruitment rate that appears in Eq. **4**. T cells are assumed to kill tumor cells independent of their infected status (that is, the T cells recognize tumor antigens and not viral antigens) using a frequency-dependent kill term. Model 2a therefore consists of Eqs. **1**–**3**, along with the T-cell source term and decay term in Eq. **4**. Model 2a has been developed to describe the green triangle data shown in Fig. 2*A*.
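One common way to write a frequency-dependent kill term is sketched below; the functional form, the rate constant `k`, and the function name are illustrative assumptions, not the exact term from Eq. **4**.

```python
def frequency_dependent_kill(k, T, U, I):
    """Illustrative frequency-dependent kill term: the per-capita kill
    pressure on uninfected tumor cells scales with the T cells' share of the
    local cell population rather than their absolute number. The form and
    rate constant k are placeholders, not the paper's exact term."""
    total = U + I + T
    return k * U * T / total if total > 0 else 0.0

# Doubling every population doubles the total kill flux but leaves the
# per-capita kill rate of tumor cells unchanged -- the hallmark of a
# frequency-dependent (rather than mass-action) interaction.
base = frequency_dependent_kill(0.5, T=100.0, U=1000.0, I=10.0)
scaled = frequency_dependent_kill(0.5, T=200.0, U=2000.0, I=20.0)
```

Under mass-action kinetics (`k * U * T`), the same doubling would instead quadruple the kill flux, which is the distinction the frequency-dependent form is chosen to avoid.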

#### Model 2b: Ad/IL-12.

Model 2b also builds upon model 1 (note that it does not build upon model 2a), to include the impact that the IL-12 transgene has on tumor killing. Experiments demonstrate that, like 4-1BBL, IL-12 increases T-cell infiltration at the tumor site, while also increasing the cytotoxicity of those T cells (31). However, from what is known about the mechanism of IL-12 action, this increased infiltration of the cytotoxic T cells is indirectly driven by IL-12 through the naïve T-cell population. As with 4-1BBL, IL-12 will not be modeled directly, but will be assumed to be proportional to the infected cell population.

To model the effects of IL-12 through the naïve T-cell population, model 1 is expanded to include two new equations. First, the naïve T-cell population is introduced in Eq. **5**, where it is assumed that IL-12 (through the infected cell population) recruits naïve T cells at a constant rate. Second, these naïve T cells, rather than the infected cells directly, give rise to the tumor-targeted cytotoxic T cells of Eq. **4**.

Just as in model 2a, the cytotoxic T cells can indiscriminately kill tumor cells through a frequency-dependent kill term. Model 2b therefore consists of Eqs. **1**–**5**, with the exception that Eq. **4** does not include the first term (no direct recruitment of cytotoxic T cells by infected cells) or the last term (no DCs). Model 2b has been developed to describe the corresponding Ad/IL-12 data shown in Fig. 2*A*.

#### Model 3: Ad/4-1BBL/IL-12.

No new terms were introduced to model treatment with Ads expressing 4-1BBL and IL-12. Instead, model 3 is simply the union of models 2a and 2b, and therefore consists of all of Eqs. **1**–**5**, with the exception that Eq. **4** does not include the second-to-last term, as there are no DCs used during this treatment protocol. It is of note that, in model 3, three of the fit parameters are determined simultaneously. Model 3 has been developed to describe the corresponding Ad/4-1BBL/IL-12 data shown in Fig. 2*A*, and the red square data shown in Fig. 2*B*.

#### Model 4: Ad/4-1BBL/IL-12 and DCs.

Model 4 introduces the injected DC population. These DC vaccines are developed by harvesting DCs from the bone marrow of tumor-bearing mice and allowing them to be pulsed ex vivo with tumor-associated antigens for 8 d until maturation (31). These tumor-primed DCs are then injected directly into the tumor site. It is these DCs that we seek to model. Although we recognize that there may be some endogenous DCs at the tumor site, we work under the assumption that the effect of such DCs is wrapped up in the tumor growth rate term. In Eq. **6**, there is a source term for injected DCs, and the DCs' natural decay is modeled. The DCs stimulate/recruit tumor-targeted cytotoxic T cells at a constant rate (last term in Eq. **4**). Model 4 has been developed to describe the blue triangle data shown in Fig. 2*B*. Note that a subcase of model 4 in which all virus-related parameters are set to zero gives us the case of treating with only the DC vaccine; this describes the green circle data shown in Fig. 2*B*.

### Hierarchical Model Fitting to Experimental Data.

Fig. S4 summarizes the hierarchical nature of the parameter fitting process. Fitting begins with model 0 (no treatment), in which the intrinsic tumor growth rate is fit to the untreated tumor growth data.

Model 1 (Ad only) introduced four new parameter values, three of which (the infected cell lysis rate, number of virions released by a lysed cell, and virion decay rate) can be readily estimated from experiments (35). As the remaining parameter, the rate at which virions infect uninfected cells, could not be estimated from experiments, it was fit to the Ad-only data.

Focusing on model 2a (Ad/4-1BBL), the newly introduced T-cell recruitment and kill rates are fit to the Ad/4-1BBL data.

In model 2b (Ad/IL-12), the noninitial condition parameters that are fit to the data are those governing the naïve T-cell dynamics introduced in Eq. **5**.

In model 3 (Ad/4-1BBL/IL-12), two datasets (for two viral doses) are simultaneously fit to determine three noninitial condition parameters, including the 4-1BBL-induced T-cell recruitment rate.

### Details on Fitting Scheme: Simulated Annealing.

Simulated annealing is a stochastic optimization scheme in which parameters are changed through random perturbations until, ideally, the globally optimal parameters are found. For this application, the algorithm seeks to minimize the objective function in Eq. **7**.

The simulated annealing algorithm starts with an initial set of parameters, determines the value of the objective function, and then repeatedly perturbs the parameters at random, accepting or rejecting each perturbed parameter set according to the acceptance rule in Eq. **S1**.

Eq. **S1** says that, if the new parameter set fits better than the prior one, it is always accepted; if it fits worse, it may still be accepted, with a probability that decreases as the algorithm proceeds.

For each dataset in Fig. 2 (control, Ad only, etc.), the appropriate model was fit to the data using simulated annealing. Through experimentation with the data, we arrived at a sufficient stopping criterion for the algorithm: The best-fit parameters are taken to have been found once further iterations no longer appreciably improve the objective function.
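A minimal simulated-annealing sketch in the spirit of this fitting scheme is shown below: perturb the parameters, always accept improvements, and accept worse fits with probability exp(-ΔF/T), lowering the temperature T each iteration. The step size, cooling schedule, iteration count, and toy objective are all illustrative assumptions, not the paper's settings.

```python
import math
import random

random.seed(0)

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000):
    """Minimize f by simulated annealing. Improvements are always kept;
    uphill moves are kept with probability exp(-(f_new - f_old)/T), and T
    is lowered geometrically so uphill moves become rare over time."""
    x, fx, T = list(x0), f(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]  # random perturbation
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:                                    # track the best visit
                best_x, best_f = list(x), fx
        T *= cooling                                           # cool the temperature
    return best_x, best_f

# Toy objective with a known minimum at (1, -2), standing in for Eq. 7.
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, val = anneal(obj, [5.0, 5.0])
```

Early on, when T is large, the algorithm wanders freely (helping it escape local minima); as T shrinks, it behaves like greedy descent, which is the trade-off motivating its use over Levenberg–Marquardt here.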

### Analyzing Best-Fit Parameters: Local Sensitivity Analysis.

For each of the models, we also performed a local sensitivity analysis to complement the results of the credible interval analysis. This analysis allows us to study correlations between parameters, and how much parameter values can vary from the best-fit values while still giving a reasonable fit to the average data. In particular, we performed an exhaustive search in parameter space about the best-fit values to the aggregate data for each model; this allows us to identify all parameter sets that give a fit within 10% of optimal to the original data (as measured by the value of the objective function in Eq. **7**).
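The exhaustive search just described can be sketched as a grid scan that keeps every parameter set scoring within 10% of the optimum. The grid extent `spans`, the grid resolution, and the toy objective below are illustrative assumptions.

```python
import numpy as np

def within_10pct_region(objective, best_params, spans, n=41):
    """Scan an n-per-axis grid centered on the best-fit values and return
    every parameter set whose objective value is within 10% of the optimum.
    The shape of the returned region reveals which parameters the fit is
    sensitive to (narrow axes) and which it is not (wide axes)."""
    grids = [np.linspace(p - s, p + s, n) for p, s in zip(best_params, spans)]
    mesh = np.stack(np.meshgrid(*grids, indexing="ij"), axis=-1)
    mesh = mesh.reshape(-1, len(best_params))
    f_best = objective(np.asarray(best_params, dtype=float))
    vals = np.array([objective(p) for p in mesh])
    return mesh[vals <= 1.1 * f_best]

# Toy objective: stiff in the first parameter, flat in the second, so the
# admissible region is narrow along p0 and wide along p1 -- mirroring the
# tight vs. nontight credible intervals discussed in the main text.
obj = lambda p: 1.0 + 100.0 * (p[0] - 1.0) ** 2 + 0.01 * (p[1] - 2.0) ** 2
region = within_10pct_region(obj, [1.0, 2.0], [0.5, 0.5])
```

Correlations between parameters show up in this picture as tilted (non-axis-aligned) admissible regions, which is how the analysis can flag compensating parameter pairs.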

## Acknowledgments

J.L.G. thanks Chae-Ok Yun, Peter Kim, and Joanna Wares for their collaboration, and particularly for the exposure and access they provided to the rich dataset used in this work. M.F.O. was supported by National Institutes of Health (NIH) Grant R01LM011000. The work of E.D.S. was partially supported by NIH Grant 1R01GM100473, Air Force Office of Scientific Research Grant FA9550-14-1-0060, and Office of Naval Research Grant N00014-13-1-0074. The work of J.L.G. was partially done while visiting the Mathematics Department and Center for Quantitative Biology at Rutgers University, with partial support from NIH Grant 1R01GM100473.

## Footnotes

^{1}Present address: Graduate Program in Biological & Biomedical Sciences, Yale University, New Haven, CT 06520.

^{2}To whom correspondence should be addressed. Email: gevertz{at}tcnj.edu.

Author contributions: E.D.S. and J.L.G. designed research; S.B. and J.L.G. performed research; S.B., M.F.O., E.D.S., and J.L.G. analyzed data; S.B. and J.L.G. wrote the paper; and M.F.O. oversaw statistical analyses.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission. M.A.N. is a guest editor invited by the Editorial Board.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1703355114/-/DCSupplemental.

## References

- Jamal-Hanjani M, Quezada S, Larkin J, Swanton C
- Ling S, et al.
- Cirkel G, Gadellaa-van Hooijdonk C, Koudijs M, Willems S, Voest E
- Verma M
- Emmert-Streib F
- Lodhi H, Muggleton S
- Gunawardena J
- Walter E, Pronzato L
- Varma A, Morbidelli M, Wu H
- Allen R, Rieger T, Musante C
- Batchelor E, Goulian M
- Thom R
- Mallavarapu A, Thomson M, Ullian B, Gunawardena J
- Kansal A, Trimmer J
- Jamei M, Dickinson G, Rostami-Hodjegan A
- Schmidt B, Casey F, Paterson T, Chan J
- Brown D, et al.
- Teutonico D, et al.
- Kim P, Crivelli J, Choi IK, Yun CO, Wares J
- Wares J, et al.
- Sontag E
- Schättler H, Ledzewicz U
- Shi J, Alagoz O, Erenay F, Su Q
- De Pillis L, Radunskaya A
- Świerniak A, Ledzewicz U, Schättler H
- Hofman A, Scherrer A, Küfer KH
- Gevertz J
- Chang T, Mišić V
- Betts J, et al.
- Badri H, Watanabe Y, Leder K
- Pan J, Fang K
- Feldman J, Goldwasser R, Mark S, Schwartz J, Orion I
- Ayers G, et al.
- Elomaa T, Hollmén J, Mannila H
- Lodhi H, Gilbert D
- Gerritsen B, Pandit A
- Chang J, et al.
- Metz P, et al.
- Chen Y, et al.
- Jogler C, et al.
- De Boer RJ, et al.
- de Pillis LG, Radunskaya AE, Wiseman CL
## Article Classifications

- Physical Sciences
- Applied Mathematics

- Biological Sciences
- Systems Biology