
# Parameter-free model discrimination criterion based on steady-state coplanarity

Edited by Stephen E. Fienberg, Carnegie Mellon University, Pittsburgh, PA, and approved August 14, 2012 (received for review October 16, 2011)

## Abstract

We introduce a procedure for deciding when a mass-action model is incompatible with observed steady-state data that does not require any parameter estimation. Thus, we avoid the difficulties of nonlinear optimization typically associated with methods based on parameter fitting. Instead, we borrow ideas from algebraic geometry to construct a transformation of the model variables such that any set of steady states of the model under that transformation lies on a common plane, irrespective of the values of the model parameters. Model rejection can then be performed by assessing the degree to which the transformed data deviate from coplanarity. We demonstrate our method by applying it to models of multisite phosphorylation and cell death signaling. Our framework offers a parameter-free perspective on the statistical model selection problem, which can complement conventional statistical methods in certain classes of problems where inference has to be based on steady-state data and the model structures allow for suitable algebraic relationships among the steady-state solutions.

- chemical reaction networks
- mass-action kinetics
- ordinary differential equations
- singular values
- algebraic statistics

In many branches of science and engineering, one is often interested in the problem of model selection: Given observed data and a set of candidate models for the process generating that data, which is the most appropriate model for that process? Such a situation commonly arises when the inner workings of a process are not completely understood, so that multiple models are consistent with the current state of knowledge. For mechanistic models, e.g., ordinary differential equation (ODE) or stochastic dynamical models, most selection techniques involve parameter estimation, which typically requires some form of optimization, exploration of the parameter space, or formal inference procedure (1, 2). For sufficiently complicated models, however, this task can become infeasible, owing to the nonlinearity and multimodality of the objective function (which penalizes any differences between the data and the model predictions), as well as the high dimensionality of the parameter space (3).

Here, we present a framework for the discrimination of mass-action ODE models (and suitable generalizations thereof) that does not require or rely upon such estimated parameters. Our method (Fig. 1) operates on steady-state data and combines techniques from algebraic geometry, linear algebra, and statistics to determine when a given model is incompatible with the data under all choices of the model parameters. The core idea is to use the model equations to construct a transformation of the original variables such that any set of steady states of the model under that transformation possesses a simple geometric structure, irrespective of parameter values. In this case, we insist that the transformed steady states lie on a plane, which we detect numerically; if the observed data are not coplanar under the transformation induced by a given model, then we can confidently reject that model.

The idea of transformation to coplanarity has been employed before, but previous efforts were limited, in part, by the lack of systematic methods for detecting and quantifying it. For example, in ref. 4, it was necessary to first manually reduce the dimension of the transformed space to three so that coplanarity could be assessed visually. Other related research using similar methods includes refs. 5⇓–7. The current work extends existing methodologies by devising a numerical scheme for quantifying the deviation from coplanarity that generalizes to higher dimensions and allows for statistical interpretation. Thus, we provide a richer and more powerful framework for the application of this basic technique. Chemical reaction network theory (CRNT) (8, 9) and stoichiometric network analysis (10) likewise embrace a parameter-free philosophy and can also be exploited for model selection (11⇓–13).

It is worth noting that our method provides a necessary but (generally) not sufficient condition for model compatibility: A model that is compatible with the data must provide a transformation to coplanarity, but a model that achieves coplanarity is not necessarily compatible, due to additional degrees of freedom introduced in the transformation process. This work is in contrast to traditional approaches based on parameter fitting, which provide a sufficient but not necessary condition because local extrema in the cost function surface may prevent a suitable fit. These two approaches are therefore complementary and can be used together for improved model selection.

The remainder of this paper is organized as follows. First, we introduce the concept of steady-state invariants (4, 5), polynomials that vanish at steady state and which depend only on experimentally accessible variables. Then we illustrate how to use steady-state invariants to deduce coplanarity requirements for model compatibility and how to detect such coplanarity numerically; we also discuss invariants in the context of standard parameter fitting techniques. Next, we apply our method to models of multisite phosphorylation and cell death signaling. Finally, we end with some generalizations and concluding remarks.

## Steady-State Invariants

Consider a chemical reaction network model

[**1**] $\sum_{j=1}^{N} s_{ij} X_j \;\xrightarrow{k_i}\; \sum_{j=1}^{N} s'_{ij} X_j, \quad i = 1,\ldots,R,$

in the species *X*_{1},…,*X*_{N}, where *s*_{ij} and *s*′_{ij} are the stoichiometric coefficients of *X*_{j} in the reactant and product sets, respectively, of reaction *i*, with rate constant *k*_{i}. Under mass-action kinetics, the model has dynamics

[**2**] $\frac{dx_j}{dt} = \sum_{i=1}^{R} k_i \left(s'_{ij} - s_{ij}\right) \prod_{l=1}^{N} x_l^{s_{il}}, \quad j = 1,\ldots,N,$

where *x*_{i} is the concentration of species *X*_{i} (throughout, we follow the convention that lowercase letters denote the concentrations of the corresponding species indicated in uppercase). These equations provide a quantitative description of the model and can, in principle, be used to test its validity by assessing the degree to which they are satisfied by observed data. Unfortunately, in practice, the required variables are rarely all available. In particular, the velocities *dx*_{i}/*dt* can be difficult to measure, so we can often consider only the steady state *dx*_{i}/*dt* = 0, as we will do here. Furthermore, certain species may be experimentally inaccessible due to technological limitations; we eliminate these variables from the equations if possible.
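As a concrete illustration of Eq. **2**, the mass-action right-hand side can be assembled directly from the stoichiometric coefficients. The sketch below is our own illustration (function and variable names are not from the paper's code):

```python
import numpy as np

def mass_action_rhs(k, S_react, S_prod):
    """Return a function computing dx/dt for a mass-action network.

    k       : rate constants k_i, shape (R,)
    S_react : reactant stoichiometric coefficients s_ij, shape (R, N)
    S_prod  : product stoichiometric coefficients s'_ij, shape (R, N)
    """
    def rhs(x):
        # reaction velocities v_i = k_i * prod_j x_j^{s_ij}
        v = k * np.prod(x ** S_react, axis=1)
        # net stoichiometry (s'_ij - s_ij) maps velocities to species rates
        return (S_prod - S_react).T @ v
    return rhs

# Example: the single reaction A + B -> C with k = 2, at x = (1, 3, 0)
rhs = mass_action_rhs(np.array([2.0]),
                      np.array([[1, 1, 0]]),
                      np.array([[0, 0, 1]]))
print(rhs(np.array([1.0, 3.0, 0.0])))  # -> [-6. -6.  6.]
```

At this state the velocity is 2·1·3 = 6, so A and B are each consumed at rate 6 while C is produced at rate 6.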

For simple models, this elimination can be done by hand, but a more systematic approach is required in general. One such approach is to use Gröbner bases (14), a central tool in computational algebraic geometry that provides a generalization of Gaussian elimination for multivariate polynomial systems. Here, we follow the general procedure of Manrai and Gunawardena (4). Let $\mathbb{Q}[\mathbf{a}]$ be the polynomial ring consisting of all polynomials in the parameters **a** = (*k*_{1},…,*k*_{R}) with coefficients from the rational numbers $\mathbb{Q}$, and let $\mathbb{Q}(\mathbf{a})$ be its fraction field, comprising all elements of the form *f*/*g*, where $f, g \in \mathbb{Q}[\mathbf{a}]$. Clearly, each $dx_i/dt \in \mathbb{Q}(\mathbf{a})[\mathbf{x}]$, the ring of all polynomials in **x** = (*x*_{1},…,*x*_{N}) with coefficients in $\mathbb{Q}(\mathbf{a})$. Note that the parameters **a** have been absorbed into the coefficient field $\mathbb{Q}(\mathbf{a})$; thus, by performing all operations over $\mathbb{Q}(\mathbf{a})$, we can treat **a** symbolically, i.e., without specifying any particular parameter values.

To characterize the steady state *dx*/*dt* = **0**, we construct the ideal $J \subseteq \mathbb{Q}(\mathbf{a})[\mathbf{x}]$ generated by $dx_1/dt, \ldots, dx_N/dt$, consisting of all polynomials $\sum_i h_i \, (dx_i/dt)$, where each $h_i \in \mathbb{Q}(\mathbf{a})[\mathbf{x}]$. Clearly, *J* contains all elements of $\mathbb{Q}(\mathbf{a})[\mathbf{x}]$ that vanish at steady state. To obtain only those elements of *J* that do not depend on the variables *x*_{1},…,*x*_{i}, we consider the *i*th elimination ideal $J_i = J \cap \mathbb{Q}(\mathbf{a})[\mathbf{x}_{\mathrm{obs}}]$, where *x*_{obs} = (*x*_{i+1},…,*x*_{N}) denotes the "observable" variables. Here, it is useful to introduce Gröbner bases, which are special sets of generators with the so-called elimination property: if **g** = (*g*_{1},…,*g*_{M}) is a Gröbner basis for *J* under the lexicographic ordering *x*_{1} > ⋯ > *x*_{N}, then $J_i = \langle \mathbf{g}_{\mathrm{obs}} \rangle$, where **g**_{obs} are precisely those elements of **g** containing only the variables *x*_{obs}. The polynomials **g**_{obs} generate all elements of $\mathbb{Q}(\mathbf{a})[\mathbf{x}_{\mathrm{obs}}]$ that vanish at steady state and so characterize the projection of the steady state onto the variables *x*_{obs}.

Procedurally, we compute a reduced Gröbner basis **g** of *J* with respect to a suitable lexicographic ordering using standard algorithms, then obtain **g**_{obs} by subselection. For numerical convenience, we further rescale each polynomial in **g**_{obs} so that all coefficients belong to $\mathbb{Q}[\mathbf{a}]$ (i.e., we multiply through by their common denominator). Then the elements of **g**_{obs} = (*I*_{1},…,*I*_{*N*_{inv}}) have the form

[**3**] $I_i = \sum_j f_{ij}(\mathbf{a}) \, m_j(\mathbf{x}_{\mathrm{obs}}), \quad f_{ij} \in \mathbb{Q}[\mathbf{a}],$

with monomials *m*_{j} in *x*_{obs}, where we have applied the relabeling *x*_{obs} = (*x*_{1},…,*x*_{*N*_{obs}}). Clearly, each *I*_{i} is a polynomial in *x*_{obs} that vanishes at steady state; we call such polynomials steady-state invariants (or sometimes just invariants for short).

Note in general that steady-state invariants may fail to exist, because the elimination ideal *J*_{i} may be trivial. Moreover, invariants and their properties (e.g., degrees) can depend delicately on the choice of monomial ordering. Some manual intervention is therefore often required to obtain useful invariants. We will not treat this important (but subtle) issue here, instead focusing on the analysis of given invariants, however they are obtained. This approach also has the advantage of separating the computation of invariants from their interpretation, in principle allowing the use of invariants from various theories. Steady-state invariants, if they exist, describe relationships between observable variables that hold at steady state for any given realization of parameter values, regardless of other factors such as initial conditions.

For full details on the computational procedure employed, see the accompanying Sage worksheet, which contains code for all computations performed (*Materials and Methods*). For further background on algebraic geometry and Gröbner bases (including the potential problems of obtaining them), see ref. 14; for other methods of variable elimination, see, e.g., ref. 15. Similar algebraic ideas have also appeared in the context of phylogenetics (16, 17).

## Model Discrimination

We start with a set of steady-state measurements $\bar{\mathbf{x}}^{(i)}_{\mathrm{obs}}$, *i* = 1,…,*m*, and a given model with steady-state invariants *I*_{1},…,*I*_{*N*_{inv}}.

### Data Coplanarity.

An invariant, *I*, can be written somewhat simplified as

[**4**] $I = \sum_{j=1}^{n} f_j(\mathbf{a}) \, y_j(\mathbf{x}_{\mathrm{obs}}).$

We first describe a procedure for deciding whether it is possible that the invariant is compatible with the data, i.e.,

[**5**] $I\big(\bar{\mathbf{x}}^{(i)}_{\mathrm{obs}}\big) = 0, \quad i = 1,\ldots,m,$

for some choice of **a**. We therefore rewrite Eq. **4** as

[**6**] $I = \mathbf{y} \cdot \mathbf{b},$

where $y_j = y_j(\mathbf{x}_{\mathrm{obs}})$ and *b*_{j} = *f*_{j}(**a**), with **y** = (*y*_{1},…,*y*_{n}) and **b** = (*b*_{1},…,*b*_{n}). Let *φ* be the map taking *x*_{obs} to **y**. Then compatibility implies that the transformed variable $\bar{\mathbf{y}}^{(i)} = \varphi\big(\bar{\mathbf{x}}^{(i)}_{\mathrm{obs}}\big)$ corresponding to any observation, considered as a point in $\mathbb{R}^n$, lies on the hyperplane defined by the coefficients **b**. In other words, compatibility with the data implies that the corresponding transformed data are coplanar.

In general, it is possible that the invariant vanishes trivially (**b** = **0**) under some choice of parameters, for which coplanarity need no longer hold. To discount this case, we can check, for instance, that the denominator of the corresponding element of **g**_{obs} is never zero. Then *I* always has at least one nonzero coefficient; hereafter in this section, we assume that the invariant is non-vanishing in this sense.

Let $Y \in \mathbb{R}^{m \times n}$ be the matrix whose rows consist of the $\bar{\mathbf{y}}^{(i)}$. Then the data are coplanar if and only if **Yb** = **0** for some nontrivial column vector **b** ≠ **0**. Such a vector, by definition, resides in the null space of **Y**, which can be found using the singular value decomposition **Y** = **U**Σ**V**^{T}, where the diagonal elements of **Σ** give the singular values σ_{i} ≥ 0 encoding the "stretch" of each basis vector in **V**. In particular, the smallest singular value σ_{min} bounds the norm ‖**Yb**‖ for any **b** ≠ **0** via

[**7**] $\lVert Y\mathbf{b} \rVert \geq \sigma_{\min} \lVert \mathbf{b} \rVert,$

so if σ_{min} > 0, then the data cannot be coplanar (18). More generally, σ_{min} gives the least-squares deviation of the data from coplanarity under the scaling constraint ‖**b**‖ = 1. This quantity depends only on the data and is therefore parameter-free.

Note that this requirement holds for any choice of **b**, regardless of whether it can be realized by the original parameters **a**. In this sense, the condition of small σ_{min} provides a necessary but not sufficient criterion for model compatibility. The additional degrees of freedom introduced by neglecting the functional forms *f*_{j} effectively linearize the compatibility condition expressed by Eq. **5**, allowing for a simple, direct solution.
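Numerically, the test based on Eq. **7** is a one-liner. The following sketch (our own illustration with synthetic data, not the paper's code) contrasts σ_{min} for transformed data lying on a hyperplane with generic data:

```python
import numpy as np

def coplanarity_sigma_min(Y):
    """Smallest singular value of the transformed-data matrix Y:
    the least-squares deviation from coplanarity under ||b|| = 1."""
    return np.linalg.svd(Y, compute_uv=False).min()

rng = np.random.default_rng(0)

# Points on the hyperplane y3 = y1 + 2*y2, i.e. Yb = 0 for b ∝ (1, 2, -1)
y12 = rng.normal(size=(50, 2))
Y_plane = np.column_stack([y12, y12[:, 0] + 2 * y12[:, 1]])

# Generic points: no nontrivial b with Yb = 0 exists
Y_generic = rng.normal(size=(50, 3))

print(coplanarity_sigma_min(Y_plane))    # ~ 0 (coplanar)
print(coplanarity_sigma_min(Y_generic))  # clearly positive
```

For the coplanar matrix the smallest singular value is zero up to floating-point rounding, while for generic data it is bounded well away from zero, exactly the dichotomy the criterion exploits.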

To account for the presence of noise, suppose that we know each component of a measurement $\bar{x}^{(i)}_j$ only up to an error $\delta^{(i)}_j$, with

[**8**] $\delta^{(i)}_j = \epsilon \, \bar{x}^{(i)}_j Z,$

where *Z* is a standard normal random variable. We imagine that the noise parameter *ϵ* is given, for example, by instrument error. Then from the perturbation equation

[**9**] $\varphi\big(\bar{\mathbf{x}}^{(i)}_{\mathrm{obs}} + \boldsymbol{\delta}^{(i)}\big) \approx \varphi\big(\bar{\mathbf{x}}^{(i)}_{\mathrm{obs}}\big) + \nabla\varphi \, \boldsymbol{\delta}^{(i)},$

we find, expanding to first order, that the error is propagated to the transformed variables as $\boldsymbol{\delta}\mathbf{y}^{(i)} = \nabla\varphi \, \boldsymbol{\delta}^{(i)}$, where ∇*φ* is the Jacobian of *φ*, with elements (∇*φ*)_{ij} = ∂*y*_{i}/∂*x*_{j}. Therefore,

[**10**] $\delta y^{(i)}_j \sim \epsilon \sum_l (\nabla\varphi)_{jl} \, \bar{x}^{(i)}_l \, Z_l.$

We now consider the effect of the errors $\boldsymbol{\delta}\mathbf{y}^{(i)}$ on σ_{min} under the null hypothesis that the underlying $\bar{\mathbf{y}}^{(i)}$ are coplanar with coefficients **b** (of unit norm). Thus, we study the vector **Yb**, whose entries are perturbed from zero to

[**11**] $(Y\mathbf{b})_i = \boldsymbol{\delta}\mathbf{y}^{(i)} \cdot \mathbf{b}$

for each transformed datum $\bar{\mathbf{y}}^{(i)}$. Since ‖**b**‖ = 1 by assumption, if we rescale each row of **Y** by its corresponding effective error

[**12**] $\epsilon_i = \big\lVert \boldsymbol{\delta}\mathbf{y}^{(i)} \big\rVert,$

thus obtaining **Y**′, then each entry of **Y**′**b** has the form μ_{i}*Z* with |μ_{i}| ≤ 1, for *i* = 1,…,*m*. We hence define the coplanarity error

[**13**] $\Delta = \sigma_{\min}(Y'),$

which, from the discussion above, is bounded by the length of a normal random vector with variances $\mu_i^2$, whose distribution function clearly dominates that of the length of a normal random vector with unit variances. But this latter quantity simply follows the χ distribution with *m* degrees of freedom. In other words,

[**14**] $\Pr[\Delta \leq t] \geq \Pr[\chi_m \leq t];$

if *p*_{α} is the upper α-percentile for χ_{m} (e.g., α = 0.05), then

[**15**] $\Pr[\Delta > p_\alpha] \leq \alpha,$

which gives an approximate criterion for rejecting coplanarity. As the amount of data increases, the approximation improves, since σ_{min}(**Y**′) → ‖**Y**′**b**‖ as *m* → ∞ by the symmetry of Eq. **10**.
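The rejection rule then reduces to comparing Δ with a χ_{m} percentile. A minimal sketch using SciPy (the helper name is ours):

```python
from scipy.stats import chi

def reject_coplanarity(delta, m, alpha=0.05):
    """Reject the coplanarity null hypothesis at level alpha when the
    coplanarity error Delta exceeds the upper alpha-percentile of the
    chi distribution with m degrees of freedom."""
    p_alpha = chi.ppf(1.0 - alpha, df=m)
    return delta > p_alpha, p_alpha

# With m = 100 data, p_alpha is about 11.2, matching the threshold
# quoted in Results; Delta ~ 1 is consistent with coplanarity,
# while Delta ~ 10^5 is decisively rejected.
print(reject_coplanarity(1.0, 100))
print(reject_coplanarity(1e5, 100))
```

Note that the threshold grows only like $\sqrt{m}$, so the test retains power as more steady-state measurements are added.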

Depending on the exact situation at hand, it may be appropriate to choose a more conservative significance level α or to invoke additional criteria to decide whether a model is acceptable. In the examples below, however, we will see that whether a model can be rejected is often fairly obvious, and in such cases we will simply use the asymptotic arguments based on the χ_{m} distribution.

### Invariant Minimization.

Steady-state invariants can also be used in conjunction with standard parameter fitting techniques. The basic approach is to minimize the Frobenius norm of the matrix **θ**(**a**), with entries $\theta_{ij} = I_i\big(\mathbf{a}, \bar{\mathbf{x}}^{(j)}_{\mathrm{obs}}\big)$, over the parameters, which readily provides a sufficient condition for model compatibility, since any **a** producing a small norm provides parameters that fit the data by construction. However, the condition is not necessary, because suitable parameters may fail to be found even for compatible models due to the intricacies of the objective function. Clearly, prior knowledge of **a** can be used to guide the optimization away from such difficulties.

Assuming that the model and its parameters are correct, each invariant *I* = 0 in principle. However, due to noise, *I* = δ*I*, where

[**16**] $\delta I \sim \boldsymbol{\delta}\mathbf{y} \cdot \mathbf{b}$

by Eq. **11**. Therefore, if we use the correspondingly rescaled value as the entry of **θ** for invariant *I*_{i} and datum $\bar{\mathbf{x}}^{(j)}_{\mathrm{obs}}$, then the invariant error

[**17**] $\lVert \boldsymbol{\theta} \rVert_F \sim \chi_{m N_{\mathrm{inv}}}.$

This quantity can be used to compute the likelihood *L*(**a**) = Pr[**θ**(**a**)] and allows, e.g., various likelihood-based selection schemes (19, 20), assuming that the optimization can be performed. Here, we use the Akaike information criterion (AIC),

[**18**] $\mathrm{AIC} = 2k - 2 \ln L_{\max},$

where $L_{\max} = \max_{\mathbf{a}} L(\mathbf{a})$ and *k* is the number of model parameters; the AIC penalizes model complexity, and the preferred model is the one with the minimum AIC (21).
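For completeness, the AIC comparison of Eq. **18** is simple to implement. The numbers below are hypothetical, chosen only to show how a small likelihood gain can fail to justify extra parameters:

```python
def aic(log_likelihood_max, n_params):
    """Akaike information criterion: 2k - 2 ln L_max."""
    return 2 * n_params - 2 * log_likelihood_max

# Hypothetical fits: model B gains little log-likelihood for two
# extra parameters, so the complexity penalty dominates
models = {"A": aic(log_likelihood_max=-120.0, n_params=4),
          "B": aic(log_likelihood_max=-119.5, n_params=6)}
best = min(models, key=models.get)
print(models, best)  # -> {'A': 248.0, 'B': 251.0} A
```

Here the 0.5-unit improvement in ln *L*_{max} costs 4 AIC units in complexity, so the simpler model A is preferred.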

## Results

We apply our methods to two illustrative biological processes for which competing models exist: multisite phosphorylation and cell death signaling.

### Multisite Phosphorylation.

We focus first on phosphorylation, a key cellular regulatory mechanism that has been the subject of extensive study, both experimentally (22⇓–24) and theoretically (4, 5, 25⇓–27). Following ref. 4, we consider a two-site system with reactions [**19**] and [**20**], where *u*,*v* ∈ {0,1}^{2} are bit strings of length two, encoding the occupancies of each site (0 or 1 for the absence or presence, respectively, of a phosphate), with *u* having fewer bits set than *v*; *S*_{u} is the phosphoform with phosphorylation state *u*; *K* is a kinase, an enzyme that adds phosphates; and *F* is a phosphatase, an enzyme that removes phosphates. Each enzyme can be either processive (P), where more than one phosphate modification may be achieved in a single step, or distributive (D), where only one modification is allowed before the enzyme dissociates from the substrate (*c*_{0011} = 0 for *K*, γ_{1100} = 0 for *F*). This mechanistic diversity generates four competing models: PP, PD, DP, and DD, where the first letter designates the mechanism of the kinase, and the second, that of the phosphatase.

As in ref. 4, we consider only the concentrations *x*_{obs} = (*s*_{00},*s*_{01},*s*_{10},*s*_{11}) as observable and use the ordering [**21**], with which we are able to eliminate all other variables except *f* from the dynamics of each model. The remaining Gröbner basis polynomials are of the form *p*(*f*,*x*_{obs}) = *f*·*q*(*x*_{obs}), where *f* ≠ 0 unless there is no phosphatase in the system, which we assume not to be the case, so we take only the observable part *q*(*x*_{obs}). It is easy to check that the resulting denominators are always of one sign.

Each model has three steady-state invariants. Matched appropriately, the invariants for model PP share the same transformed variables **y** = *φ*(*x*_{obs}) as those for PD; the same is true for DP and DD. Thus, in terms of the transformed data, only the kinase mechanism is discriminative. Between PP/PD and DP/DD, two invariants (*I*_{1} and *I*_{2}) are discriminative in principle, though only one (*I*_{2}) succeeds numerically: For simulated data from the PP/PD models, provided that the noise level is sufficiently low, coplanarity on *I*_{2} is able to correctly reject the DP/DD models at significance level α = 0.05 (Δ ∼ 10^{5} versus Δ ∼ 1 for PP/PD at *ϵ* = 10^{-9}, against a threshold of *p*_{α} = 11.2). The corresponding test using DP/DD data is not successful due to the form of *I*_{2}, whose transformed variables [**22**] and [**23**] for PP/PD and DP/DD, respectively, differ only in that **y**^{PP/PD} has the additional variable *s*_{00}*s*_{10} over **y**^{DP/DD}. Therefore, PP/PD models can be made to fit DP/DD data simply by setting the coefficient corresponding to *s*_{00}*s*_{10} to zero, which is in fact what we observe. No model is rejected on the basis of data generated from it.

We emphasize that these results are specific to the particular ordering chosen. Indeed, one can make the phosphatase mechanism discriminative instead by reversing the order of the variables *x*_{obs} in Eq. **21**. An exhaustive analysis of such orderings is beyond the scope of this work; rather, we aim to illustrate the potential uses (and usefulness) of this type of approach using concrete examples.

Although the condition of coplanarity is technically valid only at steady state, there should nevertheless be some convergence over time to coplanarity for any compatible model. We hence compute Δ for the PP/PD and DP/DD models along time course trajectories simulated from model PP at various levels of *ϵ* (Fig. 2*A*). For low noise, the results confirm convergence for invariants previously identified as compatible (all *I*_{i} for PP/PD; *I*_{1} and *I*_{3} for DP/DD), with stagnation for incompatible invariants (*I*_{2} for DP/DD); these results suggest wider applicability of this method, provided that the data approach steady state reasonably quickly. As the noise increases, however, Δ decreases in inverse proportion, until the stagnation point hits the basal error level of Δ ∼ 1 and we lose all power to reject. Additional simulations estimate the critical noise level at *ϵ* ∼ 10^{-4} (Fig. 2*B*).

To further discriminate between all four models, we next turn to invariant minimization. The required optimization involves highly nonlinear functions, so success should be expected only if we have good initial estimates of the model parameters, which can be a rather strong demand. In such a case, however, minimization is indeed capable of identifying the correct model from the data so long as *ϵ* ≲ 10^{-5} (Fig. 2*C*). These results reinforce our belief that the algebraic approach proposed here naturally complements conventional (i.e., parametric) reverse engineering schemes such as optimization or inference procedures.

### Cell Death Signaling.

We next apply our methods to receptor-mediated cell death signaling, the so-called extrinsic apoptosis pathway, which plays a prominent role in cancers and other diseases (28⇓⇓–31). Specifically, we consider the assembly of the death-inducing signaling complex (DISC), a multiprotein oligomer formed by the association of FasL, a death ligand, with its cognate receptor Fas (32, 33).

We investigate two models of DISC formation. The first (34), which we call the cross-linking model, is based on the successive binding of Fas (*R*) to FasL (*L*) in reactions [**24**]–[**26**], where *C*_{i} is the complex FasL∶Fas_{i}. The second (6), which we call the cluster model, posits three forms of Fas (inactive, *X*; active and unstable, *Y*; active and stable, *Z*) and specifies receptor cluster-stabilization events driven by FasL in reactions [**27**]–[**30**], where the last two reactions represent entire families generated by taking *i* = 2 or 3, with *j* = 1,…,*i* and *k* = 1,…,*j*. The cluster model is capable of bistability, whereas the cross-linking model exhibits only monostable behavior (6).

The two models are structurally very different, and discriminating between them requires some care. Hence, following ref. 6, we establish a correspondence between the models by considering the apoptotic signal ζ transduced by the DISC, defined as ζ = *c*_{1} + 2*c*_{2} + 3*c*_{3} for the cross-linking model and ζ = *z* for the cluster model. We assume that ζ is experimentally accessible; other variables assumed accessible include λ, the total concentration of FasL (λ = *l* + *c*_{1} + *c*_{2} + *c*_{3} and λ = *l* for the cross-linking and cluster models, respectively), and ρ, the total concentration of Fas (ρ = *r* + *c*_{1} + 2*c*_{2} + 3*c*_{3} and ρ = *x* + *y* + *z*, respectively). Eliminating all other variables via the orderings (*c*_{2},*c*_{3},λ,ρ,ζ) and (*y*,λ,ρ,ζ) for the cross-linking and cluster models, respectively (after appropriate variable substitutions), we obtain one non-vanishing steady-state invariant for each model. The dimensions of the transformed spaces are 5 and 15 for the cross-linking and cluster models, respectively.

As for phosphorylation, we compute the coplanarity error for each invariant on time course data simulated from each model at various noise levels. Although results are inconclusive for data from the cross-linking model, the coplanarity criterion can reject the cross-linking model on the basis of cluster model data at α = 0.05, provided that *ϵ* ≲ 10^{-2} (Fig. 3 *A* and *B*). The minimization protocol also correctly identifies the model from the data over the same range of noise levels (Fig. 3*C*).

## Discussion

In this paper, we have presented a model discrimination scheme based on steady-state coplanarity that does not require known or estimated parameter values. Thus we are able to sidestep the parameter inference problem common to many fields, including systems biology (3, 35). Such algebraic methods are not always effective, however; steady-state invariants may not exist, and even when they do, the additional degrees of freedom introduced by effective linearization can cause the method to fail. A promising solution when invariants cannot be calculated using Gröbner bases may be to employ invariants from CRNT (36). Our results also suggest a somewhat low tolerance for noise, which can restrict the method's applicability. Significantly, our method can be applied in complete ignorance of parameter values, and it is therefore a useful additional tool in the analysis of inverse problems involving dynamical systems.

Rather than competing directly with current model discrimination techniques, we expect that coplanarity will form one end of an entire spectrum of methods, to be used when no parameter information is available. At the other end lie methods based on parameter estimation (including invariant minimization), which, for dynamical systems, can depend delicately on qualitative and quantitative aspects of the systems under consideration (37, 38). The intermediate regime comprises techniques that can leverage partial knowledge, for instance, constraints on certain parameter values or qualitative features of the dynamics (39). Along this spectrum, naturally, the discriminative power increases with the amount of prior information available. In this broader context, coplanarity can be used to efficiently reject candidate models before employing more demanding parameter estimation tools; thus, it can serve as a preprocessor to thin out the model space. The real advantages and limitations of any inferential procedure become apparent once its performance can be evaluated in real-world applications, which is perhaps particularly true for the current approach. Certainly, a range of theoretical and computational issues surround algebraic methods, and these will likely affect their applicability. Here we have found that a pragmatic approach yields useful insights for small and intermediate-sized problems.

Finally, we remark that the presented scheme is perhaps the simplest of a potential class of parameter-free selection methods based on the detection of geometric structure. In this view, transformation to coplanarity is just one of many low-dimensional descriptions of such structure. The existence of low-dimensional representations has recently been predicted in neuronal signaling (40), and can ultimately be attributed to the inherent robustness of biological systems (41, 42).

## Materials and Methods

### Gröbner Basis Calculation.

All reduced Gröbner bases are computed over the field of rational functions in the parameters **a** with rational coefficients, under a suitable lexicographic ordering with the observables *x*_{obs} located at the end of the variable list, using the computer algebra system Singular (http://www.singular.uni-kl.de/) as interfaced through Sage (http://www.sagemath.org/).

### Data Generation.

For each model, parameters are drawn independently from a log-normal distribution with median μ^{∗} = *e*^{μ} = 1 and multiplicative standard deviation σ^{∗} = *e*^{σ} = 2, where μ and σ are the mean and standard deviation, respectively, of the underlying normal distribution. Using these parameters, *m* = 100 time course trajectories are computed for each model via integration of the model ODEs over the time interval 0 ≤ *t* ≤ 100; each trajectory is seeded by random initial conditions sampled from a log-normal distribution, also with μ^{∗} = 1 and σ^{∗} = 2. Integration is performed using the solver LSODA as wrapped in SciPy (http://www.scipy.org/). The data are then corrupted by noise at varying levels from *ϵ* = 10^{-9} to 10^{-1} by, for each *ϵ*, multiplying the nominal data by random log-normal samples with μ^{∗} = 1 and σ^{∗} = 1 + *ϵ*.
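The log-normal sampling described above can be sketched as follows (a minimal illustration with an arbitrary seed, not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_samples(median, mult_sd, size):
    """Draw log-normal samples specified by the median mu* = e^mu and
    the multiplicative standard deviation sigma* = e^sigma."""
    return rng.lognormal(mean=np.log(median), sigma=np.log(mult_sd),
                         size=size)

# Rate constants and initial conditions: mu* = 1, sigma* = 2
params = lognormal_samples(1.0, 2.0, size=10000)

# Multiplicative corruption of nominal data at noise level eps,
# i.e. sigma* = 1 + eps
eps = 1e-2
nominal = np.ones(10000)
noisy = nominal * lognormal_samples(1.0, 1.0 + eps, size=10000)

print(np.median(params), noisy.std())
```

Because the noise is multiplicative with σ^{∗} = 1 + *ϵ*, the relative scatter of the corrupted data is approximately ln(1 + *ϵ*) ≈ *ϵ*, consistent with the first-order error model of Eq. **8**.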

### Invariant Minimization.

Invariant error likelihood maximization is performed in two phases. First, an approximate optimal parameter set is obtained by minimizing the Frobenius norm of the matrix **η**, where each entry corresponds to an invariant-datum pair as in **θ**, but with values given by Eq. [**31**]. The result is then taken as an initial parameter estimate to compute *L*_{max}. All optimizations are performed using L-BFGS-B (43) through SciPy, with lower and upper bounds of 0.01 and 100, respectively, for each variable. The minimization of ‖**η**‖_{F} is seeded with initial value 1 for all variables.

### Computational Platform.

All computations are performed centrally in Sage, making use of its interfaces to various programs. Plots were produced using matplotlib (http://matplotlib.sourceforge.net/). The Sage worksheet for this paper, which contains code for all computations performed, is available at http://www.sagenb.org/home/pub/3462/.

## Acknowledgments

We thank Carsten Wiuf, Elisenda Feliu, and Sarah Filippi for their comments on the manuscript. H.A.H. and M.P.H.S. gratefully acknowledge the Leverhulme Trust. K.L.H. was funded in part by National Science Foundation Grant DGE-0333389. T.T. and M.P.H.S. also acknowledge funding from the Biotechnology and Biological Sciences Research Council. M.P.H.S. is a Royal Society Wolfson Research Merit award holder.

## Footnotes

^{1} H.A.H. and K.L.H. contributed equally to this work.

^{2} To whom correspondence should be addressed. E-mail: m.stumpf{at}imperial.ac.uk.

Author contributions: H.A.H., K.L.H., and M.P.H.S. designed research; H.A.H., K.L.H., and T.T. performed research; T.T. contributed new reagents/analytic tools; H.A.H. and K.L.H. analyzed data; and H.A.H., K.L.H., and M.P.H.S. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

Freely available online through the PNAS open access option.

## References

- Toni T, Welch D, Strelkowa N, Ipsen A, Stumpf MPH
- Vyshemirsky V, Girolami MA
- Conradi C, Saez-Rodriguez J, Gilles ED, Raisch J
- Cox D, Little J, O’Shea D
- Belsley DA, Kuh E, Welsch R
- Casella G, Berger RL
- Gunawardena J
- Huang CYF, Ferrell JE Jr
- Thompson CB
- Ashkenazi A, Dixit VM
- Lodhi HM, Muggleton SH
- Gunawardena J
- Barbano PE, et al.
- Csete ME, Doyle JC