# Network synchronization landscape reveals compensatory structures, quantization, and the positive effect of negative interactions

Edited by Giorgio Parisi, University of Rome, Rome, Italy, and approved April 7, 2010 (received for review November 6, 2009)

## Abstract

Synchronization, in which individual dynamical units keep in pace with each other in a decentralized fashion, depends both on the dynamical units and on the properties of the interaction network. Yet, the role played by the network has resisted comprehensive characterization within the prevailing paradigm that interactions facilitating pairwise synchronization also facilitate collective synchronization. Here we challenge this paradigm and show that networks with the best complete synchronization, least coupling cost, and maximum dynamical robustness have arbitrary complexity but *quantized* total interaction strength, which constrains the allowed number of connections. It stems from this characterization that *negative* interactions as well as link *removals* can be used to systematically improve and optimize synchronization properties in both directed and undirected networks. These results extend the recently discovered compensatory perturbations in metabolic networks to the realm of oscillator networks and demonstrate why “less can be more” in network synchronization.

Flocking animals (1, 2), self-coordinated moving sensors (3), bursting neurons (4), pace-matching chemical oscillators (5), and frequency-locked power generators (6) are some of many physical manifestations of spontaneous synchronization. Like other forms of collective phenomena (7), synchronization depends critically on the properties of the interaction network (8). Common wisdom suggests that synchronization is generally easier to achieve with more interactions, that synchronization properties change monotonically as the number of available interactions is varied, and that certain network structures facilitate while others inhibit synchronization. These three expectations, however, are all false because they ignore the possibility of compensatory structural effects. For example, removing a link from a globally connected network makes it less likely to synchronize, but targeted removal of additional links can enhance synchronization (9–15). Heterogeneous distribution of coupling strengths or connectivity generally inhibits synchronization (16, 17), but when combined they can compensate for each other (18, 19). Bringing this argument one step further, while previous studies have focused mainly on positive interactions (but see refs. 20–22)—presumably because negative interactions alone generally do not support synchrony—it is actually easy to provide examples in which negative interactions help stabilize synchronous states. This is illustrated in Fig. 1, where the network composed of black and blue links is not optimal for synchronization but the removal of the blue interactions or, alternatively, the addition of interactions with negative strengths (red links) makes it optimal; the same is achieved by weighting the strengths of all three input interactions of each purple node by a factor of 2/3. 
However, the counterintuitive properties that start to emerge from such case studies currently lack a common in-depth explanation that is both comprehensive and predictive in nature.

Here we show that these and other apparently disparate properties follow from the discovery we present below that networks optimal for synchronization have a quantized number of links, in multiples of a constant that depends only on the number of nodes and on the connection strengths. We derive our results for the local stability of synchronous states in networks of identical units and provide evidence that our main results remain valid for networks of nonidentical units. We choose to focus on optimal networks because, as we show, this class is very rich, can be dealt with analytically, and forms a multicusped synchronization landscape, which underlies all synchronizable networks and from which suboptimal networks can be studied using perturbation and numerical methods. An immediate consequence of our quantization result is that the addition of links to an optimal network generally results in a suboptimal network, providing systematic examples of link removals that enhance synchronizability. Similar quantization results hold true for optimal networks with negative interactions, which we derive using a generalized complement transformation that maps them into networks with only positive interactions. We show that negative interactions can reveal antagonistic structural relations and counterbalance structural heterogeneities, with potential implications for inhibitory interactions in neural (23–25), power-grid (6, 26), and cell-regulatory networks (27). The interactions between power-grid generators, for example, can be positive or negative depending on the inductive vs. capacitive nature of the corresponding network elements [e.g., for the Western U.S. power grid they are split 97% vs. 3% (26)].

## Results

### Optimal Networks for Synchronization.

We represent the structure of a network with *n* nodes using its adjacency matrix *A* = (*A*_{ij})_{1≤i,j≤n}, where *A*_{ij} is the strength of the link from node *j* to node *i*. We consider the network dynamics governed by

[1] $\dot{\mathbf{x}}_{i} = \mathbf{F}(\mathbf{x}_{i}) + \tilde{\varepsilon}\sum_{j=1}^{n} A_{ij}\,[\mathbf{H}(\mathbf{x}_{j}) - \mathbf{H}(\mathbf{x}_{i})], \quad i = 1,\ldots,n,$

where **x**_{i} is the state vector of the *i*th dynamical unit, **F** represents the dynamics of an isolated unit, **H**(**x**_{j}) is the signal that the *j*th unit sends to other units (28), and $\tilde{\varepsilon} = \varepsilon/\bar{d}$ is the global coupling strength *ε* ≥ 0 normalized by the average coupling strength per node, $\bar{d} = \frac{1}{n}\sum_{i,j} A_{ij}$. As a result of this normalization, the dynamics for a given *ε* is invariant under scaling of *A* by a constant, which does not change the network structure. This system has served as a workhorse model for studying synchronization because it allows analysis of the network influence without detailed specification of the properties of the dynamical units. For example, using a stability function Λ(·) that is independent of the network structure (7, 28, 29, 30), the condition for a synchronous state **x**_{1}(*t*) = ⋯ = **x**_{n}(*t*) = **s**(*t*) to be linearly stable is $\Lambda(\tilde{\varepsilon}\lambda_{i}) < 0$ for *i* = 2,…,*n*, where *λ*_{2},…,*λ*_{n} are the nonidentically zero eigenvalues of the Laplacian matrix *L* defined by $L_{ij} = \delta_{ij}\sum_{k} A_{ik} - A_{ij}$ (see *Materials and Methods*). Thus, the smaller the normalized spread of the eigenvalues in the complex plane, which we quantify using

[2] $\sigma^{2} = \frac{1}{(n-1)\,\bar{\lambda}^{2}} \sum_{i=2}^{n} |\lambda_{i} - \bar{\lambda}|^{2}, \quad \bar{\lambda} := \frac{1}{n-1}\sum_{i=2}^{n} \lambda_{i},$

the more synchronizable the network will generally be. Another measure of synchronizability is the coupling cost $K = \tilde{\varepsilon}^{*}\sum_{i,j} A_{ij}$ at the synchronization threshold $\tilde{\varepsilon}^{*}$, whose minimization is equivalent to the condition

[3] $\lambda_{2} = \lambda_{3} = \cdots = \lambda_{n} \;(= \bar{\lambda}),$

which is also equivalent to the condition *σ* = 0. This condition constrains all the eigenvalues to be real, even though the networks can be directed and directed networks have complex eigenvalues in general. 
Condition **3** is also equivalent to the maximization of the range of the normalized coupling strength $\tilde{\varepsilon}$ that allows for stable synchronization, as well as to the maximization of dynamical robustness, in that it can achieve the fastest exponential convergence to synchronization (see *Materials and Methods*). [This may relate, for example, to the finding that a heterogeneous distribution of links, which tends to make the distribution of *λ*_{i}’s heterogeneous, leads to longer transients to synchronization in ecological network models with strong couplings (31).] We emphasize that the equivalence of these conditions holds for a wide variety of possible stability functions, including those for which the region defined by Λ(·) < 0 may be finite or semi-infinite, or may have multiple disconnected components. Having the maximum range, minimum coupling cost, and maximum dynamical robustness, the networks satisfying Eq. **3** are called the optimal networks for synchronization. Similar results can be derived for networks of nonidentical units, in which the functions **F** and **H** are possibly different for different nodes, and this more general case will be discussed below.
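As a concrete illustration of these quantities, the spread *σ* of Eq. **2** and the optimality condition **3** can be checked numerically. The following sketch is our own (function names are illustrative, not from the paper); it compares a fully connected network, whose nonzero Laplacian eigenvalues all equal *n*, with an undirected ring, whose eigenvalues are spread out:

```python
import numpy as np

def laplacian(A):
    # L_ij = delta_ij * sum_k A_ik - A_ij, so every row sums to zero
    return np.diag(A.sum(axis=1)) - A

def sigma(A):
    # Normalized spread (Eq. 2): standard deviation of the n-1
    # nonidentically zero Laplacian eigenvalues divided by their mean.
    lam = np.linalg.eigvals(laplacian(A))
    lam = np.delete(lam, np.argmin(np.abs(lam)))  # drop the zero eigenvalue
    mean = lam.mean().real
    return np.sqrt(np.mean(np.abs(lam - mean) ** 2)) / mean

n = 5
A_full = np.ones((n, n)) - np.eye(n)  # fully connected: all lambda_i = n
A_ring = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)

print(sigma(A_full))  # ~0: satisfies condition 3, optimal
print(sigma(A_ring))  # > 0: eigenvalues are spread out, suboptimal
```

For the ring with *n* = 5, the nonzero eigenvalues are 2 − 2cos(2π*k*/5), giving *σ* ≈ 0.45, while the fully connected network gives *σ* = 0 up to round-off.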

### Quantized Number of Links.

A surprising consequence of having these optimal synchronization properties is the quantization of the number of links in the networks. We find, for example, that for binary interaction networks (i.e., *A*_{ij} = 0,1) satisfying condition **3**, the number of links is quantized to multiples of *n* − 1. That is,

[4] $m = q_{k} := k(n-1), \quad k = 1, 2, \ldots, n.$

This follows from the identity $\sum_{i=2}^{n}\lambda_{i} = \sum_{i,j} A_{ij} = m$, combined with the fact that condition **3** constrains the common eigenvalue $\bar{\lambda} = m/(n-1)$ further to be an integer for networks with integer-valued interactions (see *SI Text*, Section 1). Consequently, any network with *m* strictly between these quantized values must have *σ* > 0 and hence cannot be optimal. What is then the minimum *σ* for all such networks with a given *m*? We denote this minimum by *σ* = *σ*_{min}(*m*). Based on our analysis of all networks with *n* ≤ 6, we conjecture that the condition to achieve this minimum is that the Laplacian eigenvalues (counting multiplicity) have the form

[5] $\lambda_{2} = \cdots = \lambda_{n-s} = k, \quad \lambda_{n-s+1} = \cdots = \lambda_{n} = k + 1, \quad s := m - q_{k},$

where *k* is the integer satisfying *q*_{k} ≤ *m* ≤ *q*_{k+1}. Note that, analogously to Eq. **4** for *m* = *q*_{k}, this condition asserts that the eigenvalues are not only real but also integer for any *m*. This leads to our prediction that

[6] $\sigma_{\min}(m) = \frac{\sqrt{(m - q_{k})(q_{k+1} - m)}}{m},$

which is expected to be valid for binary interaction networks with an arbitrary number of nodes and links. Indeed, Fig. 2*A* shows that for 10 ≤ *n* ≤ 12, a simulated annealing procedure identified networks (blue dots) with *σ* within 10^{-3} of (but not smaller than) *σ*_{min}(*m*) predicted by Eq. **6** (black curve). Starting from the (optimal) fully connected network [with *m* = *n*(*n* − 1)] at the right end of the curve *σ*_{min}(*m*), any initial link deletion necessarily makes synchronization more difficult. Further deletions, however, can make it easier, bringing the networks back to optimal periodically as a function of *m* at the cusp points and eventually leading to (optimal) directed spanning trees with *m* = *n* − 1 (see Movie S1). 
The optimization of synchronization at these cusp points is analogous to the optimization of the dynamical range of excitable networks at the critical points of phase transitions (32). Similar cusps are also observed for the Laplacian eigenvalues of a structurally perturbed optimal network as a function of the perturbation (see *SI Text*, Section 2). Note that, although the cost *K* generally depends on *m*, it is actually independent of *m* for optimal networks of given size *n*, because the synchronization threshold $\tilde{\varepsilon}^{*}$ compensates for any change in *m*.
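The cusped curve and the quantized values *q*_{k} are easy to reproduce numerically. This sketch (our illustration; variable names are ours) evaluates the predicted *σ*_{min}(*m*) of Eq. **6** for *n* = 10 and verifies that a directed star, a spanning tree with *m* = *n* − 1, satisfies condition **3** exactly:

```python
import numpy as np

def q(k, n):
    return k * (n - 1)

def sigma_min(m, n):
    # Predicted minimum spread (Eq. 6); k is the integer with q_k <= m <= q_{k+1}
    k = m // (n - 1)
    return np.sqrt((m - q(k, n)) * (q(k + 1, n) - m)) / m

n = 10
ms = np.arange(n - 1, n * (n - 1) + 1)
vals = np.array([sigma_min(m, n) for m in ms])
print(ms[vals == 0])  # exactly the multiples of n-1: the cusp points

# Directed star: node 0 sends one link to each other node (m = n - 1).
# Its Laplacian is triangular with diagonal (0, 1, ..., 1), so the
# nonidentically zero eigenvalues are all equal to 1: sigma = 0, optimal.
A = np.zeros((n, n))
A[1:, 0] = 1
L = np.diag(A.sum(axis=1)) - A
print(np.sort(np.linalg.eigvals(L).real))  # [0, 1, 1, ..., 1]
```

Between consecutive cusps, *σ*_{min} rises and falls symmetrically, which is the sawtooth-like nonmonotonic behavior seen along the *m* axis in Fig. 2*A*.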

To reveal the intricate dependence of synchronization on the number of nodes and links, we now consider *σ*_{min} as a function of both *m* and *n* (Fig. 2*B*), based on our prediction **6**. Each point on this synchronization landscape represents *all* networks with *σ* = *σ*_{min} for a given *m* and *n*, and the number of such networks is expected to grow combinatorially with *n* (see *SI Text*, Section 3). All other networks with the same *m* and *n* are represented by points directly above that point. The evolution of one such network by rewiring links under pressure to optimize synchronizability can be visualized as a vertical descent from a point above the landscape toward a point on the landscape that minimizes *σ* (red arrow in Fig. 2*B*). The sharp valleys are observed along the lines *m* = *q*_{k} and therefore correspond to the cusp points in Fig. 2*A*. Because two adjacent points on the landscape may include networks that do not have similar structures, it is surprising that one can often move from one point to the next by a simple structural modification (see Movie S1). Indeed, moving in the direction of the *m* coordinate can be achieved by simply adding or removing a link that induces the smallest increase or largest decrease in *σ*. Along the line *m* = *q*_{k}, an optimal network can be “grown” and kept optimal by adding a node and connecting any *k* existing nodes to the new node (see *SI Text*, Section 3). The flexibility of choosing new links in this construction strongly suggests that optimal networks can have arbitrarily complex connectivity structures as the network size grows. Another interesting landscape appears when we compute the cost *K* as a function of *m* and *n* (Fig. 2*C*) based on condition **5**. 
In this landscape, optimal networks lie along the lines of discontinuity, resulting from the discontinuous change in the synchronization threshold $\tilde{\varepsilon}^{*}$ that occurs when *m* changes from *q*_{k} − 1 to *q*_{k} and defines a sawtooth function along the *m* coordinate. Note that for optimal networks the cost *K* is independent of *m*, as mentioned above, but increases linearly with *n* and can be expressed as *K* = *K*_{2}(*n* − 1), where *K*_{2} is the coupling cost for a network of two coupled units. A different but potentially related structure is the roughness of the time horizon considered in the synchronization of parallel processing units in distributed computing (33, 34).
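The growth rule just described, add a node and connect any *k* existing nodes to it, can be verified in a few lines. The sketch below is ours; it assumes, as one valid choice of seed network, the complete graph on *k* nodes, which has *m* = *k*(*k* − 1) = *q*_{k} and all nonzero eigenvalues equal to *k*, and confirms that optimality is preserved at every growth step:

```python
import numpy as np

def lap(A):
    return np.diag(A.sum(axis=1)) - A

k = 3
rng = np.random.default_rng(0)
A = np.ones((k, k)) - np.eye(k)  # complete graph on k nodes: m = k(k-1) = q_k

for _ in range(5):
    n = A.shape[0]
    A = np.pad(A, ((0, 1), (0, 1)))           # append a new, isolated node
    sources = rng.choice(n, size=k, replace=False)
    A[n, sources] = 1                          # new node receives k in-links
    lam = np.sort(np.linalg.eigvals(lap(A)).real)
    m = int(A.sum())
    # m stays on the line m = q_k = k(n-1), and all nonzero eigenvalues
    # remain equal to k, so the grown network is still optimal (sigma = 0)
    print(m == k * (A.shape[0] - 1), np.allclose(lam[1:], k))
```

The new node has in-degree *k* and no out-links, so the enlarged Laplacian is block triangular and simply acquires one extra eigenvalue equal to *k*, which is why the freedom in choosing the *k* sources never spoils optimality.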

While the presentation above focused on networks of identical units, the main results also hold true for networks of nonidentical units. Adopting the framework of ref. 35 and developing further analysis for networks of one-dimensional maps, we show in *SI Text* and Figs. S1 and S2 (Section 4) that complete synchronization is possible even for nonidentical units. Moreover, we show that this is possible only for networks satisfying Eq. **3** in addition to the condition that the Laplacian matrix *L* is diagonalizable. Since any such networks must also exhibit the same quantization expressed in Eq. **4**, we also expect cusps similar to those shown in Fig. 2. For each quantized number of links *q*_{k}, we can show that there is a network that can synchronize completely despite the heterogeneity of the dynamical units. Therefore, for both identical *and* nonidentical units, condition **5** can be used to systematically construct examples of suboptimal networks that can be made optimal by either adding or removing links.

### Stabilizing Effect of Negative Interactions.

Interestingly, the exact same quantization effect described above for binary interaction networks is also observed when we allow for negative interactions and interpret $m = \sum_{i,j} A_{ij}$ as the *net* number of links. To see this, we use a generalization of the complement network, which we define for a given constant *α* to be the network with adjacency matrix *A*^{c} given by

[7] $A^{c}_{ij} = \alpha(1 - \delta_{ij}) - A_{ij}.$

[This includes the special case *α* = 1, which for undirected unweighted networks corresponds to the previously studied case of complement graphs (36).] The transformation from a network to its complement maps *m* to *αn*(*n* − 1) − *m*, *λ*_{i} to *αn* − *λ*_{i}, *σ* to $\sigma m/[\alpha n(n-1) - m]$, and thus an optimal network to another optimal network when $\alpha > m/[n(n-1)]$ (see *Materials and Methods*). This also establishes a mapping between networks capable of complete synchronization of nonidentical units, since the Laplacian matrix for such a network remains diagonalizable under the transformation (see *SI Text*, Section 4). The condition on *α* avoids (nonsynchronizable) networks having eigenvalues with negative real part as an artifact of the transformation. We argue that this generalized complement transformation is a powerful tool in analyzing networks with negative interactions because it reduces problems involving negative interactions to those involving only positive interactions when we choose $\alpha \geq \max_{i,j} A_{ij}$.
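The spectral mapping under the generalized complement transformation can be checked directly. The sketch below is our numerical verification (variable names are ours): for a random directed binary network it confirms that every nonzero Laplacian eigenvalue *λ*_{i} is mapped to *nα* − *λ*_{i}, and that *m* is mapped to *αn*(*n* − 1) − *m*:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 6, 1.0

A = (rng.random((n, n)) < 0.5).astype(float)    # random directed binary network
np.fill_diagonal(A, 0)
Ac = alpha * (np.ones((n, n)) - np.eye(n)) - A  # generalized complement (Eq. 7)

def lap(M):
    return np.diag(M.sum(axis=1)) - M

lam = np.linalg.eigvals(lap(A))
lam_nz = np.delete(lam, np.argmin(np.abs(lam)))  # nonidentically zero eigenvalues
lamc = np.linalg.eigvals(lap(Ac))

# every n*alpha - lambda_i appears in the complement's Laplacian spectrum
ok = all(np.abs(lamc - (n * alpha - mu)).min() < 1e-8 for mu in lam_nz)
print(ok)  # True

# the total link strength maps to alpha*n*(n-1) - m
print(np.isclose(Ac.sum(), alpha * n * (n - 1) - A.sum()))  # True
```

With *α* = 1 and a binary *A*, the complement is again binary, which is the classical complement-graph case mentioned above.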

As an example of quantization with negative interactions, we consider the class of networks for which *A*_{ij} = 0, ± 1 and assume that *λ*_{2},…,*λ*_{n} have positive real parts to ensure that the network can synchronize. Mapping these networks under the complement transformation with *α* = 1, we obtain positive interaction networks with *A*_{ij} = 0,1,2. Conjecture **5** applied to the resulting networks then leads to a prediction *identical* to Eq. **6**, which we validated by simulated annealing for networks of size up to 12 (blue dots, Fig. 2*A*). Thus, all the results discussed above based on Eq. **6**, including Eq. **4** and the shape of the synchronization landscape, are predicted to be valid even when negative interactions are allowed. Whether the networks with *σ* = *σ*_{min}(*m*) actually do have negative interactions is not a priori clear because the synchronization landscape can be built entirely by using the subset of networks with only positive interactions. Our simulated annealing shows, however, that many optimal networks (i.e., with *σ* = 0) have negative interactions, as illustrated by an example in Fig. 1. In addition, complete synchronization of nonidentical units is possible in the presence of negative interactions (see *SI Text*, Section 4). These networks provide clear and systematic examples of negative interactions that improve synchronization, as removing negative interactions from any such network would in general push *m* off the quantized values (*m* = *q*_{k}) and make the network suboptimal.

The quantization of *m* and the shape of the synchronization landscape, though they were identical for the two examples above, do depend critically on the constraints imposed on the interactions. Consider, for example, the fully connected networks with interaction strengths ±1. It can be shown that the implied constraint *A*_{ij} ≠ 0 leads to a quantization different from Eq. **6**, which involves multiples of 2(*n* − 1) rather than *n* − 1,

[8] $\sigma_{\min}(m) = \frac{\sqrt{(m - q_{n-2(k+1)})(q_{n-2k} - m)}}{m}$

for *q*_{n−2(k+1)} ≤ *m* ≤ *q*_{n−2k} (red dots, Fig. 2*A*). The synchronization landscape can be drastically different in some extreme cases. On the one hand, within the widely studied class of undirected unweighted networks (corresponding to a stronger set of constraints, *A*_{ij} = *A*_{ji} and *A*_{ij} = 0,1), no network with *m* < *n*(*n* − 1) satisfies the optimality condition **3** (10, 37), which indicates that *σ*_{min}(*m*) > 0 for all *m* < *n*(*n* − 1). On the other hand, within the class of weighted networks corresponding to having no constraint on the interaction strengths, an optimal network can be constructed for any number of links ≥ *n* − 1 (e.g., the hierarchical networks in ref. 10), which implies that *σ*_{min} is identically zero in this particular case.

To demonstrate that the synchronization enhancement by negative interactions goes much beyond the realm of optimal networks, we propose a simple algorithm for assigning strength −1 to directional links in an arbitrary network with all link strengths initially equal to +1. Our strategy is based on the observation that the in-degree distribution is a main factor determining synchronizability (17–19, 38, 39), where the in-degree (or the total input strength) of node *i* is defined to be $d_{i} := \sum_{j} A_{ij}$. Because heterogeneity in the in-degree distribution tends to inhibit synchronization (17–19, 38, 40), here we use negative interactions to compensate for positive interactions and to homogenize the in-degree distribution. For a given network, we first choose randomly a node with the smallest in-degree and change the strength of each out-link of that node to −1, unless doing so makes the in-degree of the target node smaller than the mean in-degree of the original network. We keep strength +1 for the other out-links, as well as for all the in-links. Having treated this node, we repeat this process for the subnetwork of untreated nodes, considering links and hence degrees only within that subnetwork. We continue this process until all nodes are treated. Applying this algorithm to the network in Fig. 3*A*, we see that the high in-degree nodes of the initial network (those in the shaded area) receive all of the compensatory negative interactions (red arrows), reducing *σ* by nearly 35% (Fig. 3*B*). The effectiveness of the method was further validated (Fig. 3*C*) using random scale-free networks (41) generated by the standard configuration model (42), where we see a more dramatic effect for more heterogeneous networks, reducing *σ* by as much as 85%. The synchronization enhancement was indeed accompanied by the homogenization of the in-degree distribution (see *SI Text* and Fig. S3, Section 5). 
The use of negative directional interactions in our algorithm suggests that the enhancement is partly due to link directionality (see ref. 43 for an enhancement method purely based on link directionality), which generally plays an important role in synchronization (see, for example, refs. 21 and 44). However, negative strength of interactions alone can also produce similar enhancement in random scale-free networks when they are assigned to bidirectional links between hubs (see *SI Text* and Fig. S4, Section 6).
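The assignment procedure above can be summarized in code. The sketch below is our reading of the algorithm (the tie-breaking rule and the test network are our own choices, not from the paper): out-links of low in-degree nodes are flipped to −1 whenever the target's in-degree stays at or above the mean in-degree of the original network:

```python
import numpy as np

def assign_negative(A, seed=0):
    # A[i, j] is the strength of the link j -> i; the in-degree of node i
    # is the sum of row i. Operates on a copy of a 0/1 adjacency matrix.
    A = A.astype(float).copy()
    n = A.shape[0]
    mean_in = A.sum() / n             # mean in-degree of the original network
    rng = np.random.default_rng(seed)
    untreated = list(range(n))
    while untreated:
        sub = np.ix_(untreated, untreated)
        d_sub = A[sub].sum(axis=1)    # in-degrees within the untreated subnetwork
        smallest = [u for u, d in zip(untreated, d_sub) if d == d_sub.min()]
        node = int(rng.choice(smallest))
        for target in range(n):
            # flipping a +1 out-link to -1 lowers the target's in-degree by 2;
            # do it only if the in-degree stays at or above the original mean
            if A[target, node] == 1 and A[target].sum() - 2 >= mean_in:
                A[target, node] = -1.0
        untreated.remove(node)
    return A

# Heterogeneous test network (hypothetical): a hub plus sparse random links
rng = np.random.default_rng(1)
n = 20
A = (rng.random((n, n)) < 0.15).astype(float)
A[:, 0] = 1.0                         # node 0 sends a link to every other node
np.fill_diagonal(A, 0)
B = assign_negative(A)
print(A.sum(axis=1).max(), B.sum(axis=1).max())  # largest in-degree shrinks toward the mean
```

By construction the flips only reduce in-degrees that are at least two above the original mean, so the spread of the in-degree distribution can only narrow, consistent with the homogenization reported above.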

## Conclusions

Even though negative interactions and link removals by themselves tend to destabilize synchronous states, we have shown that they can compensate for other instabilities, such as those resulting from a “forbidden” number of interactions or from a heterogeneous in-degree distribution. This establishes an unexpected relation between network synchronization and recent work on metabolic networks, where locally deleterious perturbations have been found to generate similar compensatory effects that are globally beneficial in the presence of other deleterious perturbations (45, 46). For example, the removal of a metabolic reaction—or, equivalently, of its enzyme-coding gene(s)—can often be partially or totally compensated by the targeted removal of a second metabolic reaction. That is, the inactivation of specific metabolic reactions can improve the performance of defective or suboptimally operating metabolic networks, and in some cases it can even rescue cells that would be nonviable otherwise (45). Other research has shown that similar reaction inactivation occurs spontaneously for cells evolved to optimize specific metabolic objectives (47). These apparently disparate examples share the common property that the collective dynamics is controlled, and in fact enhanced, by *constraining* rather than augmenting the underlying network. Such network-based control, including the conditionally positive impact of otherwise detrimental interactions and constraints, is most likely not unique to these contexts.

In neuronal networks, for example, inhibitory interactions have been predicted to facilitate synchronous bursting (23–25, 48–50). Even more compelling, it has been established for both animal (51) and human (52, 53) cerebral cortices that the density of synapses first increases and then decreases during early brain development, suggesting a positive role played by link removal also in neuronal networks. More generally, we expect that the eigenvalue spectrum analysis that led to our conclusions will help investigate analogous behavior in yet other classes of network processes governed by spectral properties, such as epidemic spreading (54) and cascading failures (55, 56).

Taken together, the highly structured characteristics revealed by the synchronization landscape explain why the characterization of the network properties that govern synchronization has been so elusive. Numerous previous studies performed under comparable conditions have sometimes led to apparently conflicting conclusions about the role played by specific network structural properties such as randomness, betweenness centrality, and degree distribution (30). Our results indicate that at least part of these disagreements can be attributed to the sensitive dependence of the synchronization properties on the specific combination of nodes and links, as clearly illustrated by the nonmonotonic, periodic structure of cusps exhibited by the synchronization landscape. We suggest that insights provided by these findings will illuminate the design principles and evolution mechanisms of both natural and engineered networks in which synchronization is functionally important (57).

## Materials and Methods

### Synchronization Stability Analysis.

The stability analysis can be carried out using a master stability approach based on linearizing Eq. **1** around synchronous states (28). We apply a coordinate transformation to reduce the Laplacian matrix *L* to its Jordan canonical form and decompose the variational equations into components along the corresponding eigenspaces (9, 10). This leads to a master stability equation,

[9] $\dot{\mathbf{y}} = \left[D\mathbf{F}(\mathbf{s}) - \beta\, D\mathbf{H}(\mathbf{s})\right]\mathbf{y},$

where $\beta = \tilde{\varepsilon}\lambda_{i}$ for the eigenspace corresponding to *λ*_{i}. The stability function Λ(*β*) is the maximum Lyapunov exponent for the solution **y**(*t*) = **0** of Eq. **9**. The *A*_{ij} are not required to be nonnegative and, since the derivation of Λ(*β*) is based on a Jordan form, *L* is not required to be diagonalizable either (9, 10). Geometrically, the stability condition is that all the (possibly complex) numbers $\tilde{\varepsilon}\lambda_{2},\ldots,\tilde{\varepsilon}\lambda_{n}$ lie in the region Λ(*β*) < 0 of the complex plane. Because Λ(*β*) is generally positive on the left side of the imaginary axis, we consider only the networks whose Laplacian eigenvalues have a nonnegative real part, a necessary condition for complete synchronization to be possible.

### Dynamical Robustness of Optimal Networks.

For a given stability function Λ(*β*), a network satisfying condition **3** has the maximum rate of exponential convergence to synchronization among all networks. This is so under the assumption that there is a physical limit *M* to the coupling strength of individual links, $\tilde{\varepsilon} A_{ij} \leq M$, and that the stability function Λ(*β*) is continuous and monotonically increasing with respect to the distance from the real axis, which appears to hold true for all known examples of Λ(*β*) in the literature (58). From the limit on coupling strengths, it follows that the real part of $\beta = \tilde{\varepsilon}\lambda_{i}$ is bounded by a constant *Mn*(*n* − 1), which, combined with the assumption on Λ(*β*), implies that the exponential rate of convergence is completely determined by the value of Λ(*β*) in the interval [0,*Mn*(*n* − 1)] on the real axis. In this interval Λ(*β*) has a global minimum at some *β*^{∗}, and thus *r*^{∗} ≔ −Λ(*β*^{∗}) > 0 is the maximum possible rate of exponential convergence. If the network satisfies condition **3**, all perturbation eigenmodes converge at the maximum rate *r*^{∗} when we choose $\tilde{\varepsilon} = \beta^{*}/\bar{\lambda}$. In contrast, if the network violates condition **3**, there must be at least one eigenvalue *λ*_{i} such that $\tilde{\varepsilon}\lambda_{i} \neq \beta^{*}$ [excluding the exceptional situation where multiple values of $\tilde{\varepsilon}\lambda_{i}$ fall precisely on multiple global minima of Λ(*β*)], resulting in an eigenmode that converges at a slower rate $r := -\Lambda(\tilde{\varepsilon}\lambda_{i}) < r^{*}$. Therefore, although optimal networks may suffer from initially slower convergence to synchronization that is polynomial in time (9, 10), the long-term convergence rate is dominated by *r*^{∗} and is faster than for any other network. Indeed, the deviation from the synchronous state can be written as *P*(*t*)*e*^{−r∗t} for an optimal network and as *Q*(*t*)*e*^{−rt} for a suboptimal network, where *P*(*t*) and *Q*(*t*) are polynomials. 
The ratio between the deviations in the two cases is then

[10] $\frac{P(t)\,e^{-r^{*}t}}{Q(t)\,e^{-rt}} = \frac{P(t)}{Q(t)}\,e^{-(r^{*}-r)t}$

and, in particular, is less than 1 for sufficiently large *t*, implying that the deviation will eventually become smaller for the optimal network.

### Laplacian Spectrum of Generalized Complements.

We show that if the eigenvalues of *L* are 0, *λ*_{2},…,*λ*_{n} (counting multiplicity), then the eigenvalues of the Laplacian matrix *L*^{c} of the generalized complement (defined by Eq. **7**) are 0, *nα* − *λ*_{2},…,*nα* − *λ*_{n}. This result follows from the relation

[11] $\mu(L^{c}, x) = (-1)^{n-1}\,\frac{x\,\mu(L,\, n\alpha - x)}{n\alpha - x},$

where *μ*(*L*,*x*) = det(*L* − *xI*) is the characteristic polynomial of the matrix *L* and *I* is the *n* × *n* identity matrix. We derive this relation by following the strategy of the proof of Lemma 2.3 in ref. 59 for undirected networks with nonnegative link strengths, to now consider directional and weighted links, possibly with negative strengths. From the definition of the complement transformation, we can write *L* + *L*^{c} = *nαI* − *αJ*, where *J* is the *n* × *n* matrix with every entry equal to one. Using this and well-known properties of the determinant, we have

[12] $\mu(L^{c}, x) = \det\!\big((n\alpha - x)I - \alpha J - L\big) = (-1)^{n}\,\mu(L^{T} + \alpha J,\; n\alpha - x),$

where *L*^{T} denotes the transpose of *L*. Eq. **11** then follows from the fact that *μ*(*L*^{T} + *αJ*,*z*)/(*nα* − *z*) = −*μ*(*L*^{T},*z*)/*z* whenever *L* has row sums equal to zero (which is the case, because *L* is the Laplacian matrix of a network). To prove this fact, we use elementary row operations, which do not change the determinants. First, we replace the first row of the matrix *L*^{T} + *αJ* − *zI* by the sum of all rows, making each entry in this row *nα* − *z*. Next, we subtract this row multiplied by *α*/(*nα* − *z*) (which makes it a row with all entries equal to *α*) from the remaining rows, canceling out the contribution from the term *αJ* in these rows. We denote the resulting matrix by *M*_{1}. Finally, we take the matrix *L*^{T} − *zI* and replace the first row by the sum of all rows. This results in a matrix *M*_{2} with exactly the same entries as *M*_{1} except for the first row, which is a row with all entries equal to −*z* rather than *nα* − *z*. 
Dividing the first rows of *M*_{1} and *M*_{2} by *nα* − *z* and −*z*, respectively, which scales the determinants accordingly, we see that

[13] $\frac{\mu(L^{T} + \alpha J,\; z)}{n\alpha - z} = \frac{\mu(L^{T},\; z)}{-z}.$

As an immediate consequence of this result, the normalized standard deviation *σ* changes to $\sigma m/[\alpha n(n-1) - m]$ under the complement transformation, while the total link strength *m* is mapped to *αn*(*n* − 1) − *m*. In particular, this implies that the complement of any optimal network is also optimal if $\alpha > m/[n(n-1)]$.
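Relation **11** can also be checked numerically for weighted directed networks, including negative strengths. The following sketch is our verification (test point and seed are arbitrary choices of ours); it evaluates both sides of Eq. **11** at a point *x* ≠ *nα*:

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 5, 2.0

A = rng.normal(size=(n, n))          # weighted, directed, signs allowed
np.fill_diagonal(A, 0)
Ac = alpha * (np.ones((n, n)) - np.eye(n)) - A   # generalized complement

def lap(M):
    return np.diag(M.sum(axis=1)) - M

def mu(M, x):
    # characteristic polynomial mu(M, x) = det(M - x I)
    return np.linalg.det(M - x * np.eye(M.shape[0]))

x = 0.7                               # arbitrary test point with x != n*alpha
lhs = mu(lap(Ac), x)
rhs = (-1) ** (n - 1) * x * mu(lap(A), n * alpha - x) / (n * alpha - x)
print(np.isclose(lhs, rhs))  # True
```

Only the zero row sums of *L* enter the derivation, so the identity holds with negative link strengths as well, which is what makes the transformation useful for the networks of the main text.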

## Acknowledgments

The authors thank Marian Anghel for discussions on the power-grid network and Sara Solla for providing important references on the role of inhibitory neurons. This work was supported by NSF under Grant DMS-0709212.

## Footnotes

^{1}To whom correspondence may be addressed. E-mail: tnishika@clarkson.edu or motter@northwestern.edu.

Author contributions: T.N. and A.E.M. designed research; T.N. and A.E.M. performed research; T.N. analyzed data; and T.N. and A.E.M. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.0912444107/-/DCSupplemental.
