Synchronous neural activity in scale-free network models versus random network models

Communicated by Charles H. Bennett, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, May 18, 2005 (received for review October 14, 2004)
Abstract
Synchronous firing peaks at levels greatly exceeding background activity have recently been reported in neocortical tissue. A small subset of neurons is dominant in a large fraction of the peaks. To investigate whether this striking behavior can emerge from a simple model, we constructed and studied a model neural network that uses a modified Hopfield-type dynamical rule. We find that networks having a power-law (“scale-free”) node degree distribution readily generate extremely large synchronous firing peaks dominated by a small subset of nodes, whereas random (Erdös–Rényi) networks do not. This finding suggests that network topology may play an important role in determining the nature and magnitude of synchronous neural activity.
This paper explores a problem at the intersection of two lines of research. The first is the finding of large temporal fluctuations in the level of spontaneous activity within a biological neural network, particularly as shown recently by Yuste and collaborators (1, 2) but also by earlier workers. The second line of research is the finding that many real-world complex networks in engineering, computer science, biology, and social systems have topological structure quite different from randomly connected [Erdös–Rényi (ER)] networks (3), having very different distributions of node degrees (i.e., the number of edges connected to a node) and different node clustering properties. Many real-world networks appear to be approximately scale-free (SF) (4), meaning that their node degrees follow a power-law distribution. In this paper, we study relationships between the statistical properties of the network's topological structure and the dynamical behavior of the network's activity, where the network dynamics are given by a simple neurally inspired model.
Background
Synchrony Peaks in Recordings of Spontaneous Neural Activity. In studies of spontaneous synaptic activity in mouse visual cortex slices, Yuste and collaborators (1, 2, 5) have found that the fraction of cells active during a given time frame (e.g., 1 s) has extremely large fluctuations: Many more frames have synchrony peaks (i.e., a much larger-than-average fraction of active cells) than would occur if the firing of the cells were independent and at random. For example, within the ≈280 frames of data for n = 754 cells shown in figure 1d (top) of ref. 1, the frame-averaged activity level is 0.4%, yet there are seven distinct synchrony peaks with activity levels between 2.5% and 6.8%.
When cells participate in a synchrony peak, they tend to be in a suprathreshold UP state (1) that persists for an interval that varies from 60 ms to many seconds. Voltage-clamp recordings show that the time course of the membrane current (a pattern called a motif) is substantially similar during repeated firings of the same cell during multiple synchrony peaks (2). A small subset of the cells participates in many synchrony peaks, and certain sequences of three or more particular cells tend to repeat more often than would be expected if the cell activations were independent. The level of coactivation of cells within a synchrony peak tends to increase with repetition of a sequence containing that peak, suggesting a learning process. Supersequences or “songs” (2) occur in which several sequences repeatedly occur in the same order but with variable intervals between sequences. These supersequences tend to become temporally compressed upon repetition, also suggesting a learning process.
Other studies have shown that the spontaneous activity of a single neuron is nonrandom and is strongly related to the pattern of population activity and to the preferred response properties of that neuron and the surrounding population to external stimuli (6). Repeating spike sequences (exceeding random chance) have also been found in rat hippocampus during awake and sleep states, with temporal compression on repetition during slow-wave sleep (7).
A well known network model displaying repeated sequences of cell activations is the synfire chain model of Abeles (8). This model was later modified, based on experimental results (9), from a simple feedforward structure to a synfire reverberation model, having recurrent connections within a neural network in which a variety of activation patterns (similar to the synchrony peaks above) share some units in common. Such a model is similar to a Hopfield model (10) using a partially connected network.
Complex Network Structure and Topology. During the past five decades, many rigorous results have been derived for networks of the classical “random” ER type, in which N nodes are connected by L links, with the endpoints of each link being selected independently and at random. In an ER network, the number of nodes having k links approximately obeys a Poisson distribution with a peak close to 〈k〉 = 2L/N, the average node degree. At a different extreme, networks having short-range connections (e.g., whose nodes and links comprise a regular lattice of local connections) have also been extensively studied in statistical physics. However, much work over the past six years (reviewed in refs. 11 and 12) has shown that a wide variety of real-world networks have node degree distributions, topological properties, and behaviors that are quite different from those of ER and local networks.
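The ER benchmark above is easy to verify numerically. The following sketch (our own illustration; the function name and the choice n = 1,000, p = 0.006 are arbitrary) builds a G(n, p) graph and confirms that its degree histogram is sharply peaked near 〈k〉 = 2L/N = p(n – 1):

```python
import random
from collections import Counter

def er_degree_counts(n, p, seed=0):
    """Build an undirected Erdos-Renyi G(n, p) graph; return degree -> count."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each pair is linked independently with prob. p
                deg[i] += 1
                deg[j] += 1
    return Counter(deg)

counts = er_degree_counts(1000, 0.006)
mean_k = sum(k * c for k, c in counts.items()) / 1000
# The histogram is approximately Poisson, peaked close to p*(n-1) ~ 6,
# with essentially no nodes of very high degree.
```

Repeating the exercise with a heavy-tailed degree sequence, as in the SF construction described below, produces no such narrow peak.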
Particular attention has been paid to so-called small-world networks, which are characterized by a short average distance between any two nodes (a property shared by ER networks) and a large degree of node clustering (unlike ER networks). One way to realize this type of network is to specify a node degree distribution that obeys a power law, n(k) ∼ k ^{–α} (often referred to as a SF network), for α > 0 but not too large. Many diverse examples of real-world networks have been found to be approximately SF over a significant range of k values. Another way is to have two classes of links, one connecting nearby nodes and the other comprising a sparse set of long-range connections (13). Korniss et al. (14) have shown the benefit of this latter type of network, relative to near-neighbor networks, in ensuring that the spread between completion times of tasks performed on different nodes of a computer network remains bounded rather than diverging over time. Buzsáki et al. (15) note that using distinct classes of interneurons, each making connections having a different length scale, may enable a system to reduce cortical wiring length while supporting global synchrony and oscillations.
Motivation and Goals of the Present Work. In this paper, we focus on the strikingly large spontaneous fluctuations in average spiking activity found in studies such as those of Yuste and collaborators (1, 2) and on the occurrence of repeated patterns of firing of specific subsets of cells. We explore whether such large, short-lived fluctuations can arise in randomly connected model networks that obey very simple Hopfield-type dynamical rules (10) with some additional features discussed below. Within the framework of these rules, we also investigate the role of network topology in enabling the occurrence of such fluctuations. We shall see that it is difficult to produce them in ER networks, particularly as the number of nodes becomes large. In SF networks, where large hubs (high-degree nodes) are much more likely, extremely large fluctuations emerge far more readily. More generally, we explore what the observed dynamics of a network can suggest about the topology of its underlying structure.
The Model
Networks. Two types of networks are studied here. For networks of both types, self-loops and multiple directed edges from a given source to a given target are excluded. ER graphs are random graphs (3, 11, 12) in which each pair of distinct source and target nodes is connected with probability p. SF graphs have SF (power-law) node out-degree distributions in which the probability of a node i being the source of k_{i} edges falls off as P(k_{i}) ∝ k_{i} ^{–α} for large k_{i}.† The target nodes for each edge are chosen randomly; thus, the in-degree distribution is sharply peaked around its maximum value. We construct a SF network with α = 2.5 (the value used in most of our runs) as follows: For each node i, choose the out-degree k_{i} as the integer part of a real number x between 1 and N, randomly drawn from the normalized distribution proportional to x ^{–α}. Then, for each i, draw k_{i} distinct target nodes uniformly at random from among all nodes other than i and create the directed link from source node i to each target node. The resulting networks typically have an average out-degree (number of outgoing edges per node) of approximately (α – 1)/(α – 2), which equals 3 for α = 2.5.‡ To compare networks having different values of α but similar average out-degree, we use an average out-degree of 3 as our standard. Thus, to construct a SF network with α > 2.5, we first create links as specified above, then (as needed) add outgoing edges to randomly chosen nodes one at a time until the average out-degree reaches 3. [Without such additional edges, the average out-degree for α ≥ 3 would be so small that it would be difficult to maintain sufficient recurrent activation within the network (see “Dynamics” below) to generate the collective phenomena studied here.]
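The construction just described can be rendered as a short sketch (ours, not the authors' code; the function names are illustrative, and the extra edges added for α ≥ 3 to top up the average out-degree are omitted):

```python
import random

def scale_free_out_degrees(n, alpha=2.5, seed=0):
    """Out-degrees: integer part of x drawn from a density proportional to
    x**(-alpha) on [1, n], sampled by inverting the normalized CDF."""
    rng = random.Random(seed)
    degrees = []
    for _ in range(n):
        u = rng.random()
        x = (1.0 - u * (1.0 - n ** (1.0 - alpha))) ** (1.0 / (1.0 - alpha))
        degrees.append(int(x))
    return degrees

def build_sf_network(n, alpha=2.5, seed=0):
    """Directed SF network: node i sends edges to k_i distinct random targets,
    excluding self-loops and duplicate edges."""
    rng = random.Random(seed)
    targets = {}
    for i, k in enumerate(scale_free_out_degrees(n, alpha, seed)):
        pool = [j for j in range(n) if j != i]
        targets[i] = rng.sample(pool, min(k, n - 1))
    return targets

net = build_sf_network(500)
avg_out = sum(len(t) for t in net.values()) / 500
# In the untruncated large-N limit the mean out-degree is (alpha-1)/(alpha-2) = 3
# for alpha = 2.5; truncation at n and taking the integer part pull the sample
# mean somewhat lower, and single large hubs can pull it up.
```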
Dynamics. The state of the ith node is a binary variable S_{i} = 1 or 0, respectively representing an on (firing) or off state. The directed edges of the graphs represent interactions between pairs of variables. The interaction strength J_{ij} of a directed edge from node i to node j is chosen randomly from the uniform distribution over an interval from J _{min} to J _{max}, where J _{min} ≥ 0, independently for each edge. If there is no directed edge from i to j, J_{ij} = 0. A threshold value b_{i} for each node i is also chosen randomly from the uniform distribution over the interval from b _{min} to b _{max}, with b _{min} > 0. (For most runs we use b _{min} = b _{max}.) Both J and b are constant in time.
We add to the basic Hopfield model, for reasons given in Results, three elements: (i) a low probability of random endogenous firing, p _{endo}, for each node; (ii) a maximum firing duration such that a node is turned off if it has been on for all of the previous t _{max} time steps; and (iii) a refractory period such that a node that has been active and has been turned off must remain off for a total of t _{ref} consecutive time steps. The network is initialized at time t = 0 by randomly setting each S_{i} (0) = 1 (respectively, 0) with probability p _{init} (respectively, 1 – p _{init}), where p _{init} ≪ 1. At each discrete time t = 0, 1, 2,..., the weighted input to node j is defined as I_{j} (t) ≡ Σ _{i} J_{ij}S_{i} (t). The state S_{j} (t) of node j evolves with t according to the following rules. At each time step, all of the nodes are updated in random order.

1. If S_{j} (τ) = 1 for all τ = t, t – 1,..., t – t _{max} + 1, then S_{j} (t + 1) = 0 (maximum firing duration rule).

2. If S_{j} (t) = 0 and S_{j} (τ) = 1 for at least one τ satisfying (t – t _{ref} + 1) ≤ τ ≤ (t – 1), then S_{j} (t + 1) = 0 (refractory period).

3. If neither rule 1 nor rule 2 applies, then

(a) If I_{j} (t) ≥ b_{j}, then S_{j} (t + 1) = 1;

(b) If I_{j} (t) < b_{j}, then S_{j} (t + 1) = 1 (respectively, 0), with probability p _{endo} (respectively, 1 – p _{endo}).

These simple dynamics include neither the UP and DOWN states of different membrane potential (1, 2) nor any adaptation of connection strengths (“learning”) during network operation.
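A single update step under rules 1-3 might look as follows (a minimal sketch in our own notation; the adjacency-dict layout and the explicit on_run/off_run clocks are implementation choices, and nodes that have never fired should start with off_run ≥ t_ref so that the refractory rule does not apply to them):

```python
import random

def step(S, on_run, off_run, J, b, t_max=3, t_ref=15, p_endo=0.001, rng=None):
    """Advance all nodes one time step.

    S[i]       : current state (1 = firing, 0 = off)
    on_run[i]  : consecutive steps node i has been on (0 if off)
    off_run[i] : consecutive steps node i has been off (0 if on)
    J[i]       : dict mapping each target j of node i to the weight J_ij
    b[i]       : firing threshold of node i
    """
    rng = rng or random.Random(0)
    n = len(S)
    I = [0.0] * n                        # I_j(t) = sum_i J_ij * S_i(t)
    for i in range(n):
        if S[i]:
            for j, w in J[i].items():
                I[j] += w
    new_S = [0] * n
    for j in rng.sample(range(n), n):    # visit nodes in random order
        if S[j] == 1 and on_run[j] >= t_max:
            new_S[j] = 0                 # rule 1: maximum firing duration
        elif S[j] == 0 and 0 < off_run[j] < t_ref:
            new_S[j] = 0                 # rule 2: refractory period
        elif I[j] >= b[j]:
            new_S[j] = 1                 # rule 3a: threshold crossing
        else:
            new_S[j] = 1 if rng.random() < p_endo else 0  # rule 3b: endogenous
    for j in range(n):                   # advance the duration/refractory clocks
        if new_S[j] == 1:
            on_run[j] = on_run[j] + 1 if S[j] == 1 else 1
            off_run[j] = 0
        else:
            on_run[j] = 0
            off_run[j] = off_run[j] + 1
    return new_S
```

Iterating step on a network built as in “Networks” above, with the parameter values given in Results, should yield activity traces S̄(t) of the kind studied below.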
Results
Numerical Results. The simplest model of this type has neither a maximum firing duration, a refractory period, nor any random endogenous firing (t _{max} = ∞, t _{ref} = 1, and p _{endo} = 0). In this case, for ER and SF networks, the system typically reaches a fixed point state; i.e., a time after which no S_{j} (t) changes. The maximum firing duration (rule 1) is added to eliminate any fixed point having at least one node on. We are interested in dynamical behaviors in which the average activity level (the fraction of nodes firing at a given time, S̄(t)) is low (<1%), as in the experimental results of ref. 1. In this regime, the resulting dynamics for the simplest model (again, for ER and SF networks) often ends in the fixed point in which all sites are off. Random endogenous firing (rule 3b) is added to eliminate this fixed point behavior.
Finally, in raising the average ratio of excitation strength J_{ij} to threshold b_{i} to generate the rather large peaks of activity observed in ref. 1, we find, not surprisingly, that the resulting average activity level is often ≫1%. The refractory period (rule 2) is added to damp the average activity level without suppressing occasional large peaks. (In general, achieving the desired combination of low average activity and much larger peaks requires a balance between increasing and suppressing excitation.) Each of the three features added has a clear analogue in real neural networks.
For SF networks with N = 500 and N = 1,200 nodes, average activity is plotted as a function of time in Figs. 1 a and b , respectively. The parameters used are α = 2.5, p _{endo} = 0.001, J _{min} = 0, J _{max} = 4, b _{min} = b _{max} = 3.0, t _{max} = 3, t _{ref} = 15, and p _{init} = 0.002. Five thousand time steps of data for each of 20 different network realizations with these parameters are shown in Fig. 1 a and b , data for the different realizations being separated by dashed vertical lines. No hint of temporal periodicity is discernible in any of our numerical data, nor are there longrange temporal correlations between the average activities S̄(t) measured at different times t. Temporal correlations of S̄(t) fall off very rapidly, typically disappearing within five or six time steps.
Fig. 1 a and b exhibits relatively large intermittent peaks superimposed on a low background level of activity. This behavior is most sensitive to the ratio (J _{max} + J _{min})/(b _{max} + b _{min}), which can be varied by approximately ±20% without abolishing the behavior. The other parameters can be adjusted more liberally, typically within a factor of two, without affecting the qualitative behavior, provided the refractory period is chosen large enough to ensure that the background activity is small. Although there is considerable variation in peak height among the different network realizations, all share the central qualitative feature of the experimental data. Most of the realizations for N = 1,200 have maximum peak heights between 0.05 and 0.15 on a background whose typical level is <0.003, similar to ref. 1. For N = 500, the phenomenology is similar, but the maximum peak heights are, not surprisingly, somewhat larger.
For ER networks, Fig. 1 c and d shows, for N = 500 and N = 1,200, respectively, the average activities for 5,000 time steps for each of 12 runs corresponding to 12 different networks with p = 3/(N –1). Other parameters are p _{endo} = 0.001, J _{min} = 0, J _{max} = 10/3, b _{min} = b _{max} = 2.5, t _{max} = 3, t _{ref} = 1, and p _{init} = 0.02.
These values are chosen, as in the SF case above, in an attempt to achieve a high ratio of synchrony peak-to-average activity in conjunction with a low average activity. We find that even when these parameter values are optimized, the peak-to-average activity ratio is substantially smaller for ER than for SF networks. Comparison in Fig. 1 of a and b (SF) with c and d (ER) demonstrates that, although the ER networks also show intermittent peaks sitting atop a low background, the ratio of peak heights (which are lower for ER than for SF) to average background (larger for ER than for SF) is substantially smaller for ER than for SF. This difference is particularly clear in Fig. 1d (ER for N = 1,200), where the peaks are noticeably more frequent and rise less above the background than in Fig. 1b (SF). In other words, as N grows and the smoothing effects of statistical averaging grow concomitantly, it becomes progressively harder for an ER network to produce peaks that stand out well above background. Because the degrees of the largest hub nodes also grow with N (see “Analytic Results” below), however, the SF networks are affected less by this phenomenon.
An even starker contrast between the two network types is shown in Fig. 2, where histograms of total activity M(t) = Σ _{i}S_{i} (t) = NS̄(t) are displayed in log–log form. Each solid curve in Fig. 2a shows cumulated data from 5,000 time steps averaged over 20 different SF network realizations, all using the parameters of Fig. 1 a and b . Different curves correspond to different values of N, ranging from 500 to 1,200. These histograms approximately follow straight lines having a slope of –2.5, consistent with a power-law distribution of total activity,§ Q(M) ∼ M ^{–2.5}.
The dashed curves in Fig. 2a show the number of nodes with out-degree k (averaged over 20 network realizations) plotted against k on the abscissa for each value of N. The slopes of these curves approach the expected –2.5 at large k values but are slightly greater than –2.5 for smaller k values, because of the discretization of the continuous x ^{–2.5} distribution (see The Model “Networks” above).
Fig. 2b shows the corresponding data for ER networks, each curve representing cumulated data from 5,000 time steps for each of 12 different ER networks, all using the parameters of Fig. 1 c and d . Again, different curves correspond to different values of N. The clear downward curvature of these data on a log–log plot shows that the distribution Q(M) falls off faster than a power law in this case. When plotted on a semilog scale (Fig. 2c ), these same data look approximately linear, indicating that Q(M) falls off approximately exponentially in the ER case. Thus, the firing statistics are different for the two types of networks.
Fig. 2d shows data similar to those in Fig. 2a , except for α = 3.0 rather than 2.5. As in Fig. 2a , the activity histograms have a regime within which they are approximately linear with a slope close to –α. The linear regime of the histograms is somewhat narrower than in Fig. 2a , as is the linear regime of the degree distributions; the latter difference is a consequence of the extra edges added for α = 3.0 (see The Model “Networks” above).
In our SF networks, we find a relatively small subset of nodes that dominates the firing at the largest peaks. This subset consists of a few large hubs that by chance happen to be the source of particularly strong connections to other hubs and/or many other sites.
To illustrate the extent to which small numbers of nodes control the largest peaks in a particular network, we present firing statistics for a run of 25,000 time steps on a 500-node network with α = 2.5, constructed by using the same parameters as in Fig. 1 a and b . Only the peaks whose height exceeded an assigned threshold, 0.06 in this case, were considered. Over the course of the run, 18 peaks exceeded this threshold. On average, 0.26% of the nodes were active at any time.
Thirty-six nodes of the 500 fired 15 or more times during the 18 large peaks, accounting for almost 65% of the total activity in these peaks, whereas nearly 75% of the nodes failed to fire during any of the peaks. High peaks occur only when one or more large, highly connected hubs become activated, triggering a chain reaction of activity that is quite deterministic given the fixed network structure and connection strengths. There is some variation in the firing patterns because of random endogenous activation, the maximum firing duration, refractoriness, and the random order of updating of the sites, but much predictability remains in the patterns. For example, each of the 18 largest peaks in our 500-node sample network involves the firing of node 43 during the peak itself and/or in the time step just preceding it. Node 43 is a hub with a large out-degree of 41, and 16 of its target nodes are among the 36 most active sites. Large peaks apparently do not occur without its participation.
Fig. 3 shows the 36 most active nodes, plus node 43, and all of the directed edges connecting these nodes. This is a subset of the entire network, which comprises 500 nodes and 1,265 edges. To further illustrate the consistency of the peak firing patterns, we note that the large peaks persist for one or two time steps. Seven nodes, nos. 22, 37, 140, 259, 331, 359, and 440, fire on the first time step of each of the 10 two-step peaks, and five nodes, nos. 264, 316, 325, 331, and 431, fire during every one of the eight one-step peaks. A common firing sequence of these latter five nodes is as follows: Node 43 fires first, usually at the time step before the peak, which directly stimulates the firing of nodes 331, 359, and 431 and hub 128. Node 128 then activates the firing of nodes 264 and 325, whereas node 359 activates node 316. The consistency of this pattern and of similar ones for the two-step peaks is underscored by the fact that 21, 18, and 23 nodes fire in 70% or more of the first steps of the two-step peaks, the second steps of the two-step peaks, and the one-step peaks, respectively. The consistent repeating firing patterns at large peaks are reminiscent of the experimental observations (2) of precisely repeating spontaneous sequences of firing activity by specific cells.
Analytic Results: Mean Field Theory. The central results of the previous section are the observations that, for SF networks, the amplitudes of the large peaks far exceed the mean or the median activity levels, and the distribution Q(M) of total activity M declines roughly as Q(M) ∼ M ^{–α}, for α = 2.5 (Fig. 2a ) or α = 3.0 (Fig. 2d ). We now account at least approximately for these results, by estimating Q(M) and the magnitude of the statistical fluctuations in M.
To analyze the fluctuations in M, we study a mean-field version of our model wherein correlation effects, such as refractoriness, maximum firing duration, and the fact that simultaneous inputs from two or more other nodes may be required to trigger the firing of a given node, are all ignored. In particular, consider a network with a specified out-degree distribution {k_{i} }, wherein each node i that fires at time t causes a random subset containing pk_{i} of its k_{i} target nodes to fire at time t + 1. Assuming that M nodes, chosen at random from the N total nodes, fire at time t, we then calculate the probability of M′ nodes firing at time t + 1.
Suppose that the number of nodes of out-degree k is n_{k}, for 1 ≤ k ≤ N – 1. Now imagine that at a given time t, m_{k} of the n_{k} nodes with degree k fire, where 0 ≤ m_{k} ≤ n_{k} and Σ _{k}m_{k} = M ≡ mN. In the absence of any correlations, the probability of such a firing pattern is

P({m_{k}}) = [Π _{k} C(n_{k}, m_{k})]/C(N, M), [1]

where C(n, m) denotes the binomial coefficient, whereas the number of nodes M′ firing at the next time step, t + 1, is

M′({m_{k}}) = p Σ _{k} k m_{k}. [2]

In this simplified mean-field model, the only source of fluctuations in M′ is the fact that the M active nodes at time t can be distributed in many different ways among the nodes of different degrees.
The average activity of the network at time t + 1 is then the average, 〈M′〉, of M′({m_{k}}): 〈M′〉 = Σ_{{m_{k}}} P({m_{k}}) M′({m_{k}}), the sum in this equation being performed under the constraint Σ _{k}m_{k} = M. Given Eq. 2, the calculation of 〈M′〉 reduces to the calculation of 〈m_{k}〉, which is readily seen to be mn_{k}. Hence 〈M′〉 = pm Σ _{k} k n_{k}.
From Eq. 2, an expression for 〈M′^{2}〉 and, hence, for the fluctuations in M′, namely, ΔM′^{2} ≡ 〈M′^{2}〉 – 〈M′〉^{2}, can be written in similar fashion:

〈M′^{2}〉 = p ^{2} Σ _{k,r} k r 〈m_{k}m_{r}〉. [3]

Evaluation of 〈m_{k}m_{r}〉 is straightforward, yielding 〈m_{k}m_{r}〉 _{k} _{≠} _{r} = n_{k}n_{r}M(M – 1)/[N(N – 1)] and 〈m_{k} ^{2}〉 = (n_{k}M/N)[1 + (n_{k} – 1)(M – 1)/(N – 1)], which gives a formula for the strength of the fluctuations of M′ relative to its mean:

ΔM′^{2}/〈M′〉^{2} = [(N – M)/(M(N – 1))] × [N Σ _{k} k ^{2}n_{k} – (Σ _{k} k n_{k})^{2}]/(Σ _{k} k n_{k})^{2}. [4]

In this formula, the effect of out-degree distribution on the relative size of activity fluctuations is manifest. In ER networks, wherein the n_{k} values fall off exponentially fast as k increases beyond its most probable value, the second fraction on the right side of Eq. 4 remains of O(1) as N becomes large, whereupon ΔM′^{2}/〈M′〉^{2} decreases as 1/N, the familiar consequence of the central limit theorem for typical fluctuations. For power-law distributions, n_{k} ∼ 1/k ^{α}, the same result is obtained provided α > 3. For 2 < α < 3, however, the second fraction in Eq. 4 increases as N ^{3–α} for large N, producing anomalously large fluctuations in large systems. [The origin of this effect can be traced to the well known divergence (12) of the second moment of the distribution 1/k ^{α} for α < 3.] This result makes quantitative the clear intuitive sense that heavy-tailed distributions, such as power laws with α sufficiently small, should admit anomalously large fluctuations in activity. Note, however, that two-point correlations are insufficient to completely explain the large peak-to-average activity ratio.

Fig. 4 presents log–log plots of the relative fluctuations, ΔM′^{2}/〈M′〉^{2}, calculated numerically for one set of ER networks and two sets of SF networks having different values of α. The diamond and star-shaped markers show data for the ER networks and the SF networks with α = 3.5, respectively. These data show reasonable agreement with the predicted 1/N dependence, given the limited range of N studied.
The square markers represent a set of SF networks with α = 2.5, for which our analysis predicts an N ^{–0.5} dependence. Although there is some evidence of a weaker than 1/N falloff, the data for this case are too noisy to confirm or rule out agreement with the meanfield analysis.
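The mean-field prediction can be spot-checked numerically. The sketch below (our own construction) compares a Monte Carlo estimate of ΔM′²/〈M′〉² with the closed form implied by the hypergeometric moments 〈m_{k}m_{r}〉 and 〈m_{k}²〉 quoted above; the ratio is independent of p, and the toy heavy-tailed degree sequence is arbitrary:

```python
import random

def analytic_rel_var(deg, M):
    """Relative variance of M' implied by the hypergeometric moments of the
    mean-field model (note the result is independent of p)."""
    N = len(deg)
    s1 = sum(deg)                      # sum_k k n_k
    s2 = sum(k * k for k in deg)       # sum_k k^2 n_k
    return (N - M) / (M * (N - 1)) * (N * s2 - s1 * s1) / (s1 * s1)

def mc_rel_var(deg, M, p=0.1, trials=20000, seed=1):
    """Monte Carlo: fire M distinct random nodes; M' = p * (sum of their degrees)."""
    rng = random.Random(seed)
    vals = [p * sum(rng.sample(deg, M)) for _ in range(trials)]
    mean = sum(vals) / trials
    var = sum((v - mean) ** 2 for v in vals) / trials
    return var / mean ** 2

deg = [1] * 190 + [20] * 8 + [100] * 2   # toy sequence: a few hubs among many leaves
ana = analytic_rel_var(deg, M=5)
mc = mc_rel_var(deg, M=5)
# mc and ana should agree to within Monte Carlo error (a few percent here)
```

Replacing the toy sequence with degrees drawn from k^(–2.5) and sweeping N is one direct way to probe the predicted N^(–0.5) falloff.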
It remains to rationalize the power-law histogram displayed in Fig. 2 a and d . Combining Eqs. 1 and 2 gives an expression for the distribution, Q(M′), of values of M′ produced by M sites firing at the previous time step:

Q(M′) = Σ_{{m_{k}}} P({m_{k}}) δ(M′ – p Σ _{k} k m_{k}). [5]

Here, Σ_{{m_{k}}} means the sum from 0 to n_{k} of each m_{k} with 1 ≤ k ≤ N – 1. Because the average activity of the networks of interest is low (typically <0.5%), we can approximate the sums in Eq. 5 by taking m_{k} to be either 0 or 1 for all k, whereupon C(n_{k}, m_{k}) is either 1 or n_{k}, respectively. Up to a multiplicative factor, Q(M′) can then be approximated as

Q(M′) ∼ ∫ dk _{1} ··· dk _{M} n_{k1} ··· n_{kM} δ(U – k _{1} – ··· – k _{M}),

where U ≡ M′/p. For SF networks, n_{k} ∼ k ^{–α}, so the U dependence of this integral for large U can be extracted, yielding Q(M′) ∼ (M′)^{–α} for any M. It is simple to check this result explicitly in the low-activity limits M = 1 and M = 2. These limits are actually quite realistic for the smallest system sizes, N ∼ 500–600, in ref. 1 and in our simulations.
For α = 2.5 and α = 3.0, the result Q(M′) ∼ (M′)^{–α} is in reasonably good agreement with the numerical results (Fig. 2 a and d , respectively) for our full model. Activity histogram data for α = 3.5 (data not shown) are also consistent with Q(M′) ∼ (M′)^{–3.5}, although the agreement is less convincing than for α = 2.5 and α = 3.0 because the power-law regime of the histograms (and of the degree distributions) narrows as α increases. Larger networks would therefore be required to provide compelling confirmation of Q(M′) ∼ (M′)^{–α} for α as large as 3.5. Because the mean-field analysis leading to the result Q(M′) ∼ (M′)^{–α} involved considerable simplification of the dynamics, agreement between this analysis and the numerical results suggests that the network structure itself, rather than the details of the dynamics, is responsible for producing the anomalously large peaks.
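The M = 1 limit mentioned above is easy to check by sampling: a single firing node gives M′ = pk, so Q(M′) inherits the k^(–α) tail of the out-degree distribution. In this sketch (our own check, with an arbitrary sample size and cutoff), the exponent is recovered from two complementary-cumulative tail counts, since P(k ≥ K) ∼ K^(1–α):

```python
import math
import random

def sample_degree(n, alpha, rng):
    """One inverse-transform draw from the truncated power law used for out-degrees."""
    u = rng.random()
    return int((1.0 - u * (1.0 - n ** (1.0 - alpha))) ** (1.0 / (1.0 - alpha)))

rng = random.Random(3)
alpha, n = 2.5, 10_000
draws = [sample_degree(n, alpha, rng) for _ in range(200_000)]
# P(k >= K) ~ K**(1 - alpha), so comparing two tail counts recovers alpha;
# tail counts are far less noisy than individual histogram bins.
tail10 = sum(1 for d in draws if d >= 10)
tail40 = sum(1 for d in draws if d >= 40)
alpha_hat = 1.0 - math.log(tail40 / tail10) / math.log(40 / 10)
```

With these settings alpha_hat lands close to the input α = 2.5, mirroring the agreement between the activity histograms and the degree distributions in Fig. 2a.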
Discussion and Conclusions
We have shown that a neural network model with very simple dynamical rules is able to mimic the striking phenomenology recently found experimentally in biological neural networks (1, 2). We also found that the network's type (specifically, its distribution of node degrees) has a significant effect on its dynamical behavior and on its ability to simulate the experimental results. Our model generated two distinctive behaviors reported in refs. 1 and 2:

The occurrence of intermittent and randomly occurring extreme fluctuations of activity. In our simulations, the ratio of the synchronous activity at peaks to the average activity is of the order of 40 for SF networks (Fig. 1 a and b ). In ER networks, the mean activity is larger and the peak heights are lower, producing a ratio that is typically of the order of five for N = 1,200 (Fig. 1d ) for the most favorable parameter choices we could find.

A small subset of nodes dominates the activity of the synchronous peaks, with the same nodes being activated repeatedly and often in the same sequence during a large fraction of the peaks. This finding, as depicted in Fig. 3, is related to the finding of “hot spots” in other types of networks, such as the high flux backbone found in metabolic networks (see ref. 16 and references therein), with the proviso that our subgraph is generated by identifying nodes that are most active during the synchronous peaks, rather than nodes that have the greatest activity when averaged over all time steps.
Some important features of the experiments are absent from our model because of the intentional simplicity of the model's dynamical rules:

The UP and DOWN states (1), in which baseline membrane polarization is maintained at either of two distinct levels for periods from 50 ms to seconds, represent an intermediate-term state memory that is absent from the model. It is therefore not surprising that the synchronous peaks are found experimentally to grow and decay over periods of seconds, whereas those in the model last for only one or a few time steps.

There is no spatial structure in our model, so that findings regarding the presence of spatial organization (columnar, clustered, or laminar) of activity within some of the synchronous peaks (1) have no parallel here.

Learning is absent from the model (connection weights and network structure are both static), so the model does not exhibit such adaptive phenomena as the time compression of supersequences of cell firings upon repetition (1) and the time compression (during sleep) found by Nádasdy et al. (7).
Regarding the UP and DOWN states, one could modify our model to include a threshold value b_{i} (for each node i) that can assume either of two values, depending on some measure of the node's recent history. It is an open question whether some form of dependency of b_{i} on history (or some other modification of the dynamics) would be able to robustly generate within an ER network the very large ratios of peaktoaverage activity that are found experimentally and in our model for SF networks. The basic issue is whether such changes in the dynamical rules can produce temporarily an effective subnetwork that mimics the desired behaviors that we have found in a static SF network. For the same subset of nodes to repeatedly participate in synchronous peaks, it would also be necessary for the dynamics (which would include memory over intermediate time scales corresponding to, e.g., 50 ms to seconds) to reactivate substantially the same subnetwork after a much longer interval (e.g., 50–200 seconds).
Our results suggest the value of experimental studies concerning several points. (i) Suppose one constructs a histogram showing the number of time steps (e.g., the number of frames of an imaging sequence), F(x), at which a fraction x of the cells is active. What is the functional form of F(x)? (ii) Is there anatomical evidence for hubs (cells that are connected to many other cells) in the cortical tissue for which extremely large synchrony peaks are observed, and to what extent do these hubs dominate the peak activity? (iii) How does the ratio of peak to average activity scale as a function of N, the number of cells in the slice preparation?
Although our results are limited to a study of random and SF networks, the latter (for sufficiently small exponent) are a special case of networks having "heavy-tailed" node degree distributions, wherein the probability of a node having high degree k falls off much more slowly with k than for an ER network. Because the presence of hubs in the SF networks is a key to producing the high ratio of peak-to-average activity in our model, we expect that this high ratio would also emerge in networks that are more generally heavy-tailed, even when not SF. This group would include, as a simple case of interest, networks that have two distinct subpopulations of nodes with very different average node degrees.
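The contrast among these degree distributions can be made concrete. The sketch below draws node degrees directly from three distributions (ER-like binomial, power-law, and a two-subpopulation mixture); the parameter values, and the shortcut of sampling degrees rather than constructing actual networks, are illustrative assumptions.

```python
import random

random.seed(1)
N = 500

# ER-like degrees: binomial(N - 1, p) with mean degree ~ 3.
p = 3.0 / (N - 1)
er = [sum(random.random() < p for _ in range(N - 1)) for _ in range(N)]

# SF-like degrees: sample k in [2, N) with P(k) proportional to k**-alpha.
alpha = 2.5
ks = list(range(2, N))
wts = [k ** -alpha for k in ks]
sf = random.choices(ks, weights=wts, k=N)

# Simple heavy-tailed alternative that is not SF: two subpopulations
# with very different average degrees (a few hub candidates among
# many low-degree nodes).
mix = [50 if random.random() < 0.02 else 3 for _ in range(N)]

for name, deg in [("ER", er), ("SF", sf), ("two-pop", mix)]:
    print(name, "max/mean degree ratio:", max(deg) / (sum(deg) / len(deg)))
```

The power-law and two-subpopulation samples place far more probability on high-degree hubs than the ER sample does, which is the property conjectured above to drive the high peak-to-average ratio.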
Our discussion has emphasized the increase in fluctuations of average activity that occurs in a SF (vs. ER) network. Another way to view this increase in fluctuations is as an increase in the occurrence of synchronous activity within that portion of the network that dominates the activity. In other applications, e.g., the parallel task processor network considered by Korniss et al. (14), the use of a Watts–Strogatz small-world (vs. local lattice) network decreases the fluctuations in (i.e., increases the synchrony among) task completion times. Thus, in very different dynamical contexts, the use of a small-world (in our case, SF) network increases a measure of firing synchrony. These results suggest the following questions: What is the potential utility (or risk, depending on the application) of large dynamical fluctuations in average activity or of synchronous firing (intermittent, in our case) that may be inherent in heavy-tailed networks, both in neurobiology and in other contexts? And what conditions, in addition to network topology, can significantly affect synchronous firing activity in a variety of network models?
Consideration of these questions will likely produce subtleties and surprises. For example, for the seemingly related problem of the global synchronization of oscillators situated on the nodes of a network, it has been found (17, 18) that the extent of synchronicity is not simply related to the node degree distribution, but instead depends on the properties of a diffusion process on the network, which involves communication along all paths between a pair of nodes. This process is in turn influenced by how the coupling strengths are normalized with respect to node degree, whether the couplings are symmetric, and other considerations. A comprehensive understanding of the fundamental determinants of (locally or globally) synchronized network behavior, one that spans a variety of different dynamical models, remains to be achieved.
Acknowledgments
We thank Drs. Irina Rish and Alina Beygelzimer for discussions on complex network behavior, Dr. Roger Traub for discussions regarding neuroscientific issues, and Dr. Aaron Kershenbaum for assisting us with pajek network display software.
Footnotes

↵ * To whom correspondence should be addressed at: IBM Thomas J. Watson Research Center, 1101 Kitchawan Road and Route 134, P.O. Box 218, Yorktown Heights, NY 10598. Email: linsker@us.ibm.com.

Author contributions: G.G. and R.L. designed research; G.G. and R.L. performed research; G.G. and R.L. analyzed data; and G.G. and R.L. wrote the paper.

Abbreviations: ER, Erdös–Rényi; SF, scale-free.

↵ † It is common in models and in realworld networks for the degree distribution to deviate from a strict power law at small k. See, e.g., equation 7.10 of ref. 12 for the Barabási–Albert preferential attachment model and figure 3.2 of ref. 12 for realworld networks.

↵ ‡ For the 20 SF networks of Fig. 1a, which have α = 2.5 and N = 500 nodes, the average out-degree varied from 2.1 to 3.5. Large fluctuations are expected, because the fluctuation in node degree diverges with N when α < 3.

↵ § Note, however, that it is difficult to conclude with certainty that Q(M) falls off as a power of M, given the limited range of M and the similar appearance of powerlaw and, e.g., lognormal distributions.
 Copyright © 2005, The National Academy of Sciences
References
Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, I., Ferster, D. & Yuste, R. (2004) Science 304, 559–564. pmid:15105494
Erdös, P. & Rényi, A. (1959) Publ. Math. 6, 290–297.
Barabási, A.-L. & Albert, R. (1999) Science 286, 509–512. pmid:10521342
Tsodyks, M., Kenet, T., Grinvald, A. & Arieli, A. (1999) Science 286, 1943–1946. pmid:10583955
Nádasdy, Z., Hirase, H., Czurkó, A., Csicsvari, J. & Buzsáki, G. (1999) J. Neurosci. 19, 9497–9507. pmid:10531452
Abeles, M. (1991) Corticonics: Neural Circuits of the Cerebral Cortex (Cambridge Univ. Press, Cambridge, U.K.).
Abeles, M., Bergman, H., Margalit, E. & Vaadia, E. (1993) J. Neurophysiol. 70, 1629–1638.
Hopfield, J. J. (1982) Proc. Natl. Acad. Sci. USA 79, 2554–2558. pmid:6953413
Korniss, G., Novotny, M. A., Guclu, H., Toroczkai, Z. & Rikvold, P. A. (2003) Science 299, 677–679. pmid:12560543