Coarse-grained analysis of stochasticity-induced switching between collective motion states

Edited by Simon A. Levin, Princeton University, Princeton, NJ, and approved February 15, 2007 (received for review September 20, 2006)
Abstract
A single animal group can display different types of collective motion at different times. For a one-dimensional individual-based model of self-organizing group formation, we show that repeated switching between distinct ordered collective states can occur entirely because of stochastic effects. We introduce a framework for the coarse-grained, computer-assisted analysis of such stochasticity-induced switching in animal groups. This involves the characterization of the behavior of the system with a single dynamically meaningful “coarse observable” whose dynamics are described by an effective Fokker–Planck equation. A “lifting” procedure is presented, which enables efficient estimation of the necessary macroscopic quantities for this description through short bursts of appropriately initialized computations. This leads to the construction of an effective potential, which is used to locate metastable collective states and their parametric dependence, as well as to estimate mean switching times.
Fish, birds, and honey bees, as well as many other animal groups, display collective types of motion such as schooling, flocking, and swarming (1, 2). A single animal group can display different types of collective motion at different times, with <1 day of residence time in each state (3). Although such transitions could be due to changing behavioral rules or environmental factors, they also can occur entirely due to stochastic effects, as will be demonstrated for the model considered in this paper.
One class of biologically motivated, individual-based models for group formation, frequently used for schooling fish, abstracts animal behavior by placing zones around individuals in which they respond to others through repulsion, alignment, and/or attraction (4–11). In the three-dimensional model of Couzin et al. (10), long-time steady-state computations revealed four different types of stable collective motion in different parameter regions: swarm, torus, dynamic parallel, and highly parallel. It was also shown that by changing the quantitative features of the behavioral rules (increasing or decreasing the radius of alignment), the collective state of the school could be changed.
In ref. 10, stochasticity is incorporated by adding a small deviation to the heading of each individual obtained from the deterministic evolution algorithm. Our simulations show that if one instead considers relatively rare but substantial variations, namely that there is a small probability of each individual changing its direction substantially from that obtained from the deterministic algorithm, then for certain parameter regions multiple successive transitions between the torus and the dynamic parallel state can occur. See supporting information (SI) Fig. 7.
In this paper, we study a one-dimensional individual-based model for group formation with stochasticity included along the lines of the variation described above. This system exhibits repeated stochasticity-induced switching between distinct ordered collective motion states. This switching appears similar in nature to, but is, as we will show, different in detail from, results obtained by using other models. In those cases, the observed behaviors were collective motion transitions between “symmetry-related” states [e.g., between clockwise and counterclockwise motions for marching locusts constrained to a ring (12), or the “alternating flock” in refs. 13 and 14], stochastically driven transitions between ordered and disordered states mediated by clustering (15), mixed-phase states at phase-transition boundaries (16), or transitions that do not occur repeatedly (17).
Individual-based models are often used in the study of animal groups because they can incorporate biologically realistic behavioral responses and social interactions that might be discontinuous (e.g., characterized by thresholds or if/then rules) or stochastic in nature; they also can support complex network topologies, allow for individual variability, and enable the study of the relationship between adaptive individual behavior and emergent properties (18, 19). Most analysis of individual-based models, however, relies on long-time simulations, which can be extremely costly and difficult to interpret and analyze (2, 19). In this paper, we introduce a framework for the coarse analysis of stochasticity-induced switching between collective motion states for individual-based models. We characterize the behavior of the model with a single “coarse observable,” A(t), a scalar variable that quantifies the global structure of the school. We present computational evidence that A(t) parameterizes a one-dimensional, attracting, invariant “slow manifold,” which characterizes the long-term dynamics of the system. This suggests that we can use an effective Fokker–Planck (FP) equation to describe the dynamics of the probability distribution P(A), whose drift and diffusion coefficients are determined by the short-time evolution of the first two moments of A. We locally estimate these coefficients by developing a “lifting” procedure, which enables the initialization of brief bursts of simulations of the individual-based model at a given value of A (20). This framework allows us to construct an effective potential, thereby enabling coarse bifurcation analysis and estimation of the mean residence times in each state, without having to rely on computationally expensive “long-time” equilibrium simulations of the individual-based model.
Results
Model.
N agents with positions c_{i}(t) and unit directions v_{i}(t) = ±1, i = 1, …, N, move on the line with constant speed s. Time is partitioned into steps of size τ units, corresponding to the finite response time of the agents. At the beginning of every time step, each agent instantly updates its direction by considering the positions and directions of agents surrounding it in three nonoverlapping zones (see Fig. 1): the zone of repulsion Z_{ri}(t) = (c_{i}(t) − r_{r}, c_{i}(t) + r_{r}), the zone of orientation Z_{oi}(t) = (c_{i}(t) − r_{o}, c_{i}(t) + r_{o})\Z_{ri}(t), and the zone of attraction Z_{ai}(t) = (c_{i}(t) − r_{a}, c_{i}(t) + r_{a})\(Z_{ri}(t) ∪ Z_{oi}(t)), where r_{r} is the radius of repulsion, r_{o} is the radius of orientation, and r_{a} is the radius of attraction. We define Δr_{o} = r_{o} − r_{r} and Δr_{a} = r_{a} − r_{o}. These zones are used to define rules that abstract behavioral tendencies seen in animal groups in nature: first, animals tend to move away from those that are too close; second, if they are not so repelled, they tend to align with and feel an attraction toward their neighbors (21, 22). Specifically, if agents can be found within an individual i's zone of repulsion, then it orients its direction away from the relative directions of those within its zone of repulsion. The desired direction of agent i is then given by

v̂_{i}(t + τ) = −Σ_{j≠i: c_{j}(t)∈Z_{ri}(t)} (c_{j}(t) − c_{i}(t))/|c_{j}(t) − c_{i}(t)|.

If individual i does not find agents within its zone of repulsion, it orients its direction toward an equally weighted combination of the average orientation of itself and those within its zone of orientation, o_{i}(t), and the average relative direction of those within its zone of attraction, a_{i}(t). The desired direction of agent i is then given by

v̂_{i}(t + τ) = [o_{i}(t) + a_{i}(t)]/2.

If v̂_{i}(t + τ) = 0, the agent maintains its direction from the previous time step, so that v_{i}(t + τ) = v_{i}(t). Otherwise, the desired direction of agent i is normalized as

v_{i}(t + τ) = v̂_{i}(t + τ)/|v̂_{i}(t + τ)|.

Stochasticity is added so that each agent changes the sign of its desired direction with probability p. Each agent's position is then updated according to

c_{i}(t + τ) = c_{i}(t) + s v_{i}(t + τ) τ.

To begin a simulation, N individuals are placed randomly on the interval [−N/4, N/4] with random directions, chosen so that each agent initially interacts with at least one other agent.
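The update rules above can be sketched in Python. This is a minimal illustrative sketch, not the authors' code; the default zone radii correspond to r_r = 1, Δr_o = 0.6, Δr_a = 1, the values used later in Computational Observations.

```python
import random

def step(c, v, s=0.75, tau=0.1, r_r=1.0, r_o=1.6, r_a=2.6, p=0.001):
    """One time step of the 1-D zonal model (sketch): repulsion dominates;
    otherwise orientation and attraction are averaged with equal weight."""
    N = len(c)
    v_new = []
    for i in range(N):
        rep, ori, att = 0.0, 0.0, 0.0
        n_o, n_a = 0, 0
        repelled = False
        for j in range(N):
            if j == i:
                continue
            d = c[j] - c[i]
            if abs(d) < r_r:            # zone of repulsion
                repelled = True
                if d != 0:
                    rep -= d / abs(d)   # point away from this neighbor
            elif abs(d) < r_o:          # zone of orientation
                ori += v[j]
                n_o += 1
            elif abs(d) < r_a:          # zone of attraction
                att += d / abs(d)
                n_a += 1
        if repelled:
            desired = rep
        else:
            o_i = (v[i] + ori) / (n_o + 1)      # average orientation, incl. self
            a_i = att / n_a if n_a else 0.0     # average relative direction
            desired = 0.5 * (o_i + a_i)
        # normalize to +/-1; keep the previous direction if desired direction is 0
        d_i = v[i] if desired == 0 else (1 if desired > 0 else -1)
        if random.random() < p:                 # stochastic sign change
            d_i = -d_i
        v_new.append(d_i)
    c_new = [c[i] + s * v_new[i] * tau for i in range(N)]
    return c_new, v_new
```

With p = 0 the update is deterministic: two agents inside each other's repulsion zones move apart, while agents in the orientation zone keep a common heading.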
We observe that the model can display two metastable cohesive collective states, which we call “stationary” and “mobile.” In the stationary state, the individual dynamics are driven by repulsion. The school remains approximately stationary in time, with neighboring agents typically having opposite directions, and each agent typically changing its direction at each time step to avoid neighbors to its right or left. We interpret this ordered stationary state as a onedimensional analog of circular milling behavior. In the mobile state, the individual dynamics are driven by orientation and attraction, and the school coherently travels in the positive or negative direction. This is (onedimensional) parallel motion. For certain values of the parameters we find “stick/slip behavior,” in which the school alternates at apparently random times between the stationary and mobile states; such transitions arise from random fluctuations in the directions of individuals because of the stochasticity of the model (see Fig. 2). In what follows, we focus primarily on parameters for which both the stationary and mobile states are metastable.
Coarse Observable.
Through simulations we are led to hypothesize that the dynamics of this model can be suitably characterized by a single coarse observable, A(t), the average nearest-neighbor distance. This variable has been used in fish-schooling models as a measure of the global structure of the school (6). A(t) can distinguish between the stationary and mobile states as long as the school is not fragmented into subgroups displaying different collective dynamics. When the system is in the stationary state, typically A(t) < r_{r} (repulsion driven), and when the system is in the mobile state, typically A(t) > r_{r} (orientation and attraction driven; see Fig. 2).
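The coarse observable is computed directly from the agent positions; a minimal sketch:

```python
def avg_nearest_neighbor_distance(positions):
    """Coarse observable A: mean distance from each agent to its nearest neighbor."""
    total = 0.0
    for i, ci in enumerate(positions):
        total += min(abs(cj - ci) for j, cj in enumerate(positions) if j != i)
    return total / len(positions)
```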
Computational Observations.
For our simulations, we fix N = 100, s = 0.75, τ = 0.1, r_{r} = 1, Δr_{a} = 1, and p = 0.001, and let Δr_{o} vary. (For these parameters, when fragmentation into smaller, noninteracting subschools occurs, typically one subschool is composed of only a few agents, so that A(t) remains a good measure of the collective state.) For each value of Δr_{o} studied, data were taken from 100 runs lasting 10^{4} steps. For Δr_{o} sufficiently small, the school remains in the stationary state for the duration of our simulation. As Δr_{o} is increased to ≈0.14, the school exhibits stick/slip behavior in which it transitions at apparently random times between the stationary and mobile states. Transitions between these states typically begin with a stochastic change in direction of an agent at the edge of the school, which then “propagates” through the rest of the school (cf. refs. 13 and 14). As an example, for Δr_{o} = 0.6, the mean residence time for the stationary state is 994 steps, and for the mobile state, 509 steps. For Δr_{o} > 1.08, the school remains in the mobile state for the entire duration of our simulations.
Observing the steady-state probability distributions for various values of the parameter Δr_{o}, shown in Fig. 3, one can see the signature of the transitions between the stationary and mobile states. The probability distribution peaks at approximately A = w_{1} ≡ r_{r} − sτ, corresponding to the stationary state, and at approximately A = w_{2} ≡ r_{r} + sτ, corresponding to the mobile state, a distance ≈2sτ from the stationary state. This may be rationalized by considering the dynamics of the stationary state, which is characterized by each agent typically changing its direction at every single time step to avoid its neighbor to the right or left, with A remaining nearly constant. Thus, one expects the individuals to be spaced approximately at alternating distances d_{1} and d_{2}, where d_{1} < r_{r} and d_{2} ≈ d_{1} + 2sτ > r_{r}. When the group exhibits a transition from the stationary to the mobile state, the distance d_{2} “propagates” throughout the school so that A ≈ d_{2} in the mobile state.
In our simulations we found that the locations of the peaks of the stationary probability distribution depend somewhat on the details of the initial positions of the agents. This is a straightforward consequence of an important property of the model: in determining the desired direction of a given agent at the next time step, the only positional information used is which (if any) zone the other agents are in. Thus, agents can be moved slightly without changing their zones and hence with no change to the dynamics, but with a change to the value of A. These “neutrally stable” states have consequences for our lifting procedure described in Methods.
Probabilistic Description.
We assume that the system dynamics at the macroscopic level may be suitably characterized by our single coarse variable A(t). We therefore consider describing the evolution of the probability distribution function P(A) with an effective FP equation (cf. refs. 23–25)

∂P(A, t)/∂t = −∂[V(A)P(A, t)]/∂A + ∂²[D(A)P(A, t)]/∂A²,

where V(A) is the drift coefficient and D(A) > 0 is the diffusion coefficient (26–28). These terms are related to the short-time evolution of the first two moments of A as

V(A_{0}) = d⟨A(t; A_{0})⟩/dt |_{t=0},  D(A_{0}) = (1/2) dσ²(t; A_{0})/dt |_{t=0},  [7]

where A(t; A_{0}) denotes a trajectory initialized at A_{0} at t = 0, angular brackets denote ensemble averaging over different realizations of the trajectory, and σ² denotes the variance of A for such an ensemble. See SI Appendix for a detailed derivation. The FP equation is equivalent to the Itô stochastic differential equation

dA = V(A) dt + √(2D(A)) dW(t),  [8]

with W(t) a Wiener process (26). In the limit D(A) = 0, Eq. 8 describes the deterministic motion of A subject to the “effective potential” U(A) = −∫_{−∞}^{A} V(A′)dA′ + const. In general, an effective potential Φ(A) can be obtained from the stationary probability distribution function P_{s}(A), which satisfies the steady-state (∂/∂t = 0) FP equation. Defining

Φ(A) = −log P_{s}(A) + const.,  [9]

it follows that Φ(A) satisfies (23)

Φ(A) = −∫_{−∞}^{A} [V(A′)/D(A′)]dA′ + log D(A) + const.  [10]

When D(A) = const., this corresponds to Brownian motion of A subject to an effective potential proportional to U(A).
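The moment relations of Eq. 7 suggest a simple estimator: evolve an ensemble from a common initial value A_0 for a short time and fit the short-time slopes of the ensemble mean and variance. A minimal sketch follows; the demonstration uses a stand-in Ornstein–Uhlenbeck simulator (with known V(A) = −A and D = 0.5) rather than the schooling model itself.

```python
import math
import random

def estimate_drift_diffusion(simulate, A0, n_ens=2000, n_steps=10, dt=0.1):
    """Estimate V(A0) and D(A0) from the short-time slopes of the ensemble
    mean and variance of A(t; A0): least-squares slopes through the origin
    of <A(t)> - A0 and of var(A(t)), the latter halved (cf. Eq. 7)."""
    means = [0.0] * (n_steps + 1)
    sq = [0.0] * (n_steps + 1)
    for _ in range(n_ens):
        A = A0
        for k in range(n_steps + 1):
            means[k] += A
            sq[k] += A * A
            A = simulate(A, dt)            # advance one microscopic step
    t = [k * dt for k in range(n_steps + 1)]
    m = [mu / n_ens for mu in means]
    var = [sq[k] / n_ens - m[k] ** 2 for k in range(n_steps + 1)]
    stt = sum(ti * ti for ti in t)
    V = sum(t[k] * (m[k] - A0) for k in range(len(t))) / stt
    D = 0.5 * sum(t[k] * var[k] for k in range(len(t))) / stt
    return V, D

# demonstration on a stand-in Ornstein-Uhlenbeck process: V(A) = -A, D = 0.5
random.seed(0)
ou = lambda A, dt: A - A * dt + math.sqrt(2 * 0.5 * dt) * random.gauss(0.0, 1.0)
V, D = estimate_drift_diffusion(ou, 1.0, n_ens=4000, n_steps=10, dt=0.02)
```

The recovered V and D are close to the known values −1 and 0.5, with a small finite-time bias because the fit window is not infinitesimal.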
Effective Potential from Long-Time Simulations.
We first construct the effective potential by obtaining a stationary probability distribution function directly from an ensemble of long-time simulations and using Eq. 9 to compute the effective potential as Φ(A) = −log(P_{s}(A)) + const. An ensemble of 100 simulations of 10^{4} steps each was performed for a wide range of values of the parameter Δr_{o}. The probability distributions and corresponding effective potentials for three characteristic values of Δr_{o} are shown in Fig. 3. To study the dependence of the behavior of the system on parameters, the critical points of the effective potential were followed as Δr_{o} was varied; this is a useful practical analog of deterministic bifurcation diagrams for the stochastic case. The minima of the effective potential correspond to points on the stable branch of the bifurcation diagram, and the maxima correspond to points on the unstable branch. (In determining the maxima, we perform a quadratic fit of the effective potential between the two prominent wells. This filters out spurious small minima and maxima, which can arise because of fragmentation of the school.) The bifurcation diagram is shown in Fig. 4. Two saddle-node bifurcations are found at approximately Δr_{o} = 0.14 and Δr_{o} = 1.09, and the system appears bistable for values of Δr_{o} between these values.
We also construct the effective potential by using ensembles of long-time simulations as a database. Here A is discretized over a grid of values that appear in the database: A_{0} = 0.88 + mk, with mesh size m = 0.005 and k = 0, 1, …, 42 for Δr_{o} = 0.6. Then, for each A_{0} on the grid, V(A_{0}) and D(A_{0}) are approximated by using Eq. 7 as follows. Every appearance of A_{0} (within a certain error tolerance), as well as its subsequent values over a short fixed time interval of length t = 10 steps, is saved. The ensemble mean ⟨A(t; A_{0})⟩ and variance σ^{2}(t; A_{0}) are then computed by averaging over these short trajectories. V(A_{0}) (resp., D(A_{0})) is then estimated by taking the slope of the linear regression of ⟨A(t; A_{0})⟩ (resp., σ^{2}(t; A_{0})). Finally, Φ(A) is estimated by numerically approximating the integral in Eq. 10.
Results are shown for both methods for Δr_{o} = 0.6 in Fig. 5. The effective potential obtained by using the second approach agrees quite well with that obtained by using the first approach, which confirms that A is a good dynamic observable. These methods, however, do not offer any computational savings because we had to compile data from sufficiently extensive temporal simulations (an “equilibrium run”), which take ≈10 h for 100 trials of 10^{4} steps each running with Matlab (MathWorks, Natick, MA) on a standard workstation.
Effective Potential from Short-Time Simulations.
Recently, Kevrekidis et al. (20) developed an “equation-free” computational framework for extracting population-level information from individual-based models; the term “equation-free” arises because the population-level equations are not explicitly known (20). The approach relies on the assumption that the system state variables can be separated into a subset of fast variables and a low-dimensional subset of slow or coarse variables (in our case A), which parameterize an attracting invariant slow manifold. If a simulation is appropriately initialized at a prescribed value A_{0}, then after a short time, once all of the fast variables have equilibrated, one will in effect sample the slow manifold at A_{0}. This framework can be used to estimate on demand (without long-time simulation) the drift and diffusion terms in the effective FP equation (24, 25).
The drift and diffusion coefficients were estimated by performing an ensemble of 10^{3} simulations for each A_{0}, initialized by using the lifting procedure described in Methods. A linear fit of the data was performed after waiting a short “healing” time of 15 steps. The local drift and diffusion terms and the resulting effective potential, obtained by using the designed initializations, are in good agreement with those obtained by averaging multiple long-time simulations (see Figs. 4 and 5) and require approximately a factor of 5 less computation.
The construction of an effective potential from short-time simulations using the lifting procedure also allows one to predict mean residence times by using the Kramers formula (26), without having to observe the average time spent in each collective state from ensembles of long-time simulations. We find that such predictions are reasonably accurate: for example, for Δr_{o} = 0.6, the mean residence time predictions are 1,537 steps for the stationary state and 424 steps for the mobile state.
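For constant diffusion coefficient D, the Kramers estimate of the mean escape time from a well of the effective potential at a_min over a barrier at a_max is τ ≈ 2π [U″(a_min)|U″(a_max)|]^{−1/2} exp([U(a_max) − U(a_min)]/D). A minimal numerical sketch, assuming constant D (the quartic double-well potential used to exercise it below is illustrative only, not the model's potential):

```python
import math

def kramers_time(U, a_min, a_max, D, h=1e-4):
    """Kramers mean escape time from the well at a_min over the barrier at
    a_max, for dA = -U'(A) dt + sqrt(2 D) dW with constant D."""
    def d2(x):  # second derivative of U by central differences
        return (U(x + h) - 2 * U(x) + U(x - h)) / (h * h)
    curv_min = d2(a_min)            # > 0 at the well bottom
    curv_max = -d2(a_max)           # > 0: magnitude of curvature at the barrier top
    barrier = U(a_max) - U(a_min)   # barrier height
    return 2 * math.pi / math.sqrt(curv_min * curv_max) * math.exp(barrier / D)
```

For the illustrative potential U(A) = (A² − 1)²/4 with wells at ±1 and barrier at 0, the barrier height is 1/4, so the escape time grows exponentially as D shrinks.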
Discussion
The individual-based model of self-organizing group formation analyzed in this paper shows that animal groups can repeatedly switch between qualitatively different ordered collective motion states entirely because of stochastic effects. In particular, changes to behavioral rules or the environment are not necessary for such transitions to occur. Instead, such switching relies on the presence of at least two metastable collective motion states, and on stochasticity of appropriate type and strength to allow transitions to occur. The use of effective potentials constructed from long- as well as short-time simulations allows a powerful characterization of such switching.
Because the stochasticity that leads to switching is imposed at the level of individuals, this analysis suggests that random decisions by a small number of individuals, particularly those near the edge of the school, can change an entire population's collective behavior. The stochasticity-induced switching discussed in this paper complements recent simulations for a related model, which indicate that a small number of informed individuals can influence group dynamics (29). One can imagine that a combination of these effects might also be important: for example, a small number of individuals might spot a predator and quickly, randomly change their directions, an “informed stochasticity” that changes the entire group's motion and could allow all individuals to escape (cf. ref. 30).
The framework developed in this paper provides a useful, “equation-free,” computer-assisted approach to the analysis of emergent phenomena in individual-based aggregation models. Most analysis of individual-based models in the field of group formation has relied on costly long-time simulations, which has limited the number of individuals that can be simulated as well as the types of analysis that can realistically be performed (11). Our approach allows one to achieve a new level of understanding and quantification of biological self-organization by bridging individual-based modeling with coarse population-level analysis.
Methods
The most challenging step of equation-free computation is “lifting”: the construction of one or more states “consistent with” a prescribed value of the coarse observable, A_{0}. Because of the neutrally stable states mentioned in Computational Observations, we must take care when placing agents at a given value of A_{0} so that the system equilibrates to the desired slow manifold. From our long-time computations, we found that the distribution of distances between individuals in the stick/slip state is bimodal, with peaks at w_{1} and w_{2}. We use this information in our lifting algorithm to place individuals at a given A_{0} as follows.

1. Calculate proportions p_{1} and p_{2} of distances w_{1} and w_{2} so that p_{1}w_{1} + p_{2}w_{2} = A_{0}. If A_{0} < w_{1}, set w_{1} = A_{0}, and if A_{0} > w_{2}, set w_{2} = A_{0}.

2. Draw the appropriate proportions p_{1} and p_{2} of distances to first nearest neighbors from tight Gaussian distributions centered at w_{1} and w_{2}. Place them randomly in the vector d_{1}.

3. Draw distances to second nearest neighbors from a tight Gaussian distribution centered at w_{2}. Place them randomly in the vector d_{2}.

4. Start the first agent at some position c_{1}. Place the second agent at position c_{2} = c_{1} + d_{1}(1), the third agent at c_{3} = c_{2} + d_{2}(1), and the fourth agent at c_{4} = c_{3} + d_{1}(2). Continue this process until N agents have been positioned.

5. Let v_{i} = 1, ∀ i.
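The lifting steps above can be sketched as follows. This is a minimal sketch: the defaults w1 = r_r − sτ = 0.925 and w2 = r_r + sτ = 1.075 follow from the parameters used in the text, while `sigma`, the width of the “tight” Gaussians, is an assumed value.

```python
import random

def lift(A0, N=100, w1=0.925, w2=1.075, sigma=0.005):
    """Initialize positions and directions consistent with coarse value A0."""
    # step 1: clamp the peaks so they bracket A0, then solve
    # p1*w1 + p2*w2 = A0 with p1 + p2 = 1
    w1, w2 = min(w1, A0), max(w2, A0)
    p1 = (w2 - A0) / (w2 - w1) if w2 > w1 else 1.0
    c = [0.0]
    use_first = True                 # alternate first-/second-nearest-neighbor gaps
    while len(c) < N:
        if use_first:                # step 2: first-NN distance, mixture of w1, w2
            mu = w1 if random.random() < p1 else w2
        else:                        # step 3: second-NN distance, centered at w2
            mu = w2
        c.append(c[-1] + random.gauss(mu, sigma))   # step 4: lay out left to right
        use_first = not use_first
    return c, [1] * N                # step 5: all headings +1
```

Because each interior agent sits between one first-NN-type gap and one second-NN-type gap, its nearest-neighbor distance is (typically) the first-NN draw, so the average nearest-neighbor distance of the lifted configuration is close to A0.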
To validate this lifting procedure, we consider another coarse variable, the group polarization S(t) = |(1/N) Σ_{i} v_{i}(t)|, which has also been used in many fish-schooling models as a measure of school structure (6, 10). When S ≈ 1 (resp., S ≈ 0) the school is in the mobile (resp., stationary) state. Our lifting procedure initializes the population with S = 1, and the time scale of approach to the slow manifold is comparable whether A_{0} > r_{r} or A_{0} < r_{r} (see Fig. 6). The quick relaxation to the slow manifold for A_{0} < r_{r} occurs because the repulsion rule causes the agents to move away from each other immediately.
Acknowledgments
We thank Danny Grünbaum, Simon Levin, Iain Couzin, and Bjorn Birnir for helpful discussions related to this work. This work was supported by National Science Foundation Grant NSF0434328. J.M. also was supported by an Alfred P. Sloan Research Fellowship in Mathematics, and I.G.K. was supported by a Guggenheim Fellowship.
Footnotes
^{‡}To whom correspondence should be addressed. Email: moehlis@engineering.ucsb.edu

Author contributions: A.K., J.M., and I.G.K. performed research, analyzed data, and wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS direct submission.

This article contains supporting information online at www.pnas.org/cgi/content/full/0608270104/DC1.
Abbreviation: FP, Fokker–Planck.
 © 2007 by The National Academy of Sciences of the USA
References

Camazine S, Deneubourg JL, Franks NR, Sneyd J, Theraula G, Bonabeau E
Parrish JK, Edelstein-Keshet L
Aoki I
Czirók A, Vicsek T
Vicsek T
Parrish JK, Viscido S, Grünbaum D
Buhl J, Sumpter DJT, Couzin ID, Hale JJ, Despland E, Miller ER, Simpson SJ
Raymond JR, Evans MR
Erdmann U, Ebeling W, Mikhailov AS
Grimm V, Railsback SF
Bonabeau E
Kevrekidis IG, Gear CW, Hyman JM, Kevrekidis PG, Runborg O, Theodoropoulos T
Krause J, Ruxton GD
Partridge BL
Kopelevich DI, Panagiotopoulos AZ, Kevrekidis IG
Gardiner CW
Risken H
Coffey WT, Kalmykov YP, Waldron JT