# A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding

Edited* by Terrence J. Sejnowski, Salk Institute for Biological Studies, La Jolla, CA, and approved February 26, 2013 (received for review July 24, 2012)

## Abstract

Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability *p*(*t*) that a Gauss–Markov process will first exceed the boundary at time *t* suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent *H*. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is *H*_{c} = 1/2. For smoother boundaries, *H* > 1/2, the probability density is a continuous function of time. For rougher boundaries, *H* < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point *H*_{c} = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.

A Brownian process *W*(*t*) that starts at *t* = 0 from *W*(*t* = 0) = 0 will fluctuate up and down, eventually crossing the value 1 infinitely many times: for any given realization of the process *W*, there will be infinitely many different values of *t* for which *W*(*t*) = 1. Finding the very first such time, known as the “first passage” of the process through the boundary *B* = 1, is easier said than done; it is one of those classical problems whose concise statements conceal their difficulty (1–4). For general fluctuating random processes, the first-passage time problem is both extremely difficult (5–9) and highly relevant, owing to its manifold practical applications: it models phenomena as diverse as the onset of chemical reactions (10–14), transitions of macromolecular assemblies (15–19), time-to-failure of a device (20–22), accumulation of evidence in neural decision-making circuits (23), the “gambler’s ruin” problem in game theory (24), species extinction probabilities in ecology (25), survival probabilities of patients and disease progression (26–28), triggering of orders in the stock market (29–31), and firing of neural action potentials (32–37).
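To make the setup concrete, here is a minimal Monte Carlo sketch (our illustration, not part of the paper) of the first passage of a discretized Brownian path through the boundary *B* = 1; the step size, time horizon, and sample count are illustrative assumptions.

```python
import math
import random

def first_passage_time(boundary=1.0, dt=1e-3, t_max=50.0, rng=random):
    """Walk a standard Brownian path until it first exceeds `boundary`.

    Returns the (discretized) first-passage time, or None if the path
    has not crossed by t_max.  Discretization can only overestimate the
    true first-passage time, since crossings inside a step are missed.
    """
    w, t = 0.0, 0.0
    sigma = math.sqrt(dt)  # standard deviation of one Brownian increment
    while t < t_max:
        w += rng.gauss(0.0, sigma)
        t += dt
        if w >= boundary:
            return t
    return None

random.seed(0)
times = [first_passage_time() for _ in range(200)]
crossed = sorted(t for t in times if t is not None)
print(f"{len(crossed)}/200 paths crossed within 50 s; "
      f"median crossing time {crossed[len(crossed) // 2]:.2f} s")
```

Even this toy version shows the heavy right tail of first-passage times: a sizable fraction of paths wander away from the boundary for a long while before finally crossing.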

Much attention has been devoted to two extensions of this basic problem. One is the first passage through a stationary boundary within a complex spatial geometry, such as diffusion in porous media or complex networks. These models are used to describe foraging search patterns in ecology (38, 39), and the speed at which a node can receive and relay information in a complex network (40, 41).

The second extension is the first passage through a boundary that is a fluctuating function of time (42–44), a problem with direct application to the modeling of neural encoding of information (45, 46). This problem and its application are the subject of this paper. The connection arises as follows. The membrane voltage of a neuron fluctuates in response both to synaptic inputs and to internal noise. As soon as a threshold voltage is exceeded, a positive feedback loop triggers a chain reaction of ion channel openings, causing the neuron to generate an action potential or spike. Therefore, the generation of an action potential by a neuron involves the first passage of the fluctuating membrane voltage through the threshold. This dynamics of spike generation underlies neural coding: neurons communicate information through their electrical spiking, and the functional relation between the information being encoded and the spikes is called a “neural code.” Two important classes of neural code are the “rate codes,” in which information is only encoded in the average number of spikes per unit of time (rate) without regard to their precise temporal pattern, and the “temporal codes,” in which the precise timing of action potentials, either absolute or relative to one another, conveys information (47).

Central to the distinction between rate and temporal codes is the notion of jitter or temporal reliability. This notion originates from repeating an input again and again and aligning the resulting spikes to the onset of the stimulus. Time jittering is assessed graphically through a raster plot and quantitatively in a temporal histogram [peristimulus time histogram (PSTH)], which permits verifying the temporal accuracy with which the neuronal process repeats action potentials.

A fundamental observation is that the very same neuron may lock onto fast features of a stimulus yet show great variability when presented with a featureless, smooth stimulus (33). These two are extreme examples from a continuum—the jitter in spike times depends directly on the stimulus being presented (48).

## First Passage Through a Rough Boundary

We shall make use of a simple geometrical construction, mapping the dynamics of a neuron with an input, internal noise, and a constant threshold voltage onto a neuron with internal noise and a fluctuating threshold voltage; the construction thus maps the input onto fluctuations of the threshold. We use as our model neuron the “leaky integrate-and-fire neuron,” a simple yet widely used (36, 48–60) model of neuronal function defined by the following:

$$\frac{dV}{dt} = -\alpha V + I(t) + \xi(t), \qquad [\mathbf{1}]$$

where *V* is the membrane voltage, 1/*α* is a decay time given by the *RC* constant of the membrane, *I* is the current that the neuron receives as an input through synapses or a stimulating electrode, and *ξ* is an internal noise. When *V* first reaches a threshold value *T*, an action potential is generated, and the voltage is reset to zero. The nonlinearity of the model is concentrated in the spike generation and subsequent reset, so that between spikes we can integrate separately the effect of the input and of the noise; defining *V*_{I} and *V*_{ξ} as these separate processes, they obey the following equations:

$$\frac{dV_I}{dt} = -\alpha V_I + I(t), \qquad \frac{dV_\xi}{dt} = -\alpha V_\xi + \xi(t).$$

Because the input *I*(*t*) is fixed, we can choose to solve the *V*_{I} equation starting from *V*_{I}(0) = 0 just once, without any resets, preserving its continuity, because it has no stochastic inputs; all of the resets of the original *V* process are then carried out only on the *V*_{ξ} process. The condition of *V*(*t*) reaching the threshold *T* is then recast as *V*_{ξ} reaching the boundary *T* − *V*_{I}. In this way, we have transformed a problem with a variable input *I*(*t*) and a constant threshold *T* into a problem with constant (zero) input and a fluctuating threshold *B*(*t*) = *T* − *V*_{I}(*t*). We stress that *V*_{I} is a “frozen” function just like the original *I*(*t*): it depends only on *I*(*t*) and not at all on the process *V*_{ξ}, because *ξ* does not appear in its defining equation.
The reset operation

$$V(t^{+}) = 0 \quad \text{when} \quad V(t) = T$$

becomes

$$V_{\xi}(t^{+}) = B(t) - T \quad \text{when} \quad V_{\xi}(t) = B(t),$$

or, in other words, upon touching the boundary *B*(*t*), the process *V*_{ξ} instantaneously jumps back by *T* units, to *B*(*t*) − *T*.
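The decomposition can be checked numerically. The sketch below (our illustration; the leak rate, threshold, and sinusoidal input are assumptions, not the paper’s parameters) Euler-integrates the full process *V* and the split pair *V*_{I}, *V*_{ξ}, and confirms that *V* reaching *T* coincides with *V*_{ξ} reaching the fluctuating boundary *B* = *T* − *V*_{I}:

```python
import math
import random

# Euler integration of dV/dt = -alpha*V + I(t) + xi(t), split so that
# V = V_I + V_xi; by linearity, "V reaches T" is the same event as
# "V_xi reaches B(t) = T - V_I(t)".  Parameters are illustrative only.
alpha, T, dt, n = 0.1, 1.0, 1e-3, 20000
random.seed(1)
noise = [random.gauss(0.0, 1.0) * math.sqrt(dt) for _ in range(n)]
I = lambda t: 0.5 + 0.4 * math.sin(2 * math.pi * t)  # assumed input

v = v_i = v_xi = 0.0
cross_full = cross_split = None
for k in range(n):
    t = k * dt
    v    += (-alpha * v    + I(t)) * dt + noise[k]   # full process
    v_i  += (-alpha * v_i  + I(t)) * dt              # frozen input part
    v_xi += (-alpha * v_xi) * dt + noise[k]          # noise part
    if cross_full is None and v >= T:                # constant threshold
        cross_full = t
    if cross_split is None and v_xi >= T - v_i:      # fluctuating boundary
        cross_split = t

print(cross_full, cross_split)  # equal up to floating-point roundoff
```

Because both formulations see the identical noise realization, the two crossing times agree step for step; only floating-point associativity can separate them.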

These considerations lead us to examine the problem of the first-passage time through a fluctuating threshold, and more generally that of “recurring” first passages through a fluctuating threshold. In the recurring problem, as we have formulated it, upon touching the boundary the walker is immediately teleported back, in our case an amount *T*, and keeps going until the next passage. It should be noted that this recurring-passage problem will lead to distributions that, naively, one would expect to be smoother, because the probability distribution for the second spike consists of passages starting, not from a fixed starting point, but from the first passages of the first spike.

To develop some intuition about the problem, we are going to break it up into two parts, a “geometrical optics” part, in which most first passages can be accounted for by simple “visibility” considerations, and a “diffractive” correction in which we take into account that random walkers can turn around corners. The geometrical part is simple: most first passages are generated by the walker running into a hard-to-avoid obstacle, as shown in Fig. 1*A*. The intuition is that the walkers are moving left to right, rising onto a ceiling from which features are hanging, and as the walkers rise they collide with some feature. The problem is thus twice symmetry-broken: what matters are local minima of the boundary, not the maxima, which are hard to get into; and the walkers only spontaneously run onto the left flank of a local minimum. Therefore, a good first-order approximation follows from observing that most of the first passages occur on the left flanks of local minima, and deeper local minima cast “shadows” on subsequent shallower minima.

However, there is a finite probability that a walker may narrowly avoid a local minimum and pass just under it, only to rapidly rise afterward and hit the right rising flank of the barrier, as shown in Fig. 1*C*. This is, effectively, a race between the boundary and the walker: if the walker can rise far faster than the boundary, then there is some probability of passage right of the minimum. However, if the boundary rises faster than a walker can catch up with, then the probability of passage right of the minimum can be exponentially small. Let us consider a local minimum of the barrier *B*(*t*) at time *t*_{0} of the following form:

$$B(t) = B(t_0) + C\,|t - t_0|^{H},$$

and consider a walker that has just narrowly missed the minimum by an amount *ε*: *W*(*t*_{0}) = *B*(*t*_{0}) − *ε*. The probability of the process to be at value *W* at time *t* > *t*_{0} is, to leading order,

$$P(W, t) \approx \frac{1}{\sqrt{4\pi\Gamma\,(t - t_0)}}\,\exp\!\left[-\frac{\bigl(W - W(t_0)\bigr)^2}{4\Gamma\,(t - t_0)}\right],$$

where Γ is the diffusion constant of *V*_{ξ}, and thus the probability of arriving at the barrier at time *t* is approximately the following:

$$p(t) \propto \exp\!\left[-\frac{\bigl(C\,\Delta t^{H} + \varepsilon\bigr)^2}{4\Gamma\,\Delta t}\right], \qquad \Delta t = t - t_0.$$

When *H* < 1/2, this expression has an essential singularity and has a value that is singular-exponentially small for small times. In fact, the probability and all of its derivatives are zero at *t*_{0}. For instance, consider a barrier whose flank to the right of the local minimum rises like *C* Δ*t*^{1/4}. As the fourth root in the barrier rises much more rapidly than the square root in the walker, the probability of hitting the barrier after the minimum looks like exp(−*C*²/(4Γ√Δ*t*)), a function that has an essential singularity at 0: the function as well as all of its derivatives approach 0 as Δ*t* → 0^{+}.
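The flatness of this essential singularity is easy to verify numerically: a function of the form exp(−*c*/√Δ*t*) vanishes faster than any power of Δ*t*, so dividing it by Δ*t*^{k} still yields a vanishing quantity for every *k*. The constants below are illustrative.

```python
import math

# f(dt) = exp(-c / sqrt(dt)) models the crossing probability just to the
# right of a rough (H < 1/2) minimum.  It beats every power law at 0+:
# f(dt) / dt**k stays small for any fixed k as dt shrinks.
c, dt = 1.0, 1e-4
for k in (1, 2, 5, 10):
    ratio = math.exp(-c / math.sqrt(dt)) / dt**k
    print(f"k = {k:2d}:  f(dt)/dt**k = {ratio:.3e}")
```

At Δ*t* = 10^{−4} the factor exp(−1/√Δ*t*) = exp(−100) ≈ 10^{−44} already overwhelms even Δ*t*^{10} = 10^{−40}, which is the numerical face of “all derivatives vanish at the minimum.”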

The parameter *H* we described above, which is called the Hölder exponent of the function, quantifies the ability of the barrier to, locally, rise faster or slower than a random walk. More formally, a function *f*(*t*) is said to be *H*-Hölder continuous if it satisfies |*f*(*t*) − *f*(*s*)| ≤ *C*|*t* − *s*|^{H} for some constant *C*; the roughness exponent *H* of the function is the largest possible value of *H* for which the function satisfies a Hölder condition.
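For a sampled path, the roughness exponent can be estimated from how the mean absolute increment scales with the lag, E|*f*(*t* + *τ*) − *f*(*t*)| ∼ *τ*^{H}. The helper below is a sketch we add for illustration (not the authors’ code); it recovers *H* ≈ 1/2 for a discretized Brownian path.

```python
import math
import random

def roughness_exponent(path, dt, lags=(1, 2, 4, 8, 16)):
    """Estimate the Hoelder/roughness exponent H of a sampled path from
    the scaling law  E|f(t + tau) - f(t)| ~ tau**H  (log-log slope)."""
    xs, ys = [], []
    for lag in lags:
        incs = [abs(path[i + lag] - path[i]) for i in range(len(path) - lag)]
        xs.append(math.log(lag * dt))
        ys.append(math.log(sum(incs) / len(incs)))
    # least-squares slope of log mean-increment vs log lag
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(2)
dt = 1e-4
w = [0.0]
for _ in range(200000):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
print(f"estimated H for a Brownian path: {roughness_exponent(w, dt):.2f}")
```

The same estimator applied to a smoother path (say, a running integral of the one above) would report a larger exponent, which is exactly the sense in which the boundary’s roughness is compared against the walker’s.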

Up to now, we have considered a single local minimum, and even though the probability of crossing is singular-exponentially small for *H* < 1/2, it is still nonzero. However, if the boundary is rugged, the local minima are dense. This density is not an issue for *H* > 1/2, when the inputs are smoother than the internal noise; in this case the probability density of first passages is nowhere zero. However, when *H* < 1/2, the input is rougher or burstier than the internal noise; the probability density ceases to be a function, and it is zero almost everywhere except for a set of zero measure where it diverges.

## Results

We shall present the more formal proofs of regularity of the first-passage time probability distributions elsewhere. We proceed now, instead, to present and analyze numerical simulations.

We carried out careful numerical integration of Eq. **1**, with leak constant 10 ms, for all Hölder exponents *H* in the range 0.25–0.99 in increments of 0.01. In order for the results of the simulations at different Hölder exponents to be directly comparable with one another, we generated the inputs *I*(*t*) by using the exact same overall coefficients in the basis functions of the Ornstein–Uhlenbeck process described in ref. 61, but scaled differently according to the Hölder exponent laws in the natural way. For each one of the 75 Hölder exponents between 0.25 and 0.99, 62,000 repetitions of the 10-s stimulus were performed, accumulating 100,000,000 first passages per Hölder exponent. We computed the first passages using the fast algorithm described in refs. 56 and 61, which carries out exact integration on intervals that are recursively subdivided when the probability that the process attains the first passage exceeds a threshold, in our case 10^{−20}. The first passages were computed to an accuracy of 2^{−26} = 1/67,108,864, and the allowable probability that a computed passage is not in fact the first one is *p*_{fail} = 10^{−15}, so as to have an overall probability of 10^{−5} that any one of our 7.5 billion numbers is not in fact a true first passage. The values of the first passages were histogrammed in 2^{22} bins; this histogram, which we call our PSTH in analogy to the term in use in neural coding, represents the instantaneous probability distribution of first passage integrated over the bins, or, equivalently, the finite differences over a grid of the cumulative probability distribution function for firing.

The transition from a smooth probability distribution to a singular measure is illustrated in Figs. 2 and 3, where, as the Hölder exponent is lowered, the concentration of the first-passage probability on a small set is evident. Histogramming the individual bins of the PSTH, we get the probability distribution of observing a given instantaneous rate of firing, shown in Fig. 4. For large Hölder exponents, the rate does not deviate far from its mean. However, as the Hölder exponent approaches 1/2, both the probability of observing a zero rate and the probability of seeing a rate far larger than the mean become substantial. For *H* < 1/2, it becomes very probable to observe either zeros or large values of the instantaneous rate. This statement can be made precise by observing the tails of the probability distribution, and this is best accomplished, given our numerical setup, by looking at the tails of the cumulative probability distribution, namely the following:

$$F(x) = \operatorname{Prob}(\text{instantaneous rate} \leq x),$$

and then analyzing 1 − *F*(*x*) vs. *x* for large *x*, which is carried out in Fig. 5. Fig. 5*A* shows that the tails of the distribution, when *x* ≫ 1, decay exponentially for *H* > 1/2 but behave like stretched exponentials when *H* < 1/2 as follows:

$$1 - F(x) \sim e^{-b\sqrt{x}}.$$

This observation is quantified in Fig. 5*B*, where log(1 − *F*) is fitted with a quadratic polynomial in √*x*, namely the following:

$$\log\bigl(1 - F(x)\bigr) \approx a + b\sqrt{x} + c\,x.$$

For *H* < 1/2, the quadratic coefficient *c* in the fit, which supplies the conventional linear term of an exponential tail, vanishes, uncovering the stretched-exponential behavior. This quantitatively proves our assertion of a phase transition at *H* = 1/2.
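The quadratic-in-√*x* diagnostic can be sketched as follows; we use synthetic exponential and stretched-exponential tails in place of the measured distributions (an assumption, for illustration only) and check which fitted coefficient survives:

```python
import numpy as np

# Fit log(1 - F(x)) with a quadratic polynomial in s = sqrt(x):
#   log(1 - F) ~ c*x + b*sqrt(x) + a.
# An exponential tail exp(-0.7*x) yields c = -0.7; a stretched-
# exponential tail exp(-0.7*sqrt(x)) yields c = 0, b = -0.7.
# The -0.7 decay rates are arbitrary illustrative values.
x = np.linspace(1.0, 50.0, 200)
s = np.sqrt(x)

for name, log_tail in [("exponential", -0.7 * x),
                       ("stretched",   -0.7 * s)]:
    c, b, a = np.polyfit(s, log_tail, 2)  # coefficients of s**2, s, 1
    print(f"{name:12s} tail: linear-in-x coefficient c = {c:+.3f}")
```

On the real data the same vanishing of *c* below *H* = 1/2 is what signals that the PSTH’s integral over bins has become a stretched exponential.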

## Discussion

In abstract, mathematical terms, we have shown that the probability of observing a first passage of a Gauss–Markov process through a rough boundary of Hölder exponent *H* suffers a phase transition at *H* = 1/2. The integral of the probability on equispaced grids becomes a stretched exponential, showing the underlying instantaneous probability has ceased to be a function: it is concentrated on a Cantor-like set within which it is infinite, and it is zero outside this set. Gauss–Markov processes, such as the Ornstein–Uhlenbeck process, can be mapped to the canonical Wiener process through a deterministic joint scaling and time-change operation that preserves Hölder continuity. (This transformation is called Doob’s transform.) Furthermore, being the solution to a linear Langevin equation, the first-passage problem for drifted Gauss–Markov processes can always be formulated in terms of a fluctuating effective barrier that integrates the drift contribution. Therefore, our analysis directly applies to this situation. As nonlinear diffusions with bounded drift behave like Brownian motion at vanishingly small scales, we envision that our result is valid for this more general class of stochastic processes with Hölder continuous barrier. However, in this case, the barrier under consideration does not summarize the drift contribution of the diffusion.

In terms of the original motivating problem, the encoding of an input into the timing of action potentials by a model neuron, this means that within our (theoretical and rather aseptic) model, there is an abrupt transition in the character of the PSTH, the instantaneous firing rate constructed from histogramming repetitions of the same stimulus. The transition happens when the input has the roughness of white noise, conceptually the case in which the neuron is receiving a barrage of statistically independent excitatory and inhibitory inputs, each with a random, Poisson character. For inputs that are smoother than this, the PSTH is a well-behaved function whose finite-resolution approximations converge nicely and properly to finite values. However, when the input is rougher than uncorrelated excitation and inhibition, for example when excitatory and inhibitory activities are clustered positively with themselves and negatively with one another, then the PSTH is concentrated on a singularly small set, which means that the PSTH consists of a large number of sharply defined peaks of many different amplitudes, each of them having precisely zero width. The width of the peaks is zero regardless of the amplitude of the internal noise; increasing internal noise only leads to power from the tall peaks being transferred to lower peaks, but all peaks stay zero width. Because the set of peaks is dense, refining the bins over which the PSTH is histogrammed leads to divergences.

Concentration of the input into rougher temporal patterns would evidently be a function of the circuit organization. For example, in primary auditory cortex, the temporal precision observed in neuronal responses (62) appears to originate in the concentration of excitatory input into sharp “bump”-like features (63), an observation consistent with event-based analysis of spike trains (64). A network property that has been implicated in temporal precision is that of high-conductance states (65); it is plausible that, for carefully balanced recurrent excitation, leading to high gain states, such high-conductance states may lead to effectively bursty input to individual neurons.

It remains to be seen whether our mechanism will survive the multiple layers of real-world detail separating the abstract Eq. **1** from real neurons in a living brain. Obviously, the infinite sharpness of our mathematical result will not withstand many relevant perturbations, which will broaden our zero-width peaks to finite thickness. That this will happen is certain, but not necessarily relevant, because a defining characteristic of phase transitions is that their presence affects the parameter space around them even under strong perturbations: that is why studying phase transitions in abstract, schematic models has been fruitful. Thus, the real question remaining is whether our mechanism can retain enough temporal accuracy to be relevant to understanding the organization of high–temporal-accuracy systems such as the auditory pathways, and whether our description of the roughness of the input as the primary determinant of coding modality, temporal code or rate code, may illuminate and inform further studies.

## Acknowledgments

We are indebted to Jonathan Touboul and Mayte Suarez-Farinas for helpful comments and advice, and to the members of our research group for critical input. This work was partially supported by National Science Foundation Grant EF-0928723.

## Footnotes

^{1}To whom correspondence should be addressed. E-mail: magnasco{at}rockefeller.edu.

Freely available online through the PNAS open access option.

## References

- Risken H
- Wasan MT, *Stochastic Processes and Their First Passage Times: Lecture Notes* (Queen’s University, Kingston, ON, Canada)
- Redner S
- van Kampen NG, *Stochastic Processes in Physics and Chemistry* (Elsevier, North-Holland Personal Library, Amsterdam)
- Mehr CB, McFadden JA
- Strenzwilk DF
- Goychuk I, Hänggi P
- Kahle W, Lehmann A, *Advances in Stochastic Models for Reliability, Quality and Safety* (Birkhäuser, Boston), pp 139–152
- Khan RA, Ahmad S, Datta TK, *Applications of Statistics and Probability in Civil Engineering* (IOS Press, Amsterdam), Vols. 1 and 2, pp 1659–1666
- Mazurek ME, Roitman JD, Ditterich J, Shadlen MN
- Lo CF
- Xu RM, McNicholas PD, Desmond AF, Darlington GA
- Ammann M
- Mainen ZF, Sejnowski TJ
- Sacerdote L, Zucca C, *Brain, Vision, and Artificial Intelligence: Proceedings of the First International Symposium, BVAI 2005, Naples, Italy, October 19–21, 2005* (Springer, Berlin), Vol. 3704, pp 69–77
- Rieke F
- Abbott LF, Sejnowski TJ
- Cecchi GA, et al.
- Arcas BAY, Fairhall AL, Bialek W
- Beierholm U, Nielsen CD, Ryge J, Alstrøm P, Kiehn O
- Lo CF, Chung TK, *Neural Information Processing* (Springer, Berlin), Vol. 4232, pp 324–331
- Buonocore A, Caputo L, Pirozzi E, Ricciardi LM, *Computer Aided Systems Theory—Eurocast 2009* (Springer, Berlin), Vol. 5717, pp 152–158
- Taillefumier T, Magnasco MO
- Elhilali M, Fritz JB, Klein DJ, Simon JZ, Shamma SA
- DeWeese MR, Zador AM
