
# Signal processing in cellular clocks

Edited by Jay C. Dunlap, Dartmouth Medical School, Hanover, NH, and approved January 13, 2011 (received for review April 7, 2010)

## Abstract

Many biochemical events within a cell need to be timed properly to occur at specific times of day, after other events have happened within the cell or in response to environmental signals. The cellular biochemical feedback loops that time these events have already received much recent attention in the experimental and modeling communities. Here, we show how ideas from signal processing can be applied to understand the function of these clocks. Consider two signals from the network *s*(*t*) and *r*(*t*), either two variables of a model or two experimentally measured time courses. We show how *s*(*t*) can be decomposed into two parts, the first being a function of *r*(*t*), and the second the derivative of a function of *r*(*t*). Geometric principles are then derived that can be used to understand when oscillations appear in biochemical feedback loops, the period of these oscillations, and their time course. Specific examples of this theory are provided that show how certain networks are prone or not prone to oscillate, how individual biochemical processes affect the period, and how oscillations in one chemical species can be deduced from oscillations in other parts of the network.

Many biological systems perform specific functions at specific times (1). On a 24-h (circadian) timescale, intracellular circadian clocks trigger biological events to occur at specific times of the day. Faster, noncircadian clocks properly time many other events, such as those that occur in development, in cell division, and in metabolism (2–7). Our knowledge of both of these types of clocks has grown tremendously in the past two decades with new experimental techniques, a growing library of mathematical models (8, 9), and even the building of synthetic cellular clocks (10, 11).

However, in each of these cellular clocks, many genes and proteins work together in complex ways to produce oscillations. A new challenge has arisen in incorporating all these recent results into a mathematical theory that can be used along with computation and experimentation (12) to better understand the system’s behavior. One possibility, recently advocated by Sontag for biological clocks (13), is to use ideas from signal processing and Hilbert space. Although this approach has been around for over 30 y (2), it has received limited attention with respect to biological clocks, probably because the biochemistry of clocks is often nonlinear (14), and most techniques consider linear systems (15). Here, we use these mathematical ideas to determine how to relate different oscillating elements in a genetic network.

In this study, we draw upon this approach to show that oscillating signals in nonlinear biochemical clocks obey geometric properties. We apply these properties to three questions of wide study: (*i*) When are oscillations possible in biochemical feedback loops? (*ii*) How do individual components of the biochemical feedback loops determine the period? (*iii*) How does one element of the feedback loop influence the next? We illustrate answers to these questions with specific examples.

Our basic approach is to consider the time course of each chemical species as a signal rather than the concentration at a particular time. Thus, we can consider a vector of the values of *s*(*t*) over one period sampled at fixed time intervals [*s*(*t* + *ϵ*), *s*(*t* + 2*ϵ*),…, *s*(*t* + *τ*)]. As more and more samples are taken (smaller *ϵ*), the signal approaches a function of time as defined in terms of Hilbert space theory. Further information on this abstraction can be found in refs. 13 and 16.

We subtract off the mean of all signals; in other words, we consider all signals to have mean zero unless otherwise indicated. We also assume that all chemical species oscillate with a period *τ* unless otherwise indicated.

Two important ideas are now needed: The amplitude of a signal is defined as the root-mean-square of the signal (also known in the mathematical literature as the *L*^{2} norm, remembering that we consider all signals to have mean zero), and the dot product of two signals (in this case equivalent to the mathematical inner product). They are defined in the standard way:

$$\lVert f\rVert = \left(\frac{1}{\tau}\int_0^{\tau} f(t)^2\,dt\right)^{1/2}, \qquad \langle f, g\rangle = \frac{1}{\tau}\int_0^{\tau} f(t)\,g(t)\,dt.$$

We also denote the mean of a signal *f* by 〈*f*〉. From these we can define an angle that represents the similarity between two signals:

$$\cos\theta_{f,g} = \frac{\langle f, g\rangle}{\lVert f\rVert\,\lVert g\rVert},$$

where *θ*_{f,g} = 0 implies *f* and *g* are scalar multiples of each other and *θ*_{f,g} = ±*π*/2 implies the two signals are orthogonal to each other.
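These definitions translate directly into code. The following is a minimal Python sketch, where the sampled signals, the sampling grid, and the 24-h period are illustrative assumptions:

```python
import numpy as np

# Treat sampled time courses over one period as vectors: amplitude is the
# root-mean-square, the dot product is the mean of the pointwise product,
# and the similarity angle follows from their ratio.
def inner(f, g):
    # <f, g> = (1/tau) * integral of f(t) g(t) dt, approximated by the sample mean
    return np.mean(f * g)

def amplitude(f):
    # RMS amplitude (the L2 norm of a mean-zero signal)
    return np.sqrt(inner(f, f))

def angle(f, g):
    # theta = 0 for scalar multiples, +/- pi/2 for orthogonal signals
    c = inner(f, g) / (amplitude(f) * amplitude(g))
    return np.arccos(np.clip(c, -1.0, 1.0))

tau = 24.0                                   # hypothetical circadian period
t = np.linspace(0.0, tau, 4096, endpoint=False)
f = np.sin(2 * np.pi * t / tau)
g = np.cos(2 * np.pi * t / tau)              # orthogonal to f over one period
h = 3.0 * f                                  # scalar multiple of f

print(angle(f, g))    # ~ pi/2
print(angle(f, h))    # ~ 0
```

Sampling a whole number of periods (`endpoint=False`) keeps the discrete means faithful to the period integrals.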

## Results

### Representing One Signal as a Function of Another.

A common form found in mathematical models of biochemical clocks is

$$\frac{dp}{dt} = s(t) - n(p(t)) \tag{1a}$$

or

$$s(t) = n(p(t)) + \frac{dp}{dt}. \tag{1b}$$

This states that the production rate *s*(*t*) can be broken into two parts: a function of *p* (also known as the clearance rate of *p*) and the derivative of *p*. If *p* is constant, then *dp*/*dt* = 0 and *n*(*p*) is constant. We also note that

$$\left\langle n(p), \frac{dp}{dt}\right\rangle = \frac{1}{\tau}\int_0^{\tau} n(p)\,\frac{dp}{dt}\,dt = \frac{N(p(\tau)) - N(p(0))}{\tau} = 0,$$

where *N*(*p*) is the indefinite integral of *n*(*p*), meaning that these two parts are orthogonal (see Theorem 1 below for an alternate proof).
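This orthogonality is easy to confirm numerically. A short Python sketch, where the periodic *p*(*t*) and the Michaelis–Menten-style clearance *n* are arbitrary illustrative choices:

```python
import numpy as np

# Verify <n(p), dp/dt> = 0 over one period: the integral of n(p) * dp/dt
# equals N(p(tau)) - N(p(0)) = 0 for any periodic p, where N' = n.
t = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
p = 1.0 + 0.5 * np.sin(t)            # an arbitrary periodic concentration
dpdt = 0.5 * np.cos(t)               # its exact derivative
n = 2.0 * p / (0.5 + p)              # an arbitrary saturating clearance rate

print(np.mean(n * dpdt))             # ~ 0: the two parts are orthogonal
```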

Now, consider oscillations in a more general network whose dynamics may not be initially of this form. Surprisingly, this form still applies, as shown by the following definition and theorem:

**Definition:** A reference signal is a signal *r*(*t*) with period *τ* that has at most one maximum and one minimum over the period *τ* (i.e., *dr*/*dt* = 0 twice over a period).

**Theorem 1:** Given a signal *s*(*t*) and a reference signal *r*(*t*), both periodic with period *τ*, and assuming that *d*^{2}*r*/*dt*^{2} ≠ 0 whenever *dr*/*dt* = 0 and *d*^{2}*s*/*dt*^{2} ≠ 0 whenever *ds*/*dt* = 0, *s*(*t*) can be expressed as the sum of two orthogonal parts:

$$s(t) = f_2(r(t)) + \frac{d}{dt}f_1(r(t)).$$

The proof of this theorem is given in *SI Text*, under the assumption that *s*(*t*), *r*(*t*), *f*_{1}(*r*), and *f*_{2}(*r*) have Fourier series representations. This proof also provides a way of calculating *f*_{1} and *f*_{2}. This decomposition is possible even if model equations are not available (see Example 1 below).

Assume *f*_{1} and *f*_{2} are known, ideally because we have a mathematical model of the form given by Eqs. **1a** and **1b**, but otherwise by calculation. What does knowledge of *f*_{1} and *f*_{2} tell us about the relationship between *s*(*t*) and *r*(*t*)? Some basic properties can be immediately seen. For example, if *df*_{1}(*r*)/*dt* = 0, then *s*(*t*) is a function of *r*(*t*). Likewise, if *f*_{2}(*r*) = 0, then *s*(*t*) and *r*(*t*) are orthogonal.

The rest of this manuscript will derive properties of the systems based on knowledge of *f*_{1} and *f*_{2}.

**Example 1:** We use data from ref. 17 that continuously measure luciferase reporters of two genes in the mammalian circadian clock (*Bmal1* and *Per2*). The *Bmal1* marker is shown in Fig. 1*A*, where digitized data are shown along with a best fit that we take as *r*(*t*). Fig. 1 *B* and *D* show *f*_{2}(*r*) and *df*_{1}(*r*)/*dt*, respectively, which when added yield the rhythm in the *Per2* marker shown in Fig. 1*C*. Here, we simply illustrate that our methodology can be used on real data. Data from any clock system could have been used, as could other markers of rhythms, including more accurate luciferase fusion proteins.

**Example 2:** Goodwin’s oscillator (18) has been one of the most studied biochemical models of biological clocks. A more general form of this model is

$$\frac{dp_1}{dt} = \alpha_1 f(p_3) - \beta_1 p_1, \qquad \frac{dp_2}{dt} = \alpha_2 p_1 - \beta_2 p_2, \qquad \frac{dp_3}{dt} = \alpha_3 p_2 - \beta_3 p_3,$$

where *f* is a decreasing function (negative feedback), the *α*_{i} are production (transcription or translation) rates, and the *β*_{i} are degradation rates.

Main prediction: An exact formula for the period of the oscillator.

We note that each equation can be written in the form

$$p_{i-1} = \frac{1}{\alpha_i}\left(\frac{dp_i}{dt} + \beta_i p_i\right).$$

We can substitute for *p*_{i−1} and find

$$\left(\frac{d}{dt} + \beta_1\right)\left(\frac{d}{dt} + \beta_2\right)\left(\frac{d}{dt} + \beta_3\right)p_3 = \alpha_1\alpha_2\alpha_3\, f(p_3).$$

Taking the inner product of both sides with *dp*_{3}/*dt*, we find that

$$\left\langle \left(\frac{d}{dt} + \beta_1\right)\left(\frac{d}{dt} + \beta_2\right)\left(\frac{d}{dt} + \beta_3\right)p_3,\; \frac{dp_3}{dt}\right\rangle = \alpha_1\alpha_2\alpha_3\left\langle f(p_3), \frac{dp_3}{dt}\right\rangle = 0,$$

because *f*(*p*_{3}) is a function of *p*_{3} and is therefore orthogonal to *dp*_{3}/*dt*.

By integration by parts, we know

$$\left\lVert \frac{d^2 p_3}{dt^2}\right\rVert^2 = (\beta_1\beta_2 + \beta_1\beta_3 + \beta_2\beta_3)\left\lVert \frac{dp_3}{dt}\right\rVert^2,$$

which takes the form of a fixed Sobolev norm. With frequency *w*, we can represent the Fourier series of *p*_{3} as

$$p_3(t) = \sum_j a_j \cos(jwt + \phi_j),$$

and with Parseval’s theorem (19), substituting this into the previous formula yields

$$\tau = \frac{2\pi}{w} = 2\pi\sqrt{\frac{\sum_j j^4 a_j^2}{(\beta_1\beta_2 + \beta_1\beta_3 + \beta_2\beta_3)\sum_j j^2 a_j^2}}.$$

This gives an exact formula for the period of the oscillator. It is interesting to note that transcription and translation rates do not appear in this formula. Here are some key predictions:

- As degradation rates increase, the period shortens.
- The minimum bound for the period is $2\pi/\sqrt{\beta_1\beta_2 + \beta_1\beta_3 + \beta_2\beta_3}$ (since $\sum_j j^4 a_j^2 \ge \sum_j j^2 a_j^2$).
- The period lengthens as $\sum_j j^4 a_j^2 \big/ \sum_j j^2 a_j^2$ increases. This fraction gives higher weight to *a*_{j} as *j* increases.
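The period formula can be checked numerically. In the Python sketch below, the fundamental frequency *w* and the Fourier coefficients *a*_{j} of *p*_{3} are illustrative assumptions; the degradation-rate combination is read off the Sobolev-norm identity and then used to recover *w*:

```python
import numpy as np

# Check of the Goodwin period formula with assumed Fourier content for p3.
w = 2 * np.pi / 22.0                          # hypothetical frequency
a = {1: 1.0, 2: 0.3, 3: 0.1}                  # hypothetical amplitudes a_j

t = np.linspace(0.0, 2 * np.pi / w, 8192, endpoint=False)

# Derivatives of p3 = sum_j a_j cos(j w t), written analytically
dp3 = sum(-aj * (j * w) * np.sin(j * w * t) for j, aj in a.items())
d2p3 = sum(-aj * (j * w) ** 2 * np.cos(j * w * t) for j, aj in a.items())

msq = lambda f: np.mean(f ** 2)               # squared RMS norm

# Sobolev-norm identity: ||p3''||^2 = (b1 b2 + b1 b3 + b2 b3) ||p3'||^2,
# so the degradation-rate combination can be read off the signal:
S = msq(d2p3) / msq(dp3)

# Period formula: w = sqrt(S * sum_j j^2 a_j^2 / sum_j j^4 a_j^2)
num = sum(aj ** 2 * j ** 2 for j, aj in a.items())
den = sum(aj ** 2 * j ** 4 for j, aj in a.items())
w_from_formula = np.sqrt(S * num / den)

print(w, w_from_formula)                      # the two frequencies agree
```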

### Inferring One Signal from the Next.

Consider Eq. **1a**:

$$\frac{dp}{dt} = s(t) - n(p(t)).$$

Let *ρ*(*t*) ≡ *f*_{2}(*r*(*t*)), and assume *f*_{1} is a function of *f*_{2}, i.e., *f*_{1}(*r*) = *q*(*ρ*(*t*)). Then we have

$$s(t) = \rho(t) + \frac{dq}{d\rho}\frac{d\rho}{dt}.$$

Scaling time by *dT*/*dt* ≡ *dρ*/*dq*, assuming that *dρ*/*dq* ≠ 0, which allows a one-to-one mapping between *T* and *t*, we then have

$$\frac{d\rho}{dT} = s(T) - \rho(T).$$

Here, an exact solution can be found:

$$\rho(T) = \int_{-\infty}^{T} e^{-(T - T')}\, s(T')\, dT',$$

which shows that *ρ*(*T*) is simply a weighted average of the previous signal *s*(*T*). However, this weighting uses *T* rather than *t*. From this, the following prediction can be made: as *dt*/*dT* lessens, *t* proceeds more slowly with respect to *T*, giving more weight to that part of the signal in determining *ρ*(*T*).

Consider the case where *dp*/*dt* = *s*(*t*) - *n*(*p*(*t*)) with *s*(*t*) = 1 + sin(*t*) and *n*(*p*(*t*)) = 2*p*(*t*)/(0.5 + *p*(*t*)). *s*(*t*) is shown in Fig. 2*A* (blue curve) along with *n*(*p*(*t*)) (red curve). Scaling time yields *s*(*T*) shown in Fig. 2*B* (blue curve). One can see that at points where *p*(*t*) saturates *n*(*p*(*t*)), *s*(*T*) becomes more compressed and includes more of the signal *s*(*t*) in a smaller time span. This is then passed through a linear filter with rate constant *η* = 1.22 yielding *ρ*(*T*) (red curve in Fig. 2*B*). Plotting *ρ*(*T*(*t*)) yields *n*(*p*(*t*)) shown in Fig. 2*A*.
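In the scaled time *T*, the relationship between *s* and *ρ* is just a first-order linear filter, which can be sketched with forward-Euler integration in Python. The input below mirrors the example's *s* = 1 + sin(*t*), but a unit filter rate is assumed for simplicity (the example above used *η* = 1.22):

```python
import numpy as np

# Sketch of d(rho)/dT = s(T) - rho(T), whose solution is the exponentially
# weighted average rho(T) = integral of exp(-(T - T')) s(T') dT'.
dT = 1e-3
T = np.arange(0.0, 50.0, dT)
s = 1.0 + np.sin(T)

rho = np.zeros_like(s)
for k in range(1, len(T)):
    rho[k] = rho[k - 1] + dT * (s[k - 1] - rho[k - 1])   # Euler step

# After transients decay, rho is a lagged, attenuated copy of s:
# for this input, rho -> 1 + sin(T - pi/4) / sqrt(2).
late = T > 30.0
expected = 1.0 + np.sin(T[late] - np.pi / 4) / np.sqrt(2)
print(np.max(np.abs(rho[late] - expected)))              # small Euler error
```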

### Comparing Signal Amplitudes.

Again consider signals in the form of Theorem 1, again with the mean of all signals subtracted off. Because *df*_{1}(*r*)/*dt* and *f*_{2}(*r*) are orthogonal, we also know that

$$\lVert s(t)\rVert^2 = \left\lVert \frac{df_1(r)}{dt}\right\rVert^2 + \lVert f_2(r)\rVert^2 \tag{2}$$

(see Fig. 3 for an illustration). This equation tells us that the amplitude of *s*(*t*) is greater than the amplitude of either *df*_{1}(*r*)/*dt* or *f*_{2}(*r*).

What does this tell us about the relationship between the amplitude of *s*(*t*) and the amplitude of *r*(*t*)? We then have the following formula:

$$\frac{\lVert s(t)\rVert^2}{\lVert r(t)\rVert^2} = \frac{\lVert df_1(r)/dt\rVert^2}{\lVert r(t)\rVert^2} + \frac{\lVert f_2(r)\rVert^2}{\lVert r(t)\rVert^2},$$

or

$$\frac{\lVert s(t)\rVert^2}{\lVert r(t)\rVert^2} = \frac{\lVert df_1(r)/dt\rVert^2}{\lVert f_1(r)\rVert^2}\cdot\frac{\lVert f_1(r)\rVert^2}{\lVert r(t)\rVert^2} + \frac{\lVert f_2(r)\rVert^2}{\lVert r(t)\rVert^2}.$$

Therefore, this ratio of the amplitude of *s*(*t*) to the amplitude of *r*(*t*) depends on:

- how either *f*_{1} or *f*_{2} increases or decreases the amplitude of *r*(*t*) [i.e., ||*f*_{1}(*r*)||^{2}/||*r*(*t*)||^{2} and ||*f*_{2}(*r*)||^{2}/||*r*(*t*)||^{2}], and
- ||*df*_{1}(*r*)/*dt*||^{2}/||*f*_{1}(*r*)||^{2}, which can be viewed as a generalized estimate of the frequency of *f*_{1}(*r*).

Let us describe what we mean by a generalized estimate of the frequency of *f*_{1}(*r*). If *f*_{1}(*r*) = cos(*wt* + *ϕ*), then this ratio is exactly the squared frequency *w*^{2}. Otherwise, let us assume that *f*_{1}(*r*) has a Fourier series $\sum_j a_j \cos(jwt + \phi_j)$, as in Example 2; this ratio is then equal to $w^2 \sum_j j^2 a_j^2 \big/ \sum_j a_j^2$. Thus, it depends not only on the frequency of *f*_{1}(*r*) but also on the frequencies of the harmonics of *f*_{1}(*r*). As the amplitudes of higher harmonics increase, so does this generalized frequency.
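This generalized frequency can be computed directly from samples. A Python sketch, with the two test signals below as illustrative assumptions:

```python
import numpy as np

# The ratio ||df/dt||^2 / ||f||^2 as a generalized squared frequency:
# it equals w^2 for a pure cosine and grows as harmonics are added.
def gen_freq_sq(f, t):
    df = np.gradient(f, t)            # numerical derivative
    f0 = f - np.mean(f)               # signals are taken mean-zero
    return np.mean(df ** 2) / np.mean(f0 ** 2)

w = 2 * np.pi / 24.0
t = np.linspace(0.0, 24.0, 8192, endpoint=False)

pure = np.cos(w * t)                          # ratio should be ~ w^2
with_harmonic = np.cos(w * t) + 0.5 * np.cos(3 * w * t)

print(np.sqrt(gen_freq_sq(pure, t)) / w)            # ~ 1
print(np.sqrt(gen_freq_sq(with_harmonic, t)) / w)   # > 1: harmonics raise it
```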

### Comparing Signal Shapes.

We next compare the signal shapes of *s*(*t*) and *r*(*t*) using *f*_{1}(*r*) and *f*_{2}(*r*) and the geometry outlined in Fig. 3. To start, we take the inner product of Eq. **1** (or Theorem 1) with *r*(*t*):

$$\langle s(t), r(t)\rangle = \langle f_2(r), r(t)\rangle + \left\langle \frac{df_1(r)}{dt}, r(t)\right\rangle = \langle f_2(r), r(t)\rangle,$$

where the second equality comes from the fact that *df*_{1}(*r*)/*dt* and *r*(*t*) are orthogonal (Theorem 1).

With the definition of the inner product, we find

$$\cos\theta_{s,r} = \frac{\langle f_2(r), r(t)\rangle}{\lVert s(t)\rVert\,\lVert r(t)\rVert} = \cos\theta_{f2,r}\,\frac{\lVert f_2(r)\rVert}{\lVert s(t)\rVert},$$

or

$$\cos\theta_{s,r} = \cos\theta_{f2,r}\,\frac{\lVert f_2(r)\rVert}{\sqrt{\lVert df_1(r)/dt\rVert^2 + \lVert f_2(r)\rVert^2}}, \tag{3}$$

where the second expression uses Eq. **2**. So how does *θ*_{s,r} depend on *f*_{1}(*r*) and *f*_{2}(*r*)? It depends monotonically on *θ*_{f2,r}, so as *f*_{2}(*r*) distorts the shape of *r*(*t*), it distorts *s*(*t*) from *r*(*t*). Likewise, *df*_{1}(*r*)/*dt* distorts *s*(*t*) from *r*(*t*): the larger the magnitude of *df*_{1}(*r*)/*dt*, the greater the difference in shape between *s*(*t*) and *r*(*t*).
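Eq. **3** can be verified numerically for an assumed decomposition. In this Python sketch, the reference signal *r* and the forms of *f*_{1} and *f*_{2} are arbitrary illustrative choices, and *s* is built as *f*_{2}(*r*) + *df*_{1}(*r*)/*dt*:

```python
import numpy as np

# Compare cos(theta_s,r) computed directly against the relation
# cos(theta_s,r) = cos(theta_f2,r) * ||f2|| / sqrt(||df1/dt||^2 + ||f2||^2).
t = np.linspace(0.0, 2 * np.pi, 8192, endpoint=False)
r = np.cos(t)

f2 = r + 0.3 * r ** 2
f2 = f2 - np.mean(f2)                    # signals are taken mean-zero
df1 = np.gradient(0.5 * r, t)            # d f1(r)/dt with f1(r) = 0.5 r

s = f2 + df1

inner = lambda f, g: np.mean(f * g)
norm = lambda f: np.sqrt(inner(f, f))

# Left side: directly from s and r
cos_sr_direct = inner(s, r) / (norm(s) * norm(r))

# Right side: through the decomposition
cos_f2r = inner(f2, r) / (norm(f2) * norm(r))
cos_sr_eq3 = cos_f2r * norm(f2) / np.sqrt(norm(df1) ** 2 + norm(f2) ** 2)

print(cos_sr_direct, cos_sr_eq3)         # the two agree
```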

### Requirement for Oscillations in a Negative Feedback Loop.

Now, assume we have the general structure of a feedback loop where the signal *s*(*t*) in the *i*th step of the feedback loop is a function of the clearance rate of the previous element in the feedback loop, i.e., that the model has the following form:

$$\frac{dr_i}{dt} = s_{i-1}(f_{2,i-1}(r_{i-1})) - f_{2,i}(r_i). \tag{4}$$

For simplicity of notation, we drop the *i* subscript in *f*_{2,i}(*r*_{i}) and represent it as *f*_{2}(*r*_{i}) while noting that this function can be different for different steps. We also assume there are *q* elements of the loop and that *r*_{0}(*t*) = *r*_{q}(*t*), which gives the structure of a feedback loop. We also sometimes abbreviate *s*_{i-1}(*f*_{2}(*r*_{i-1})) as *s*_{i-1}.

The gain of *s*_{i} with respect to *f*_{2}(*r*_{i}(*t*)) at time *t* is

$$G_i(t) = \frac{s_i(f_2(r_i(t)))}{f_2(r_i(t))},$$

where *s*_{i}(*f*_{m}) = 0.

We assume that the gain is always defined, a condition known in control theory as passivity. See ref. 20 for details.

The average gain of *s*_{i} with respect to *f*_{2}(*r*_{i}) is

$$\bar{G}_i = \frac{\lVert s_i\rVert^2}{\langle s_i, f_2(r_i)\rangle}.$$

This can be thought of as a weighted harmonic mean of *G*_{i}(*t*). (A comparison of this definition of the gain with the much more commonly used *L*^{2} gain is provided later.)

**Theorem 2:** If the feedback loop of Eq. **4** oscillates with period *τ*, then

$$\sec(\pi/q)^q \le \prod_{i=1}^{q} \bar{G}_i,$$

with equality when *θ*_{si-1,si} = *π*/*q*.

This theorem is proved in *SI Text*. It is perhaps more interesting when the inequality of Theorem 2 is not satisfied, which then indicates that oscillations cannot be present.

So when do oscillations appear in genetic networks? The most common answer found in the biological literature is that negative feedback and delay cause oscillations. However, many genes in the genome are under the control of negative feedback, and these feedbacks always have a delay. This theorem states that something more is needed, and indeed, most genetic networks with negative feedback and delay do not show oscillations. For example, if there are three elements in the network (*q* = 3), the left-hand side is eight, and special biological mechanisms would be required to achieve this high a gain.

Although similar secant conditions have been presented previously (2, 13, 21–23), this condition involves an equality case expressed through *θ*_{si-1,si}. To maximize the chances that a biochemical feedback loop will oscillate, one should aim for *θ*_{si-1,si} = *π*/*q*; that is, each step of the feedback loop should distort the signal it receives by the same amount as every other step. Thus, if the steps in the feedback loop are symmetric in the sense that they distort the signals in the same way, then the network will likely oscillate. The following examples illustrate this point.

A conversion system: In this example, one element is simply converted to the next at some rate. The gain for each step is then 1, which violates Theorem 2. Thus, no oscillations are possible in this system.

A symmetric oscillator:

$$\frac{dp_i}{dt} = m(p_{i-1}) - n(p_i),$$

where the same functions *m* and *n* are used for all steps. For comparison, this has the structure of the repressilator (11), which has three genes, each repressing the next (*dm*/*dp* < 0, *dn*/*dp* > 0). Our results show that, due to the symmetry of this system, oscillations should be easily generated.

Our definition of the gain differs from that used in many other contexts. A much more common definition of the gain of a step in a feedback loop is the *L*^{2} gain (see ref. 13 for details), which here is the ratio of the *L*^{2} norm of a step’s output to that of its input. Choosing the specific functions *n*(*p*_{i}) = *p*_{i} and *m*(*p*_{i-1}) = 1/(0.1 + *p*_{i-1})^{3}, letting *q* = 3, and simulating, we find that the *L*^{2} gain of each step is 1.23, with a total *L*^{2} gain for the feedback network of 1.23^{3} = 1.86. This illustrates how a symmetric system can oscillate at very low gain, particularly when other definitions of the gain are used. The total loop gain with the definition used in Theorem 2 is 91.7, which obeys the theorem.
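The secant bound itself is simple to compute; a Python check of the *q* = 3 numbers quoted above:

```python
import numpy as np

# The lower bound sec(pi/q)^q on the total loop gain required for oscillations.
def secant_bound(q):
    return (1.0 / np.cos(np.pi / q)) ** q

print(secant_bound(3))          # ~ 8: a three-element loop needs gain >= 8
# The symmetric oscillator above: total gain 91.7 under Theorem 2's definition
# clears the bound, even though its total L2 gain is only 1.86.
print(91.7 >= secant_bound(3))  # True
```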

## Discussion

In this study, we demonstrated how geometric principles, based on well-established and long-standing mathematical theory, can be used to make predictions about biological oscillators. To do so, we combined previously developed ideas from mathematicians (16) and engineers (20), particularly control theorists who had considered oscillations: Hilbert space theory and the signal processing typically applied to electrical circuits. These results add to a growing literature on new mathematical methods for understanding biological feedback loops (24–26).

Because of the vast number of previous results on biochemical feedback loops (27, 28), it is beyond the scope of this work to consider them all here. That said, future work is needed to determine whether other results can be derived and/or extended using geometric principles. Our hope is that the principles derived here can be used alongside other commonly accepted techniques, such as bifurcation theory (29), calculation of Lyapunov functions (23), geometric analysis of phase space (30, 31), and asymptotic analysis. Moreover, we hope that these techniques can be used to solve other classical problems in biological rhythms, such as synchronization of biological oscillators (28) and temperature compensation, which can be understood through the Goodwin model (28), or may even be applied more broadly to nonbiological problems.

By way of example, a similar general theory exists for determining the possibility of bistability in genetic networks (32). Moreover, for our study, another complicating factor is molecular noise (33, 34), which is not accounted for here, and thus could be explored with these techniques using similar methods to those in ref. 25.

For some systems, our results were surprisingly strong. For example, in the case of the well-studied Goodwin model, we found exact formulas for the period. In addition, in the case of a simplified repressilator, we found that the repressilator can oscillate with very little gain when measured in the standard (*L*^{2}) way. This may perhaps explain why this design was built successfully. Both of these results should be of particular importance in synthetic biology, where one of the major goals has been the creation of synthetic clocks, as well as in circadian biology, where proper timekeeping has been shown to be essential in the treatment of disease.

## Acknowledgments

The author thanks Richard Yamada, Cecilia Diniz-Behn, and Jae Kyoung Kim for comments on this manuscript. This work was supported by Grants GM63642 and GM060387 from the National Institutes of Health–National Institute of General Medical Sciences. DBF is an Air Force Office of Scientific Research Young Investigator (FA 9550-08-01-0076).

## Footnotes

^{1}E-mail: forger{at}umich.edu.

Author contributions: D.B.F. designed research, performed research, contributed new reagents/analytic tools, analyzed data, and wrote the paper.

The author declares no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1004720108/-/DCSupplemental.

## References

- Ko CH, Takahashi JS
- Linkens DA
- Rapp PE
- Nelson DE, et al.
- Leloup JC, Gonze D, Goldbeter A
- Forger DB, Peskin CS
- Sontag ED
- Goldbeter A
- Grodins FS
- Doob JL
- Panter PF
- Khalil HK
- Tyson JJ, Othmer HG
- Thron CD
- Geva-Zatorsky N, Dekel E, Batchelor E, Lahav G, Alon U
- Shinar G, Milo R, Martinez MR, Alon U
- Wagner A
- Winfree A
- Kuramoto Y
- Indic P, Gurdziel K, Kronauer RE, Klerman EB
- Craciun G, Tang Y, Feinberg M
- Forger DB, Peskin CS