Perceptron learning rule derived from spike-frequency adaptation and spike-timing-dependent plasticity

Edited by Eric I. Knudsen, Stanford University School of Medicine, Stanford, CA, and approved December 14, 2009 (received for review August 19, 2009)
Abstract
It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-timing-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier-arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.
Many of the sensory and motor tasks solved by the brain can be captured in simple equations or minimization criteria. For example, minimization of errors made during reconstruction of natural images using sparse priors leads to linear filters reminiscent of simple cells (1, 2), minimization of retinal slip or visual error leads to emergence and maintenance of neural integrator networks (3–5), and optimality criteria derived from information theory can model the remapping dynamics of receptive fields in the barn owl midbrain (6).
Despite these advances, little is known about cellular physiological properties that could serve as primitives for solving such computational tasks. Among the known primitives are short-term synaptic depression, which can give rise to multiplicative gain control (7), or spike-frequency adaptation (SFA), which may provide high-pass filtering of sensory inputs (8, 9).
Here, we explore biophysical mechanisms and computational primitives for instructive coding. Instructive coding is a computation that allows the brain to constrain its sensory representations adaptively by exploiting intrinsic properties of the physical world. The example we consider here is that sound sources and salient visual stimuli often colocalize (e.g., when a dried branch cracks under the footstep of an animal). In the barn owl, a highly efficient predator, this auditory–visual colocalization is well reflected by registration of auditory and visual maps in the external nucleus of the inferior colliculus (ICX) and the optic tectum (OT). The instructive aspect of this registration is that it is actively maintained by plasticity mechanisms: When the visual field of owls is chronically shifted by prisms, neurons in ICX and OT develop a shift in their auditory receptive fields that corresponds to the visual field displacement (10, 11). Hence, visual inputs to these areas are able to serve instructive roles for auditory spatial representations.
In the computational literature, instructive coding has been linked to the perceptron rule, a learning rule for one-layer neural network models (12, 13). This rule guarantees that the firing rate approaches the target rate and is one of the simplest expressions of an almost infinite class of learning algorithms that go under the name of gradient descent algorithms. Although the perceptron rule and gradient descent algorithms have been broadly applied to network models of brain function (14–16), to our knowledge, they have not been derived from first principles, and abundant experimental evidence for their existence is still lacking. One of the most prominent criticisms is that these algorithms depend on the existence of an explicit error signal, for which convincing evidence is scarce in most neural systems.
A special case of instructive coding, cross-modal spatial transformations, can be formed by spike-timing-dependent plasticity (STDP) rules when driven by multimodal inputs (17, 18). Although STDP by itself does not seem to be capable of supporting arbitrary instructive coding (19), we identify a possible scenario for the implementation of the perceptron rule, namely, in cells that display both SFA and STDP. For a large range of parameters, the interaction of these common cellular and synaptic properties gives rise to the perceptron rule and represents a robust mechanism for supervised learning in biological systems. Most importantly, the error signal in our STDP–SFA scenario is implicit rather than explicit, which alleviates the need to identify such signals experimentally.
Results
To explore a possible relationship between firing adaptation, STDP, and error signals, we studied a spiking neuron model based on the organization of the owl midbrain and on physiological responses. Our ICX model neuron is an adapting conductance-based leaky integrate-and-fire unit in which SFA is modeled by an afterhyperpolarizing potassium conductance (20).
The unit receives excitatory sensory input a from a fast-acting auditory pathway and input v from a slower-acting visual pathway (Fig. 1). The auditory-input synapses are subject to STDP, whereas the visual-input synapses are fixed.
Our results are based on numerical simulations, and, to avoid exhaustive parameter testing, we first show analytical results in which we use the method of time averaging to replace spike trains with average firing rates (see Analytical Derivation in Methods). This approximation is valid as long as neurons operate over a range of firing rates in which the slopes of their f-I curves do not change much.
SFA Can Encode an Implicit Error Signal.
First, we examine the effects of SFA on bimodal neural responses. We stimulate the neuron with auditory and visual stimuli that are step functions of duration T. To model the slower visual pathway, visual inputs (v) have a fixed onset lag of T_{lat} = 70 ms, corresponding to estimates in the barn owl (22). When driven by auditory or visual input in isolation, the neuron exhibits a transient response that adapts within a few milliseconds. When both inputs are simultaneously presented, however, the adaptation elicited by early auditory responses persists and leads to suppression of subsequent visual responses (Fig. 2A).
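This suppression of delayed responses by adaptation can be sketched outside the conductance-based model. The following rate-based toy model is an illustrative assumption only: its threshold-linear unit, subtractive adaptation feedback, and all parameter values (beta, drive amplitudes) are not those of the ICX model, but it reproduces the qualitative effect of Fig. 2A.

```python
import numpy as np

# Toy threshold-linear rate unit with a subtractive adaptation variable g:
# an early auditory response builds up g, which then suppresses the
# delayed visual response. All parameters are illustrative assumptions.
dt = 1.0            # ms
T = 50.0            # stimulus duration, ms
T_lat = 70.0        # visual onset latency, ms
tau_K = 110.0       # adaptation time constant, ms
beta = 0.5          # adaptation feedback strength (assumed)

def run(I_aud, I_vis):
    """Return average responses (A, V) in the auditory and visual windows."""
    t = np.arange(0.0, 250.0, dt)
    a_drive = np.where((t >= 0) & (t < T), I_aud, 0.0)
    v_drive = np.where((t >= T_lat) & (t < T_lat + T), I_vis, 0.0)
    g, R = 0.0, np.zeros_like(t)
    for i in range(len(t)):
        R[i] = max(a_drive[i] + v_drive[i] - beta * g, 0.0)
        g += dt * (-g / tau_K + R[i])          # adaptation tracks firing
    A = R[(t >= 0) & (t < T)].mean()           # auditory response
    V = R[(t >= T_lat) & (t < T_lat + T)].mean()  # visual response
    return A, V

_, V_alone = run(0.0, 100.0)       # visual input alone
_, V_paired = run(100.0, 100.0)    # same visual input, preceded by auditory drive
assert V_paired < V_alone          # early auditory firing suppresses V
```

The key design choice is that adaptation outlives the gap between the two responses (tau_K much longer than the 20 ms gap), so the visual window inherits the adaptation built up by the auditory response.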
Mathematically, this interaction of bimodal responses can be expressed in terms of scalars A and V representing auditory and visual responses. The auditory response, A, is the average firing rate of the neuron in the time interval [0, T], and the visual response, V, is the average firing rate in [T_{lat}, T + T_{lat}]. For short stimuli, T ≤ T_{lat}, there is no temporal overlap between auditory and visual responses, allowing us to describe our findings unambiguously in terms of A and V.
Assuming a nonadapted state at stimulus onset, A is purely a function of auditory input. In contrast, V depends not only on the visual-input current, I_{V}, but also on A because of preceding adaptation. Our calculation shows that V is a linear function of both I_{V} and A:

V = c_{0} + c_{1}I_{V} − c_{2}A, [1]

where c_{0}–c_{2} are constants set by cellular and synaptic properties (see Analytical Derivation).
The linear relationship in Eq. 1 is exact under the condition that I_{V} is large enough to override the adaptation current and to drive spike responses in the cell. Expressed in terms of V and A, the range of validity of Eq. 1 becomes

V ≥ c_{3}A, [2]

where c_{3} is a constant that depends on cellular/synaptic parameters (see Analytical Derivation). Note that when the condition in Eq. 2 is violated, the response V might be either delayed by more than T_{lat} or completely suppressed, implying a nonlinear relationship between V, A, and I_{V}. Nevertheless, even in this nonlinear regime, we found the linear relationship in Eq. 1 to be a good approximation of the response behavior of the cell (Fig. 2B).
Next, we show that under the influence of STDP, this antagonism of auditory and visual responses leads to potentiation or depression of auditory-input synapses in such a way that A converges to a term proportional to I_{V} (assuming c_{0} is small), which is the condition of alignment of auditory responses with visual inputs.
Interplay Between SFA and STDP Leads to the Delta Learning Rule.
We endowed auditory synapses with a standard form of Hebbian STDP. According to the STDP rule, the joint occurrence of a presynaptic spike at time t_{j} and a postsynaptic spike at time t_{i} leads to a change in synaptic conductance, Δg, that depends solely on the time interval Δt = t_{i} − t_{j}:

Δg = g_{max}W(Δt), [3]

where the function W(Δt) is the STDP pairing function (Fig. 1B) and g_{max} is the upper limit of synaptic conductance. We chose a negative net area under the STDP pairing function, thereby imposing a tendency of the auditory synaptic strength to depress (23).
To explore the interplay between SFA and STDP, we assumed zero initial synaptic conductance (g^{A} = 0) and repeatedly stimulated the neuron with steplike auditory and visual inputs of fixed strengths and duration T. The interstimulus intervals were long, such that the adaptation conductance decayed to zero between stimulus repetitions. Initially, the neuron responded only to visual inputs but not to auditory inputs (A = 0). As the auditory afferents carried spikes just before the visual response, the auditory synapses started to strengthen, because afferent auditory spikes were followed by visually elicited spikes (Fig. 3A ). With further strengthening of the auditory synapses, auditory responses started to appear. Once the neuron displayed robust auditory responses, the visual responses started to decline because of SFA. This decline, in turn, reduced the amount of synaptic potentiation, because afferent auditory spikes were now followed by fewer postsynaptic spikes. In addition, because adaptation shortened auditory responses relative to the afferent drive, there were many auditory afferent spikes that were not followed by postsynaptic spikes, imparting an additional depressing tendency in the synapses (Fig. 3B ). In combination, there existed a balanced regime in which the synaptic depression induced by transient auditory responses equaled the potentiation induced by delayed visual responses (Fig. 3 C and D ). This balanced regime did not depend on the initial synaptic conductance and was also reached when the initial conductance was set to g _{max} instead of zero.
To calculate the synaptic weight change as a function of all parameters in the model, we replaced the spike trains under the STDP pairing function with the neuron's firing-rate function, R(t), which represents the time-dependent spike probability (Poisson spike trains). To make calculations tractable, we simply summed over multiple spike pairs inside a given pairing window; under this condition, the total conductance change, Δg, associated with one stimulus presentation becomes the integral

Δg = g_{max} ∫ W(τ)C(τ)dτ, [4]

where C(τ) = ∫ a(t)R(t + τ)dt is the cross-correlation, as a function of the time lag τ, between the auditory input rate function, a(t), and the firing rate, R(t).
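Eq. 4 is straightforward to evaluate numerically. The sketch below uses toy step-like rate traces and illustrative parameter values (the proportionality factor g_{max} is omitted); it shows that a presynaptic step followed by a delayed postsynaptic response yields a positive net Δg.

```python
import numpy as np

# Numerical sketch of Eq. 4: Δg ∝ ∫ W(tau) C(tau) dtau, with
# C(tau) = ∫ a(t) R(t + tau) dt. Toy traces; units are arbitrary.
dt = 1.0                            # ms
t = np.arange(0.0, 300.0, dt)
a = np.where(t < 70.0, 1.0, 0.0)    # presynaptic auditory rate (step)
R = np.where((t >= 70.0) & (t < 120.0), 1.0, 0.0)  # delayed postsynaptic rate

def W(s, A_plus=0.001, tau_plus=50.0, B=1.05, tau_minus=110.0):
    """STDP pairing function: potentiation for post-after-pre (s > 0)."""
    A_minus = B * A_plus * tau_plus / tau_minus   # from B = A_- tau_- / (A_+ tau_+)
    return np.where(s > 0, A_plus * np.exp(-s / tau_plus),
                    -A_minus * np.exp(s / tau_minus))

lags = np.arange(-300.0, 300.0, dt)
# cross-correlation C(tau) = ∫ a(t) R(t + tau) dt  (zero-padded outside t)
C = np.array([np.sum(a * np.interp(t + s, t, R, left=0.0, right=0.0)) * dt
              for s in lags])
dg = np.sum(W(lags) * C) * dt       # discretized Eq. 4 (up to g_max)
assert dg > 0                       # pre-before-post timing potentiates
```

Reversing the timing (postsynaptic rate preceding the presynaptic step) flips the sign of Δg, since only the depression branch of W then overlaps with C.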
When we evaluated Δg in Eq. 4 for a(t), a step function with peak value a, we derived the learning rule

Δg = a(c_{4}V − c_{5}A), [5]

where the terms c_{4} and c_{5} depend on model details (see Analytical Derivation). We further transformed Eq. 5 by replacing V with I_{V}, using Eq. 1, to yield

Δg = a(c_{6}I_{V} − c_{7}A), [6]

which is formally equivalent to the delta rule of perceptron learning theory. Namely, the term (c_{6}I_{V} − c_{7}A) is the deviation of the auditory response, A, from the target rate (c_{6}/c_{7})I_{V} and corresponds to the postsynaptic error, and the term a is the presynaptic firing rate. Hence, mathematically, STDP and SFA jointly prescribe synaptic weight changes that are proportional to the postsynaptic error times the presynaptic rate. Because the delta rule corresponds to gradient descent on the square error between the neural response and the target response (12, 13), the effect of repeated application of the rule is to make auditory responses equal to visual inputs (with a fixed proportionality factor between them).
Note that Eqs. 5 and 6 are valid for a sufficiently large visual response, V [as specified in Inequality (Eq. 2)]. Nevertheless, numerical evaluation of Eq. 4 revealed that these equations remained approximately valid even in the full range V ≥ 0 (Fig. 3D). Also, Eq. 5 was in good agreement with weight changes obtained in spiking-neuron simulations (Fig. 3 C and E).
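The fixed-point behavior of the delta rule in Eq. 6 can be sketched in a few lines. The constants c_{6}, c_{7}, the effective learning rate, and the linear-response assumption A = g·a below are illustrative assumptions, not values fitted to our model.

```python
# Toy iteration of the delta rule of Eq. 6: repeated updates drive the
# auditory response A toward the target rate (c6/c7) * I_V.
c6, c7 = 1.0, 0.5   # assumed constants of Eq. 6
eta = 0.1           # effective learning rate (assumed)
I_V = 2.0           # visual input current (arbitrary units)
a_pre = 1.0         # presynaptic auditory rate
g = 0.0             # synaptic strength; A = g * a_pre in this toy reading

for _ in range(200):
    A = g * a_pre
    g += eta * a_pre * (c6 * I_V - c7 * A)   # Δg ∝ a * (c6 I_V − c7 A)

target = (c6 / c7) * I_V
assert abs(g * a_pre - target) < 1e-3        # A converges to the target rate
```

Convergence is geometric: each step shrinks the error by the factor (1 − eta·a_pre²·c7), which is the standard behavior of gradient descent on the squared error.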
ICX Map Formation.
When we extended our model to an ICX neuron that received auditory input from pools of spatially tuned and topographically laid out neurons [mimicking the central nucleus of the inferior colliculus (ICC)], we found that, in equilibrium, ICX auditory tuning curves were approximately Gaussian-shaped and in register with visual tuning curves (Fig. 4). Hence, under STDP and SFA, the visual tuning from OT can be precisely transferred onto auditory response tuning in ICX, in excellent agreement with the delta rule (for details, see Formation and Registration of Auditory-Visual Maps in the Avian Midbrain in SI Text).
Discussion
Our work shows that SFA can be viewed as a mechanism for instructive error signaling in sensory neurons when these are driven by sparse multimodal inputs in a slow pathway and a fast pathway. SFA trades off between slow (“late arriving”) responses of one modality and fast responses of another modality in an approximately linear fashion (Fig. 2B ). The consequence is that the late sensory responses can be viewed as instructive or error signals that convey the need to respond to the earlier arriving inputs.
When we endowed earlyinput synapses with STDP, we found that synaptic changes were well described by the delta learning rule (21, 24); our implementation does not rely on an explicit neural representation of the error term. Rather, the error is implicitly computed through the interplay of SFA and STDP.
Our calculations showed that the emergence of the delta rule under SFA and STDP is remarkably robust. In conductance-based model neurons, the delta rule was well approximated, irrespective of cellular parameters, provided that the visual drive arrived no later than τ_{+} (potentiation window size) after offset of auditory responses (or else there are no pre-post spike pairings that fall into the STDP window). Also, for adaptation to give rise to an implicit error signal (Fig. 2B), the adaptation time constant needed to be long enough to prevent recovery from adaptation before arrival of visual inputs (i.e., τ_{K} ≳ T_{lat}).
Under special circumstances, when τ_{−} < τ_{+}, we found that the delta or perceptron rule can also be derived for neurons without SFA (see end of Mathematical Derivation of the Perceptron Learning Rule in SI Text ). However, because τ_{−} < τ_{+} has not been reported experimentally, it remains to be seen whether this scenario is biologically relevant.
In previous models of collicular map formation based on Hebbian plasticity (18, 25), temporal correlations and response latencies were not considered, leaving out the potential significance of delayed inputs for instructive coding. By contrast, we interpret latency differences as a computational strategy of the brain, agreeing with the notion that latency coding is very prominent in the visual system and can be found as early as in the retina (26).
In our simulations and derivations, we assumed sparseness of auditory and visual inputs (long dead time between consecutive inputs). In this regime, SFA helps to reduce a known instability of STDP (unrestrained potentiation) that arises from very brief inputs (see SFA Helps to Reduce Unrestrained Potentiation for Short and Sparse Inputs in SI Text ).
During complete absence of instructive visual inputs (e.g., at night), synaptic weights in our model decayed down to zero, in agreement with the perceptron learning rule. A mechanism to counteract such undesirable synaptic decay could arise from the existence of an additional source of delayed input to ICX neurons, with function similar to the delayed visual inputs. For example, feedback loops between the ICX and OT (27 –29) could act as a delay line, giving rise to delayed auditory inputs with similar tuning as the preceding ICX auditory responses, thereby promoting synaptic stability (see also Stability of Learned Synaptic Weights when Visual Inputs Are Absent in SI Text ). Such a scenario could also apply to cortex, where SFA and STDP coexist (30 –33) and where decay of feedforward connections could be prevented by delayed inputs arising from feedback loops via higher cortical areas (34, 35).
In computational learning theories, STDP has been linked to temporal difference learning (36) and to maximization of mutual information (37). Here, we extend this list of computational functions of STDP to include gradientdescent error minimization. By establishing a connection between the delta rule and simple neuron biophysics, our work strengthens the links between computational learning rules and adaptation and plasticity in biological systems. Along with similar efforts (38), our work suggests that learning rules derived from computational insights may be more compatible with simple neuron biophysics than previously thought.
Methods
Integrate-and-Fire Neurons.
The leaky integrate-and-fire model ICX neuron with membrane potential U(t) satisfies

C_{m} dU/dt = −I_{L} − I_{K} + I_{s} + I_{b}, [7]

where C_{m} = 0.5 nF is the membrane capacitance, I_{L} = g_{L}(U − E_{L}) is the leakage current, I_{s} is the total excitatory synaptic input current from auditory and visual afferents, I_{K} is a firing-rate adaptation current, and I_{b} is a background input current (see Additional Details on Methods in SI Text). The threshold potential is E_{θ} = −50 mV, the reset/resting potential is E_{L} = −70 mV, and the leakage conductance is g_{L} = 20 nS. When the membrane potential reaches E_{θ}, the neuron produces an action potential and the membrane potential is reset to E_{L}. There is no refractory period.
SFA.
The adaptation current I_{K} = g_{K}(U − E_{K}) in Eq. 7 models calcium-activated potassium channels, where g_{K} is the potassium conductance and E_{K} = −70 mV is the potassium reversal potential. The potassium conductance, g_{K}, is a step-and-decay function driven by the neuron's spike train, ρ(t) (sum of delta functions):

dg_{K}/dt = −g_{K}/τ_{K} + Δ_{gK}ρ(t), [8]

with increment Δ_{gK} = 80 nS on every spike and a decay time constant τ_{K} = 110 ms (Fig. 2A). The decay time constant was inferred from the work of Gutfreund and Knudsen (20).
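A minimal Euler-integration sketch of Eqs. 7 and 8, using the parameter values above, illustrates how the adaptation conductance lengthens inter-spike intervals. The constant input current, the simulation length, and the omission of the background and synaptic conductance terms are simplifying assumptions for illustration.

```python
# Euler sketch of the LIF neuron (Eq. 7) with SFA conductance (Eq. 8),
# parameters as in Methods; background/synaptic inputs replaced by a
# constant current I_in (an assumption for illustration).
C_m = 0.5e-9       # membrane capacitance, F
g_L = 20e-9        # leak conductance, S
E_L = -70e-3       # resting/reset potential, V
E_th = -50e-3      # spike threshold, V
E_K = -70e-3       # potassium reversal potential, V
tau_K = 110e-3     # adaptation decay time constant, s
dt = 0.1e-3        # integration step, s

def simulate(I_in, T=0.5, dgK=80e-9):
    """Return spike times (s) for a constant input current I_in (A)."""
    U, g_K, spikes = E_L, 0.0, []
    for i in range(int(T / dt)):
        I_leak = g_L * (U - E_L)
        I_adapt = g_K * (U - E_K)
        U += dt / C_m * (-I_leak - I_adapt + I_in)
        g_K -= dt * g_K / tau_K            # exponential decay of adaptation
        if U >= E_th:                      # threshold crossing: emit spike
            spikes.append(i * dt)
            U = E_L                        # reset (no refractory period)
            g_K += dgK                     # conductance step on every spike
    return spikes

adapted = simulate(1.0e-9)                 # 1 nA step input, with SFA
non_adapted = simulate(1.0e-9, dgK=0.0)    # same input, adaptation disabled
assert len(adapted) < len(non_adapted)     # SFA lowers the sustained rate
```

With E_{K} = E_{L}, each spike effectively raises the total leak by Δ_{gK}, so the neuron fires briskly at stimulus onset and then slows as g_{K} accumulates, matching the transient responses described in Results.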
Auditory and Visual Inputs.
The synaptic current, I_{s}, onto the neuron stems from populations of visual and auditory afferents:

I_{s} = Σ_{j} g_{j}^{A}s_{j}^{A}(t)(E_{ex} − U) + g^{V}s^{V}(t)(E_{ex} − U), [9]

where s_{j}^{A}(t) is the synaptic activation of the jth auditory afferent and s^{V}(t) is the summed synaptic activation from a pool of visual neurons. The connection strength, g_{j}^{A}, is modified according to an STDP rule and constrained to 0 ≤ g_{j}^{A} ≤ g_{max} with g_{max} = 1.25 nS. The visual-input synapses are of fixed strength, g^{V} = 3 nS; because they convey inputs from independently firing visual neurons, we represent their synaptic activation variables by the single variable s^{V}(t), describing the entire pool. All synapses are excitatory with reversal potential E_{ex} = 0 mV. Synaptic activations, s(t), are step-and-decay functions. Each time an input spike arrives, s(t) is incremented by 1. Between spikes, s(t) decays exponentially to zero according to τ_{s}ds/dt = −s, with a time constant of τ_{s} = 10 ms (39).
The auditory and visual step-input amplitudes (firing rates) for simulations in Figs. 2 and 3 varied from a = 0–350 Hz and υ = 0–250 Hz, respectively.
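The step-and-decay activation s(t) just described can be sketched as follows; the afferent spike times below are arbitrary examples, not model inputs.

```python
import numpy as np

# Step-and-decay synaptic activation s(t): incremented by 1 per input spike,
# decaying exponentially with tau_s = 10 ms between spikes (Euler update).
tau_s, dt = 10.0, 0.1              # ms
t = np.arange(0.0, 100.0, dt)
spike_times = [10.0, 15.0, 20.0]   # example afferent spike times (assumed)
s = np.zeros_like(t)
for i in range(1, len(t)):
    s[i] = s[i - 1] * (1 - dt / tau_s)               # decay toward zero
    if any(abs(t[i] - ts) < dt / 2 for ts in spike_times):
        s[i] += 1.0                                  # unit step per spike
assert s.max() > 1.0               # closely spaced spikes summate
```

Because increments sum on top of the decaying trace, afferent spikes arriving within about one τ_{s} of each other produce a larger peak activation than a single spike.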
STDP.
The STDP pairing function in Eq. 3 is defined by

W(Δt) = A_{+}e^{−Δt/τ_{+}} for Δt > 0

and

W(Δt) = −A_{−}e^{Δt/τ_{−}} for Δt < 0.

We set the half-widths of the pairing function to τ_{+} = 50 ms and τ_{−} = 110 ms and the amount of potentiation per spike to A_{+} = 0.001. The amount of depression per spike was chosen according to the relationship B = A_{−}τ_{−}/(A_{+}τ_{+}) = 1.05, which implies that the net area under W is negative. The half-widths of the pairing function were chosen to be within the range of the correlation time of auditory and visual inputs. The value for τ_{−} on the order of 100 ms is typical in cortex (40, 41), whereas measured cortical values for τ_{+} tend to be smaller than 50 ms (on the order of 20 ms). However, our findings also applied to such small τ_{+} values, provided that τ_{−} was small as well or that auditory stimuli were sufficiently far away from the animal (>10 m) to provide for pre-post spike pairing within τ_{+} (Fig. S8).
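As a consistency check on this parameter choice, the sketch below evaluates the net area under W numerically and compares it with the closed form A_{+}τ_{+}(1 − B); the integration range and step size are implementation choices.

```python
import numpy as np

# Verify that the net area under the STDP window W is negative for the
# quoted parameters: tau_+ = 50 ms, tau_- = 110 ms, A_+ = 0.001, B = 1.05.
A_plus, tau_plus = 0.001, 50.0
tau_minus, B = 110.0, 1.05
A_minus = B * A_plus * tau_plus / tau_minus   # from B = A_- tau_- / (A_+ tau_+)

ds = 0.01                                     # ms, integration step (choice)
s = np.arange(-2000.0, 2000.0, ds)            # range covers both tails
W = np.where(s > 0, A_plus * np.exp(-s / tau_plus),
             -A_minus * np.exp(s / tau_minus))
net_area = W.sum() * ds
# analytically: A_+ tau_+ - A_- tau_- = A_+ tau_+ (1 - B) < 0
assert net_area < 0
assert np.isclose(net_area, A_plus * tau_plus * (1 - B), rtol=2e-2)
```

The negative net area (A_{+}τ_{+}(1 − B) = −0.0025 here) is what gives uncorrelated spike pairs a net depressing effect, as stated above.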
Note on Parameter Choice.
In our ICX-neuron simulations, we tried to constrain model parameters by existing data. When this was not possible, we adhered to the constraint that simulated ICX rates should match those of experimentally recorded ICX responses (22). To produce Figs. 2B, 3 C–E, and 4, we set the visual stimulus duration to 50 ms and the auditory stimulus duration to 70 ms. Our results were insensitive to these and similar differential changes of auditory and visual stimulus durations.
Analytical Derivation.
The perceptron learning rule in Eq. 5 is a generic consequence of the interplay between adaptation and STDP and does not depend on model details. In fact, Eqs. 1 and 5 can be derived analytically by simplifying the conductance-based model equations (Eqs. 7–9) using the method of time averaging (24) and simplifying Poissonian assumptions (42, 43). In the following, we briefly outline this derivation assuming that E_{K} = E_{L} and that the duration, T, of auditory–visual stimuli is smaller than the visual latency (T ≤ T_{lat}). This latter assumption implies that the neuron's auditory and visual responses, A and V, do not temporally overlap, and thus are unambiguously defined. The detailed derivation is provided in SI Text (Mathematical Derivation of the Perceptron Learning Rule).
Derivation of Eq. 1.
The method of time averaging consists of replacing the neuron's spike train, ρ(t), in Eq. 8 by the average firing rate, R(·), which is a good approximation, provided that the time scale of spike-frequency adaptation (τ_{K} = 110 ms) is much longer than the membrane time constant (τ_{m} = 25 ms). For integrate-and-fire neurons (Eq. 7), this firing-rate function, R(I), is an approximate threshold-linear function of the total membrane current I = g_{L}(E_{L} − E_{θ}) + g_{K}(E_{L} − E_{θ}) + I_{V} + I_{A}:

R(I) = α[I]_{+}, [10]

where [x]_{+} = max(x, 0) denotes rectification, α is the slope of the f-I curve, and I_{A} and I_{V} are the synaptic input currents from auditory and visual afferents, respectively, in Eq. 9. More specifically, Eq. 10 is an excellent linear approximation of the exact (nonlinear) expression for R(I) for large suprathreshold input currents, I ≫ I_{θ}, where I_{θ} = (g_{L} + g_{K})(E_{θ} − E_{L}) is the sum of leak and adaptation currents at firing threshold (absolute values).
Under the approximation (Eq. 10), the adaptation conductance, g_{K}(t), in Eq. 8 turns into a low-pass-filtered copy of the average firing rate, R(t); mathematically, g_{K}(t) obeys a simple first-order linear differential equation. When we analytically solve this linear differential equation for g_{K}(t) and R(t) and then compute the average rates A and V over the respective response windows [0, T] and [T_{lat}, T + T_{lat}], we find the desired linear relationship between A, V, and I_{V}, as given by Eq. 1; the explicit expressions for the constants c_{0}–c_{2} are given in SI Text. Note that V in Eq. 1 depends linearly on the constant visual input υ, because to first-order approximation I_{V} is proportional to Nυ (where N is the number of neurons in the OT pool and the proportionality factor involves the average ICX membrane voltage). For our choice of parameters, the offset c_{0} in Eq. 1 can be neglected because it is small (6 Hz) when compared with the range of υ (1–250 Hz). The linear relationship in Eq. 1 is exact only in a range of sufficiently large visual responses [Inequality (Eq. 2)] because of the threshold nonlinearity in Eq. 10. In practice, however, we found that Eq. 1 provides a reasonably good approximation of visual responses as a function of preceding auditory responses also in the full range, V ≥ 0 (Fig. 2B).
Derivation of V ≥ c_{3}A as the Range of Linear Behavior.
For I_{V} ≥ I_{θ}, the neuron immediately responds to the visual input in spite of its adapted state; in such a case, the integral defining V is straightforward to compute and leads to a linear relationship between A and V. Solving the differential equation for g_{K}, we find that its value just before the arrival of the visual input is a function of the auditory response, A. Combining this with the fact that V is linear in I_{V}, using Eq. 1, we arrive at Inequality (Eq. 2); the explicit expression for the constant c_{3} is given in SI Text.
Note that when I_{V} < I_{θ} , the response V is either delayed by more than T _{lat} or completely suppressed and the relationship between A and V becomes nonlinear.
Derivation of the Learning Rule (Eq. 5).
Assuming step functions for inputs a(t) and υ(t), the neuron's response, R(t), is a simple sum of constants and exponentials in time, t. As a consequence, the integration in Eq. 4 can be easily performed, and we find the desired perceptron learning rule of Eq. 5; the explicit expressions for the constants c_{4} and c_{5} are given in SI Text.
Acknowledgments
P.D. is partially funded by the Zentrum fuer Neurowissenschaften Zurich.
Footnotes
 ^{1}To whom correspondence should be addressed at: Institute of Neuroinformatics, University of Zurich, Winterthurerstrasse 190, Zurich 8057, Switzerland. Email: shih@ini.phys.ethz.ch.

Author contributions: S.C.L. and R.H.R.H. designed research; P.D. performed research; P.D., S.C.L., and R.H.R.H. analyzed data; P.D., S.C.L., and R.H.R.H. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/cgi/content/full/0909394107/DCSupplemental.

Freely available online through the PNAS open access option.