Theory of neuronal perturbome in cortical networks

Significance Brains are composed of networks of neurons that are highly interconnected. A central question in neuroscience is how such neuronal networks operate in tandem to make a functioning brain. To understand this, we need to study how neurons interact with each other in action, such as when viewing a visual scene or performing a motor task. One way to approach this question is by perturbing the activity of functioning neurons and measuring the resulting influence on other neurons. By using computational models of neuronal networks, we studied how this influence in visual networks depends on connectivity. Our results help to interpret contradictory results from previous experimental studies and explain how different connectivity patterns can enhance information processing during natural vision.

…, C, respectively, for rate-based neuronal networks with similar weight matrices. For each network, the weight matrix is generated with the same combination of parameters as in Fig. 3B, C, but instead of inferring the influence from the weight matrix (as in SI Appendix, Methods, Eq. 19), the influence is calculated from rate-based simulations of the network (SI Appendix, Methods, Eq. 14). (B) … for the total duration of the simulation (20 s). The neuron is suppressed, as shown by its average firing rate (right). (C) Free membrane potential (Vm) of the same neuron as in (B), for 1 s of simulation, before (black) and after (red) perturbation. Lines show the average free Vm during the total simulation time (20 s). The free Vm of each neuron was obtained by simulating an extra neuron with exactly the same inputs and dynamics but without the spiking mechanism, and measuring its membrane potential. This provides a true estimate of the membrane potential, free of the distortions induced by the spiking mechanism. (D) Relationship between the rate change of E neurons (influencees) in the network and the change in their free Vm, as a result of the perturbation of a sample E neuron (influencer). (E) Influence inferred from free Vm versus influence inferred from spiking activity, for all pairs of E influencers and influencees. Influence is obtained as the change in the free Vm or in the firing rate as a result of a single-neuron perturbation, normalized by the size of the perturbation. (F) Average influence as a function of RF correlation (top) or signal correlation (bottom), as inferred from spikes (left) or free Vm (right), compared with the prediction from the weight matrix (red).

(A) Average influence as a function of signal correlations in rate-based networks with sparse EE connectivity (connection probability: 25%). Connectivity between a pair of E neurons is obtained as w_ij = η c_ij CC_ij + ζ, where c_ij is the initial connectivity from neuron j to neuron i (c = 1, connected; c = 0, disconnected), CC_ij is the RF similarity of the two neurons, η = 0.01, and ζ is a random value drawn from a normal distribution N(0, 0.005). Weights smaller than 0 are set to zero to ensure the non-negativity of excitatory weights. Influence inferred from average rates (black) is compared with the prediction from the weight matrix (red). (B) Influence for all pairs of E influencers and influencees (gray dots), compared with the grand average (red). (C) Distribution of influence for EE pairs with c = 1 (red) and c = 0 (black), where c is the initial connectivity parameter described in (A). When c_ij = 1 (i.e. a connection exists from presynaptic neuron j to postsynaptic neuron i), the weight of the synapse (w_ij) is modulated according to the RF similarity of the pair of neurons. (D) Distribution of influence for EE pairs, when connectivity is assayed from the actual synaptic weights: neuron j is considered to be connected to neuron i if w_ij > 0. The tuning curve of each neuron is shown when the network is stimulated with 16 gratings of different orientations (ranging from 0 to 180º), at a fixed spatial frequency (ω = 0.04) and spatial phase (φ = 0º) (same stimuli as in …).

Methods

Network simulations

Receptive fields (RFs) of neurons are modelled as Gabor functions,

G(x, y) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πωx′ + φ), (1)

with x′ = (x − x_0) cos θ + (y − y_0) sin θ and y′ = −(x − x_0) sin θ + (y − y_0) cos θ, where
x and y are the position on the visual field, σ sets the size, γ the aspect ratio, θ is the preferred orientation, ω = 1/λ the spatial frequency, and φ the spatial phase of the receptive field. RFs are instantiated on an L × L field, with their resolution expressed in pixels per degree (ppd), and centered at (x_0, y_0). Unless stated otherwise, parameters are set to the following default values: L = 50, ppd = 4, σ = 2.5, γ = 0.5. The following parameters are drawn randomly from a uniform distribution: (x_0, y_0) from [−1.25, 1.25] degrees, and θ and φ from [0, π) and [0, 2π), respectively. The spatial frequency, ω, is drawn from a gamma distribution with shape parameter 2 and scale parameter 0.04 (excitatory neurons) or 0.02 (inhibitory neurons).
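As a concrete illustration, the RF instantiation described above can be sketched as follows (a minimal sketch: the function name and the use of NumPy are ours; the Gabor parameterization follows the defaults stated in the text):

```python
import numpy as np

def gabor_rf(L=50, ppd=4, x0=0.0, y0=0.0, sigma=2.5, gamma=0.5,
             theta=0.0, omega=0.04, phi=0.0):
    """Gabor receptive field on an L x L pixel grid (ppd pixels per degree)."""
    # Pixel coordinates in degrees, centered on the field
    coords = (np.arange(L) - L / 2) / ppd
    x, y = np.meshgrid(coords, coords)
    # Rotate coordinates into the RF frame
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * omega * xr + phi)
    return envelope * carrier

# Draw the randomized parameters as described in the text
rng = np.random.default_rng(0)
rf = gabor_rf(x0=rng.uniform(-1.25, 1.25), y0=rng.uniform(-1.25, 1.25),
              theta=rng.uniform(0, np.pi), phi=rng.uniform(0, 2 * np.pi),
              omega=rng.gamma(shape=2, scale=0.04))
```

The same routine with a very large σ yields the full-field gratings used later as stimuli.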

Neuronal connectivity
Network connectivity is represented by the weight matrix W, with the entry w_ij denoting the weight of the connection from presynaptic neuron j to postsynaptic neuron i. Connectivity is all-to-all, and the weight between two neurons is modulated as a function of the similarity of their respective receptive fields. Functional similarity is assayed in two ways. First, by calculating the correlation coefficient of the receptive fields (RF CC) directly:

RF CC_ij = corr(G_i, G_j), (2)

where G_i and G_j are the (vectorized) RFs of neurons i and j. Alternatively, functional similarity is assayed by using external stimuli. Consider a sequence of stimuli, e.g. N_stim static gratings or natural images. The functional behaviour of neuron i in response to this stimulus set can be described by a response vector ρ_i (of size N_stim), whose k-th element is the correlation coefficient of the neuronal RF with the k-th stimulus in the sequence. For a pair of neurons (i, j), the Response CC (also referred to as signal CC) is then calculated as the correlation of their response vectors:

Response CC_ij = corr(ρ_i, ρ_j).

To evaluate response CCs in our simulations, we used 1000 static gratings with random orientations drawn from [0, π) and spatial frequencies drawn from a gamma distribution with shape parameter 2 and scale parameter 0.04. Gratings were instantiated in the same fashion as the Gabor RFs described above, with the difference that they were extended in space to obtain full-field gratings. This was achieved by choosing very large values of σ in Eq. (1).
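In code, the two similarity measures could look like this (a sketch; `rf_cc` and `response_cc` are our names, and random arrays stand in for the RFs and gratings):

```python
import numpy as np

def rf_cc(rf_i, rf_j):
    """RF correlation: Pearson CC of the two receptive fields (Eq. 2)."""
    return np.corrcoef(rf_i.ravel(), rf_j.ravel())[0, 1]

def response_cc(rf_i, rf_j, stimuli):
    """Signal correlation: CC of the two response vectors (one entry per
    stimulus, each entry being the CC of an RF with that stimulus)."""
    rho_i = np.array([np.corrcoef(rf_i.ravel(), s.ravel())[0, 1] for s in stimuli])
    rho_j = np.array([np.corrcoef(rf_j.ravel(), s.ravel())[0, 1] for s in stimuli])
    return np.corrcoef(rho_i, rho_j)[0, 1]

rng = np.random.default_rng(0)
rf1, rf2 = rng.normal(size=(50, 50)), rng.normal(size=(50, 50))
stimuli = rng.normal(size=(100, 50, 50))   # stand-ins for the 1000 gratings
cc = response_cc(rf1, rf2, stimuli)
```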
For each pair of neurons i and j, the weight between them is then modulated as a function of the respective measure of similarity, CC_ij.

Neuronal simulations
The activity of neuronal networks was simulated by numerically solving the following equation:

τ dr/dt = −r + [W r + s]_+, (5)

where r is the vector of all firing rates (for N neurons, composed of N_E excitatory and N_I inhibitory neurons), W is the matrix of connection weights as described above, s is the vector of external input to all neurons, τ is the effective time constant of integration, and [ ]_+ denotes half-wave rectification.
The influence is assayed by simulating the response of the network for a given time (T) in the baseline state (s_0) and with an extra perturbation of a single neuron:

s_p = s_0 + δs_i,

where δs_i is a perturbation vector containing δp (the size of the perturbation) at the i-th element and zeros for the rest of the neurons. The average (temporal) firing rate of the neurons in the stationary state (after discarding the initial transient response for T_trans) is calculated for the baseline and the perturbed state as r_0 and r_p, respectively. The vector of change in firing rates, δr = r_p − r_0, is normalized by the size of the perturbation to obtain the influence of neuron i (the influencer) on the rest of the network (the influencees): ψ = δr/δp.

Influence in spiking networks was obtained by modelling the membrane potential dynamics of leaky integrate-and-fire neurons:

τ_m dV_m/dt = −V_m + s(t),

where V_m denotes the membrane potential of a neuron, and τ_m = RC is the time constant of integration of the membrane potential, with R and C being the membrane resistance and capacitance, respectively. When the membrane potential reaches the voltage threshold, V_th = 20 mV, a spike is elicited and the membrane potential is reset to the reset voltage, V_reset = 0. The input to the neuron at each time is described by s(t) = R I(t), which comprises external input and internally generated input from recurrent activity. Once a spike is emitted by a presynaptic source, an instantaneous change in the membrane potential of all postsynaptic targets is emulated in the next simulation time step. The total input at time t to a postsynaptic neuron i can therefore be expressed as s_i(t) = Σ_j w_ij δ_j(t), where δ_j(t) denotes the presence (1) or absence (0) of a spike in presynaptic source j (including external input and input from presynaptic neurons in the recurrent network), and w_ij is the weight of the connection (in mV) from the j-th presynaptic source.
The weight matrix was generated in the same fashion as previous weight matrices, except weights were multiplied by V th , such that the normalized weights (by threshold) were similar between the spiking and rate-based networks.
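The rate-based simulation and the single-neuron perturbation protocol can be sketched as follows (assumptions: forward-Euler integration, a long enough simulation that the transient has decayed, and our own function names and parameters):

```python
import numpy as np

def simulate_rates(W, s, tau=10.0, dt=0.1, T=5000):
    """Forward-Euler integration of  tau dr/dt = -r + [W r + s]_+ ."""
    r = np.zeros(W.shape[0])
    for _ in range(T):
        r += (dt / tau) * (-r + np.maximum(W @ r + s, 0.0))
    return r

def influence_of(W, s0, i, dp=0.05):
    """Influence of neuron i: psi = (r_p - r_0) / dp."""
    r0 = simulate_rates(W, s0)
    sp = s0.copy()
    sp[i] += dp                      # perturb only the i-th neuron
    rp = simulate_rates(W, sp)
    return (rp - r0) / dp

# Small random network, weak enough to be stable
rng = np.random.default_rng(0)
N = 40
W = rng.normal(0.0, 0.02, (N, N))
s0 = np.ones(N)
psi = influence_of(W, s0, i=0)
```

With these weak weights all rates stay positive, so the simulated influence matches the linear prediction (I − W)^(−1) derived later in the text.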

Data analysis
To analyse the behaviour of the influence as a function of the signal correlation between influencers and influencees in different regimes (Fig. 3), we employed a feature-specific suppression/amplification (S/A) index. It was calculated from the average influence, obtained as the mean influence across all pairs of influencers and influencees with signal correlations within a given bin (bin width 0.02) between −1 and 1. The index is composed of three submetrics: (1) x, the mean average influence in the intermediate regime; (2) y, the slope of the dependence of the average influence on signal correlation in the intermediate regime; and (3) z, the mean level of average influence in the highly similar regime. The intermediate regime was defined as signal correlations between −0.3 and 0.3, and the highly similar regime was taken as signal correlations between 0.7 and 0.9. Each submetric was normalized to the maximum of its absolute values across all the network simulations with different parameters tested (e.g. as in Fig. 3B-G). The feature-specific S/A index (SAI) was then obtained as a combination of the three normalized submetrics: the more the suppression in the intermediate regime and the more the amplification in the highly similar regime, the higher the SAI.
To quantify the combined influence of perturbing two influencers in double-neuron perturbations (Fig. 6), we developed a synergy index. For the first influencer (neuron i), the effect of the additional perturbation of a second influencer (neuron j) on the influencee (neuron k) was quantified as

Δψ(i, j, k) = ψ({i, j} → k) − ψ(i → k),

where ψ({i, j} → k) is the influence of the double-neuron perturbation of {i, j} on k and ψ(i → k) is the single-neuron influence of i on k. The synergy of influence within the triplet {i, j, k} was then calculated as: syn(i, j, k) = Δψ(i, j, k)/ψ(i → k).
We excluded pairs with very small single-neuron influences (ψ(i → k) < 0.001) to avoid their overrepresentation in the metric. The average synergy between two influencers (i, j) was calculated as the mean synergy across all target influencees:

syn(i, j) = ⟨syn(i, j, k)⟩_k.

Note that the synergy will be positive (synergistic) if the change in the influence as a result of the interaction with the second influencer is in the same direction as the original, single-neuron influence, and negative (antagonistic) otherwise. Thus, both suppression and amplification of single-neuron influences can undergo synergy (or antagonism) as a result of double-neuron perturbations, depending on whether the interaction exacerbates (or diminishes) the initial influence in the same (or the reverse) direction.
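Given precomputed single and joint influences, the synergy metric reduces to a few lines (a sketch; the function name is ours, and we read the exclusion threshold as a magnitude criterion):

```python
import numpy as np

def average_synergy(psi_pair, psi_single, min_psi=0.001):
    """Average synergy of two influencers over their target influencees.

    psi_pair[k]   : influence of jointly perturbing {i, j} on influencee k
    psi_single[k] : single-neuron influence of i on influencee k
    Influencees with very small single-neuron influence (|psi| < min_psi)
    are excluded to avoid their overrepresentation in the metric."""
    psi_pair = np.asarray(psi_pair, dtype=float)
    psi_single = np.asarray(psi_single, dtype=float)
    keep = np.abs(psi_single) >= min_psi
    delta = psi_pair[keep] - psi_single[keep]   # extra effect of the 2nd influencer
    return float(np.mean(delta / psi_single[keep]))

# Doubling the influence on one target (+1), halving it on another (-0.5)
syn = average_synergy(psi_pair=[0.2, 0.05], psi_single=[0.1, 0.1])
```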

Decoding of natural images
To evaluate the population responses of neuronal networks to external stimuli, we presented natural images to our model networks. Natural images were chosen from the McGill calibrated colour image database (http://tabby.vision.mcgill.ca). The feedforward input (I_ffw) to each neuron in response to a natural image s was calculated as the normalized projection of the image on the neuron's RF,

I_ffw = ⟨G, s⟩ / (|G| |s|),

where ⟨.⟩ denotes the dot product and |.| is the norm of the vector. To quantify how discriminable the population responses to two images are, we calculated the angle θ_ij between the population response vectors to images i and j. θ_ij = 90º corresponds to the maximum discriminability (orthogonal representations), while 0º or 180º indicate maximally collinear relationships (in the same or opposite directions, respectively). We use a normalized version of this angle,

θ̃_ij = 1 − |θ_ij − 90º| / 90º,

to quantify discriminability (as in Fig. 7H), which ranges from 0 (minimum discriminability) to 1 (maximum discriminability).
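The discriminability measure can be sketched as follows (our reading of the "normalized angle": linear in the angular distance from 90º, so that orthogonal responses score 1 and collinear ones score 0):

```python
import numpy as np

def discriminability(r_a, r_b):
    """Normalized angle between two population response vectors:
    1 for orthogonal responses (theta = 90 deg), 0 for collinear ones
    (theta = 0 or 180 deg)."""
    cos_t = np.dot(r_a, r_b) / (np.linalg.norm(r_a) * np.linalg.norm(r_b))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return 1.0 - abs(theta - 90.0) / 90.0
```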
To assess the decoding capacity of neuronal networks to discriminate natural images, we trained linear decoders on the population activity of the excitatory neurons. Each decoder was trained to distinguish a target image from other images. The ensemble of natural images was broken into two random parts (test and train sets, each containing 300 images) and the decoder was trained on the train set to detect the target image from the rest of the images.
The training was done by presenting 300 pairs of images, each containing the target image and one of the 300 training images. The decoder then finds (via linear regression) the weighting of the population activity of the network that best separates the response to the target image from the responses to the non-target images. To control for different levels of population activity under different conditions (e.g. networks with strong and weak recurrent interactions), we normalized the activity of the networks such that the average activity of the network in response to each image was 1.
The decoder was tested on the test set, by presenting pairs of images containing the target image and each of the 300 test images. A threshold of 90% was set for the correct detection.
The percent correct was then calculated as the fraction of pairs for which the target image passed the threshold and the test image did not. The decoding task was performed with each of the 600 images as the decoding target. For each decoding task, the procedure was repeated for different levels of noise added to the population activity (both during training and testing). The noise was added to the normalized activity of all excitatory neurons in response to each image, and was drawn from a uniform distribution between 0 and ξ, with ξ ranging from 0.02 to 0.1 (Fig. 7K). The example shown in Fig. 7J had an intermediate noise level of 0.04.
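A stripped-down version of the decoding pipeline might look like this (a sketch only: a least-squares readout, a toy response matrix instead of the 300/300 image sets, and no added noise or normalization; function names are ours):

```python
import numpy as np

def train_decoder(R_train, target_row):
    """Least-squares readout separating the target image's population
    response (label 1) from the other training responses (label 0)."""
    y = np.zeros(R_train.shape[0])
    y[target_row] = 1.0
    w, *_ = np.linalg.lstsq(R_train, y, rcond=None)
    return w

def percent_correct(w, r_target, R_test, threshold=0.9):
    """Fraction of (target, test) pairs in which the target response
    passes the threshold and the test response does not."""
    t = r_target @ w
    hits = [(t >= threshold) and (r @ w < threshold) for r in R_test]
    return float(np.mean(hits))

# Toy example: 5 images with well-separated (one-hot) population responses
R_train = np.eye(5)
w = train_decoder(R_train, target_row=0)
pc = percent_correct(w, R_train[0], R_train[1:])
```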

Theoretical analysis of neuronal influence in single-neuron perturbations
We analytically evaluate the effect of single-neuron perturbations in networks of rate-based neurons as described above (Eq. (5)). We drop the firing threshold nonlinearity and analyse the linear behaviour of the network, described by the following dynamics:

τ dr/dt = −r + W r + s.

The stationary-state solution of the firing rates under these dynamics is obtained by setting dr/dt = 0 and can be written as:

r = (I − W)^(−1) s.

We define A = (I − W)^(−1) as the operator which acts on the external input to give the steady-state firing rates in any equilibrium, r_0 = A s_0. A single-cell perturbation around this steady state leads to a new firing rate solution, r_p = A s_p. Here, s_p = s_0 + δs, and δs is a vector of zeros at all entries except for the neuron which is perturbed. If the i-th neuron in the network is perturbed, we have:

δs = δp e_i,

where δp is the size of the perturbation and e_i is the i-th unit vector. To obtain the influence of the perturbation of neuron i on a postsynaptic neuron j, ψ(i → j), we need to calculate the change in the j-th entry of r_p. Writing

δr = A δs, (17)

the rate change of the j-th neuron is obtained as:

δr_j = A_ji δp,

where A_ji is the entry in the j-th row and i-th column of matrix A. Writing the influence as the rate change of the influencee j divided by the perturbation strength of the influencer i,

ψ(i → j) = δr_j / δp = A_ji,

reveals that A_ji is, in fact, the influence.
To obtain the neuronal influence in single-neuron perturbations, we used the above framework to evaluate ψ(i → j) = A ji , by mathematically calculating the influence in networks with different profiles of connectivity. We explain this approach in more detail below.
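The identity ψ(i → j) = A_ji is easy to check numerically (a sketch with an arbitrary random, stable weight matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
# Random weights, scaled so that the spectral radius stays well below 1
W = rng.normal(0.0, 1.0, (N, N)) / (4.0 * np.sqrt(N))
A = np.linalg.inv(np.eye(N) - W)

s0 = rng.uniform(0.5, 1.5, N)       # arbitrary baseline input
dp, i = 0.1, 3
ds = np.zeros(N)
ds[i] = dp                          # perturb neuron i only

r0 = A @ s0                         # baseline steady state
rp = A @ (s0 + ds)                  # perturbed steady state
psi = (rp - r0) / dp                # influence of neuron i on the network
```

By linearity, `psi` equals the i-th column of A, i.e. ψ(i → j) = A_ji for every influencee j.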

Calculating influence for a general weight matrix
To obtain the influence, we calculate A_ji by expanding the matrix A in powers of W:

A = (I − W)^(−1) = I + W + W² + W³ + … (20)

A_ji can therefore be expressed as:

A_ji = w_ji + Σ_k w_jk w_ki + Σ_{k,l} w_jl w_lk w_ki + … (21)

Note that the first term in Eq. (20) (the identity matrix) does not contribute to the influence in Eq. (21), since i ≠ j (the influencer and the influencee are different neurons). The series describes different pathways of interaction from i to j in the following fashion. (I) Mono-synaptic influence denotes the direct interaction from i to j, which is read from the corresponding entry of the original weight matrix:

ψ_1(i → j) = w_ji.

(II) Di-synaptic influence entails second-order interactions, comprising all the pathways through which neuron i can influence neuron j via secondary neurons. It can be mathematically expressed as:

ψ_2(i → j) = Σ_k w_jk w_ki,

where the index k runs over all the neurons in the network that mediate the influence from neuron i to neuron j.
(III) Tri-synaptic influence captures all interactions with two layers of intermediate neurons, denoted by the indices k and l in the following formulation:

ψ_3(i → j) = Σ_{k,l} w_jl w_lk w_ki.

Higher-order interactions (including the tetra-synaptic ψ_4(i → j), penta-synaptic ψ_5(i → j), etc.) can be calculated via similar expressions.
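The convergence of this pathway expansion can be verified numerically: summing W + W² + W³ + … reproduces the off-diagonal entries of A (a sketch; parameters are arbitrary, chosen so the series converges):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
W = rng.normal(0.0, 1.0, (N, N)) / (4.0 * np.sqrt(N))  # spectral radius ~ 0.25
A = np.linalg.inv(np.eye(N) - W)

# Accumulate W + W^2 + W^3 + ... : mono-, di-, tri-synaptic terms and beyond
psi_series = np.zeros_like(W)
Wk = np.eye(N)
for _ in range(60):          # 60 orders; remaining terms are negligible here
    Wk = Wk @ W
    psi_series += Wk

off_diag = ~np.eye(N, dtype=bool)
```

On the off-diagonal (i ≠ j), the identity term of Eq. (20) drops out, so the truncated series should match A directly.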

Networks with excitation and inhibition
In the next step, we calculated the influence for networks containing the two subtypes of excitatory (E) and inhibitory (I) neurons, with the numbers of neurons denoted by N_E and N_I, respectively. The connection weights from E to E, E to I, I to E and I to I neurons are described, respectively, by J_EE, J_EI, J_IE, and J_II. W is ordered such that the first N_E elements are excitatory neurons and the next N_I elements (rows/columns N_E + 1 to N_E + N_I) represent inhibitory neurons.
The influence of the i-th excitatory neuron on the j-th excitatory neuron in the network can be calculated as:

ψ(i → j) = ψ_1(i → j) + ψ_2(i → j) + ψ_3(i → j) + …,

where the different orders of influence are calculated as follows. (I) Mono-synaptic influence is given by

ψ_1(i → j) = J_EE,

which is the direct connection between the two excitatory neurons.
(II) Di-synaptic influence is calculated as

ψ_2(i → j) = N_E J_EE² + N_I J_IE J_EI,

which contains pathways with either excitatory or inhibitory neurons between the influencer and the influencee.
(III) Tri-synaptic influence can, in turn, be written as

ψ_3(i → j) = N_E² J_EE³ + 2 N_E N_I J_EE J_IE J_EI + N_I² J_IE J_II J_EI.

This includes four possibilities of mediation between the two excitatory neurons (E → E → E → E, E → E → I → E, E → I → E → E, and E → I → I → E, respectively). Note that the last motif entails a net positive influence, as it involves inhibition of inhibition.
(IV) Tetra-synaptic influence can, similarly, be mediated by third-order motifs (E → X → Y → Z → E, where each of {X, Y, Z} can be either E or I, leading to a total of 8 possibilities), and can therefore be written as

ψ_4(i → j) = Σ_{X,Y,Z ∈ {E,I}} N_X N_Y N_Z J_EX J_XY J_YZ J_ZE.

Higher-order influences can be calculated in a similar fashion by counting higher-order motifs.

The case of inhibition-dominance
It is useful to calculate the influence for a simplified version of the abovementioned weight matrices, where J_EE = J, J_EI = αJ_EE, and J_IE = J_II = −gJ_EE. We further assume N_E = N_I = N and write:

ψ_1(i → j) = J,
ψ_2(i → j) = N J² (1 − αg),
ψ_3(i → j) = N² J³ (1 − 2αg + αg²),
…

For the specific condition that α = 1, we have:

ψ_1 = J, ψ_2 = N J² (1 − g), ψ_3 = N² J³ (1 − g)², …,

which in fact provides a closed-form description of the influence at all orders:

ψ_n(i → j) = J [N J (1 − g)]^(n−1).

The total influence can therefore be written as:

ψ(i → j) = Σ_{n=1}^{∞} J [N J (1 − g)]^(n−1), (31)

which, provided |N J (1 − g)| < 1, can be expressed as:

ψ(i → j) = J / (1 − N J (1 − g)).

Note that inhibition dominance, g > 1, does not imply a negative influence here. Rewriting κ = 1 − N J (1 − g), and noticing that κ > 1 for g > 1, we observe divisive inhibition as a result of inhibition dominance:

ψ(i → j) = J / κ = J / (1 + N J (g − 1)). (35)

The stronger the inhibition dominance, the larger the divisive term in the denominator, and hence the stronger the divisive inhibition.
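The closed form can be checked against a direct matrix inverse (a sketch with hypothetical parameters N = 50, J = 0.01, g = 2, α = 1; here every E weight equals J and every I weight equals −gJ, including self-connections, matching the all-to-all sums above):

```python
import numpy as np

N, J, g = 50, 0.01, 2.0       # N_E = N_I = N, alpha = 1, inhibition-dominated
E = J * np.ones((N, N))       # excitatory weights
I = -g * J * np.ones((N, N))  # inhibitory weights
W = np.block([[E, I],
              [E, I]])        # rows: postsynaptic (E then I); cols: presynaptic
A = np.linalg.inv(np.eye(2 * N) - W)

psi_numeric = A[1, 0]                          # E -> E influence, i != j
psi_closed = J / (1.0 + N * J * (g - 1.0))     # divisive form, J / kappa
```

Despite g > 1 the influence stays positive; it is divided, not sign-flipped.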

Calculating influence for networks with specific connectivity
The calculations presented in the previous section can be extended to networks with specific connectivity, where the weights of connections between neurons are defined as a function of their functional similarity. First we consider a scenario where the functional property of neurons is defined by a one-dimensional parameter, e.g. their preferred orientation, θ. We consider weight matrices described by:

w_ij = J (1 + m cos(2(θ_i − θ_j))),

where the connection weight between neurons i and j is modulated by the similarity of their respective preferred orientations. m determines the degree of specificity of the connections, with m = 0 retrieving the unspecific weight matrices described in the previous section. We now calculate the influence (A_ji) for a network of excitatory and inhibitory neurons with specific connectivity, described as:

w_ij^{XY} = J_XY (1 + m_XY cos(2(θ_i^X − θ_j^Y))).

Here, J_XY and m_XY denote, respectively, the average weight and the degree of specificity of the synapses, θ^X represents the preferred orientation, and (X, Y) ∈ {E, I}.
We first calculate the influence for a scenario where all connections have the same degree of specificity, i.e. m_EE = m_EI = m_IE = m_II = m. We also assume that J_EE = J, J_EI = αJ_EE, and J_IE = J_II = −gJ_EE, as described above. Under these conditions, the influence of perturbing excitatory neuron i on excitatory neuron j can be calculated for the different orders of interaction as follows. (I) Mono-synaptic:

ψ_1(i → j) = J (1 + m cos(2Δθ)),

where Δθ = θ_i − θ_j. (II) Di-synaptic:

ψ_2(i → j) = N J² (1 − αg) (1 + (m²/2) cos(2Δθ)),

where the factor m²/2 results from averaging the product of the two specificity factors over the (uniformly distributed) preferred orientations of the intermediate neurons.
(III) Tri-synaptic: summing over all four motifs, each pathway contributes products of the form

[αJ (1 + m cos(2(θ_j − θ_l)))] [J (1 + m cos(2(θ_l − θ_k)))] [J (1 + m cos(2(θ_k − θ_i)))],

weighted by the respective α and −g factors; averaging over the preferred orientations of the intermediate neurons results in:

ψ_3(i → j) = N² J³ (1 − 2αg + αg²) (1 + (m³/4) cos(2Δθ)).

(IV) Tetra-synaptic: accounting for all fourth-order motifs in a similar fashion as explained above, we obtain for the fourth-order influence:

ψ_4(i → j) = N³ J⁴ (1 − 3αg + 2αg² + α²g² − αg³) (1 + (m⁴/8) cos(2Δθ)).

The total influence between excitatory neurons i and j in specific EI networks (for α = 1) can therefore be expressed as:

ψ(i → j) = J / (1 − N J (1 − g)) + J m cos(2Δθ) / (1 − N J (1 − g) m/2).

Note that the influence contains a nonspecific and a specific component, and that the nonspecific component is similar to what we obtained before for the influence in nonspecific networks (Eq. (31)).
The nonspecific part retrieves the same formulation as before (c.f. Eq. (35)):

ψ_nonspec(i → j) = J / (1 − N J (1 − g)) = J / (1 + N J (g − 1)),

while the specific component of the influence can be expressed as:

ψ_spec(i → j) = J m cos(2Δθ) / (1 − N J (1 − g) m/2) = J m cos(2Δθ) / (1 + N J (g − 1) m/2).

Note that inhibition dominance (g > 1) does not yield a net negative influence here either.
That is, feature-specific suppression does not result from general inhibition dominance in specific EI networks, when α = 1. Instead, it leads to (feature-specific) divisive inhibition (c.f. Eq. (35)), with higher inhibition-dominance values (g) implying higher divisive terms for the specific component.
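The decomposition into nonspecific and specific components can be verified against a direct inverse (a sketch; we assume evenly spaced preferred orientations, shared by E and I neurons, with hypothetical parameters N = 100, J = 0.005, g = 2, m = 0.5 and α = 1):

```python
import numpy as np

N, J, g, m = 100, 0.005, 2.0, 0.5            # alpha = 1, N J = 0.5
theta = np.arange(N) * np.pi / N             # evenly spaced preferred orientations
dth = theta[:, None] - theta[None, :]
C = 1.0 + m * np.cos(2.0 * dth)              # like-to-like specificity factor

W = np.block([[J * C, -g * J * C],
              [J * C, -g * J * C]])          # E and I share the same tuning
A = np.linalg.inv(np.eye(2 * N) - W)

pred = (J / (1.0 - N * J * (1.0 - g))                  # nonspecific component
        + J * m * np.cos(2.0 * dth)
        / (1.0 - N * J * (1.0 - g) * m / 2.0))         # specific component
off = ~np.eye(N, dtype=bool)                 # compare E -> E pairs with i != j
```

With evenly spaced orientations the cosine averages are exact, so the prediction matches the inverse to numerical precision.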

Networks with broad inhibitory connectivity
Here we allow the excitatory and inhibitory connections to have different degrees of specificity, m_EE = m_EI = m_e and m_IE = m_II = m_i, with m_i < m_e corresponding to broader inhibitory connectivity. Defining J̃ = m_e J and g̃ = g m_i/m_e, we can write the specific component of the influence as before, with J̃ and g̃ in place of mJ and g:

ψ_spec(i → j) = J̃ cos(2Δθ) / (1 − N J̃ (1 − g̃)/2),

which leads to:

ψ_spec(i → j) = J m_e cos(2Δθ) / (1 − N J (m_e − g m_i)/2).

To obtain feature-specific suppression in single-neuron influences (namely, more suppression for pairs with smaller Δθ), we need 1 − N J (m_e − g m_i)/2 < 0. Broader inhibition does not confer such a negative influence in inhibition-dominated networks if g m_i > m_e. The only situation in which such a negative influence appears is if m_e > g m_i and N J (m_e − g m_i)/2 > 1 at the same time; but note that the latter condition implies instability of the weight matrix along the specific eigenmode.
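The broad-inhibition formula can be checked in the same way (a sketch; hypothetical parameters with m_i < m_e so that inhibitory tuning is broader than excitatory tuning, and evenly spaced orientations as before):

```python
import numpy as np

N, J, g = 100, 0.005, 2.0
m_e, m_i = 0.6, 0.2                          # inhibition broader than excitation
theta = np.arange(N) * np.pi / N
dth = theta[:, None] - theta[None, :]
Ce = 1.0 + m_e * np.cos(2.0 * dth)           # specificity of E connections
Ci = 1.0 + m_i * np.cos(2.0 * dth)           # specificity of I connections

W = np.block([[J * Ce, -g * J * Ci],
              [J * Ce, -g * J * Ci]])
A = np.linalg.inv(np.eye(2 * N) - W)

pred = (J / (1.0 - N * J * (1.0 - g))                  # nonspecific component
        + J * m_e * np.cos(2.0 * dth)
        / (1.0 - N * J * (m_e - g * m_i) / 2.0))       # specific component
off = ~np.eye(N, dtype=bool)
```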

Calculating influence for specific E-I networks with strong E-to-I connectivity
In this section, we relax the previously made assumption of J_EE = J_EI and allow the excitatory neurons to have different connection weights onto their excitatory and inhibitory postsynaptic targets: J_EE = J, J_EI = αJ_EE, J_IE = J_II = −gJ_EE. We assume similar connection specificity for all synapses: m_EE = m_EI = m_IE = m_II = m. Following similar procedures as described before for networks with specific connectivity (and summarized in Eq. (49)), the different orders of influence in specific networks with strong E → I connectivity can be written as:

ψ_1(i → j) = J (1 + m cos(2Δθ)),
ψ_2(i → j) = N J² (1 − αg) (1 + (m²/2) cos(2Δθ)),
ψ_3(i → j) = N² J³ (1 − 2αg + αg²) (1 + (m³/4) cos(2Δθ)),
. . .
We define ψ E = ψ E→X→E and ψ I = ψ I→X→E as two second-order, di-synaptic motifs of influence, one starting from an excitatory and the other from an inhibitory neuron, respectively.
The excitatory second-order motif ψ_E is ψ_2(i → j), by definition, and can hence be written as before:

ψ_E = N J² (1 − αg) (1 + (m²/2) cos(2Δθ)).

The inhibitory second-order motif can, in turn, be calculated as:

ψ_I = N J² g (g − 1) (1 + (m²/2) cos(2Δθ)).

Note that, although it starts from an inhibitory neuron, this motif is positive for g > 1 (the condition which we referred to as inhibition dominance). That is, inhibition dominance leads to a net excitatory effect of the second-order motif of inhibitory neurons; this is because the net positive effect of inhibition of inhibition (I → I → E) is larger than the net negative effect of inhibition of excitation (I → E → E).
We define

ζ_E = 1 − αg and ζ_I = g (g − 1)

as the main factors appearing in the second-order excitatory and inhibitory motifs, respectively. This allows us to write the specific components of the higher-order motifs (in Eq. (57)) in terms of the basic factors of the respective di-synaptic motifs:

ψ_2,spec ∝ ζ_E cos(2Δθ),
ψ_3,spec ∝ (ζ_E + α ζ_I) cos(2Δθ),
ψ_4,spec ∝ (ζ_E² − (α/g) ζ_I²) cos(2Δθ),
. . .
Feature-specific suppression implies a more suppressive influence between neuronal pairs with smaller Δθ. We can now evaluate this for the specific components of the higher-order motifs of influence, in view of the interaction of the basic di-synaptic motifs.
[1] For the 2nd-order motif, this implies that the basic excitatory second-order factor is negative:

ζ_E = 1 − αg < 0, i.e. αg > 1. (62)

Either the E → I connections (parameterized by α) or the I → {E, I} weights (parameterized by g, i.e. inhibition dominance) can be made strong to satisfy this condition, which highlights the significance of the specific excitatory-inhibitory interaction.
[2] For networks with weak recurrent coupling (N J ≪ 1), higher orders of influence are much smaller than the lower orders and can hence be ignored in the net influence. However, higher-order motifs cannot be ignored in networks with strong coupling (N J ≫ 1). In that case, the condition inferred from the 2nd-order motif, ζ_E < 0, does not guarantee a negative specific 3rd-order motif, since we need (ζ_E + α ζ_I) < 0. As we argued above, inhibition dominance (g > 1) implies a positive ζ_I; ζ_I is only negative if we have:

g < 1. (63)

This means that α and g cannot be increased arbitrarily, as we assumed for the 2nd-order motif in [1].
An alternative way to satisfy this condition might be obtained by broadening the inhibition. If we allow for different selectivities of excitatory and inhibitory connectivity (as denoted by m_e and m_i in Section 2.2.1), the condition in Eq. (63) changes to g m_i/m_e < 1. This can now be satisfied by changing g and/or m_i/m_e, with small values of the latter (m_i/m_e < 1) implying broader specificity of inhibitory connections compared to excitatory ones. Overall, it amounts to weak specific inhibitory connectivity, which can be achieved by weaker or broader inhibition.
As strong inhibition-dominance might be necessary to balance nonspecific excitation, broader inhibition might be a better strategy to satisfy the negative influence of this specific motif.
[3] So far, the negative influence of motifs could be satisfied if the two basic 2nd-order excitatory and inhibitory specific motifs were negative. For the 4th-order specific motif to be negative, we need:

ζ_E² − (α/g) ζ_I² < 0.

We therefore have:

ζ_E² / ζ_I² < α/g.

One way to achieve this is to have balanced di-synaptic motifs, ζ_E ≈ ζ_I. Under this condition, we have ζ_E²/ζ_I² ≈ 1, and we therefore need α/g > 1, or α > g. But this is already satisfied, since the two conditions αg > 1 (from Eq. (62)) and g < 1 (from Eq. (63)) imply α > 1 > g and hence α > g.
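The bookkeeping of motif signs can be checked by brute-force enumeration of the pathways (a sketch; hop factors are expressed per unit J, with α and g as in the text, and the example values are hypothetical):

```python
import itertools

def motif_sum(order, alpha, g):
    """Sum of weight factors over all chains E -> X_1 -> ... -> X_{order-1} -> E.
    Hop factors (per unit J): E->E: 1, E->I: alpha, I->E and I->I: -g."""
    def hop(src, dst):
        if src == 'E':
            return 1.0 if dst == 'E' else alpha
        return -g
    total = 0.0
    for mids in itertools.product('EI', repeat=order - 1):
        path = ('E',) + tuple(mids) + ('E',)
        factor = 1.0
        for a, b in zip(path, path[1:]):
            factor *= hop(a, b)
        total += factor
    return total

alpha, g = 2.0, 0.8          # strong E -> I coupling, g < 1
zeta_E = 1.0 - alpha * g     # basic excitatory di-synaptic factor
zeta_I = g * (g - 1.0)       # basic inhibitory di-synaptic factor
```

The enumeration reproduces the 2nd-, 3rd- and 4th-order combinations of ζ_E and ζ_I used in the argument above.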
Taken together, several conclusions can be drawn from this exercise. First, it is not possible to obtain a negative influence for specific motifs by simply increasing the inhibition dominance g; too large values of g can lead to strong disinhibitory effects, which might counteract the direct inhibitory effects via higher-order interactions. Second, the strength of the E → I connections is as important, if not more so, for achieving a negative influence. Finally, balancing the excitatory and inhibitory motifs might be needed to avoid the dominance of one over the other.
However, although illustrative, these results are not conclusive, for the following reasons.
First, we do not have a closed-form expression for higher-order motifs, which hinders a systematic evaluation of all such terms and the conditions for their negativity. Moreover, the actual condition for the net negative influence is not the negativity of each higher-order term, but the negativity of the net sum. We therefore need mathematical formulations which provide us with these two requirements. We attempt to provide such analyses in the following sections.