
# Bayesian sampling in visual perception

Edited by Wilson S. Geisler, University of Texas at Austin, Austin, TX, and approved May 31, 2011 (received for review January 27, 2011)

## Abstract

It is well established that some aspects of perception and action can be understood as probabilistic inferences over underlying probability distributions. In some situations, it would be advantageous for the nervous system to sample interpretations from a probability distribution rather than commit to a particular interpretation. In this study, we asked whether visual percepts correspond to samples from the probability distribution over image interpretations, a form of sampling that we refer to as Bayesian sampling. To test this idea, we manipulated pairs of sensory cues in a bistable display consisting of two superimposed drifting gratings, and we asked subjects to report their perceived changes in depth ordering. We report that the fractions of dominance of each percept follow the multiplicative rule predicted by Bayesian sampling. Furthermore, we show that attractor neural networks can sample probability distributions if input currents add linearly and encode probability distributions with probabilistic population codes.

There is mounting evidence that neural circuits can implement probabilistic inferences over sensory, cognitive, or motor variables. In some cases, humans can perform these inferences optimally, as in multi-cue or multisensory integration (1–8). For complex tasks, such as object recognition, action perception, and object tracking, the computations required for optimal inference are intractable, which implies that humans must use approximate inferences (9–11). One approximate scheme that is particularly appealing from a biological point of view is sampling. Consider as an example the problem of object recognition. The goal of the inference in this case would be to compute the probability over object identities given the image. Although this probability distribution may be difficult to compute explicitly, one can often design algorithms to generate samples from the distribution, allowing one to perform approximate inference (12, 13). Some human cognitive choice behaviors suggest that the nervous system implements sampling. However, whether the same is true for low-level perceptual processing is currently unknown.

Stimuli that lead to bistable percepts (14–18), like the Necker cube, provide a tractable experimental preparation for testing the sampling hypothesis. With such stimuli, perception alternates stochastically between two possible interpretations, a behavior consistent with sampling as suggested by several works (16, 19, 20). However, the key question is what probability distribution is being sampled. If the brain uses sampling for Bayesian inference, neural circuits should sample from an internal probability distribution on possible stimulus interpretations that are conditioned on the available sensory data, the so-called posterior distribution. This distribution places important constraints on the distributions of perceptual states for bistable stimuli.

To test this idea, we used stimuli composed of two drifting gratings whose depth ordering is ambiguous (21). We then manipulated two depth cues to vary the fractions of dominance of the percepts. Our central prediction is that the fractions of dominance of each percept should behave as probabilities if they are the result of a sampling process of a posterior distribution over image interpretations. We will refer to this form of sampling as Bayesian sampling. First, we show that subjects’ fractions of dominance in different cue conditions follow the same multiplicative rule as probabilities in the Bayesian calculus, suggesting that bistable perception is indeed a form of Bayesian sampling. Second, we describe possible neural implementations of a Bayesian sampling process using attractor networks, and we discuss the link with probabilistic population codes (22).

## Results

### Multiplicative Rule for Combining Empirical Fractions of Dominance.

We asked subjects to report their spontaneous alternations in the perceived depth ordering of two superimposed moving gratings over a 1-min period and measured the fraction of dominance time for each percept (*Methods* and Fig. 1*A*). In the first experiment, the two drifting gratings, α and β, were parameterized by their wavelength and speed. One of the wavelengths was always set to a fixed value λ\*, and one of the speeds was set to a fixed value *v*\*. The remaining wavelength and speed parameters, λ and *v*, respectively, determined the differences in wavelength and speed between gratings α and β, denoted Δλ and Δ*v*, and hence, the information for choosing grating α as the one behind. We refer to these differences as the cues to depth ordering, and we refer to the condition where the two differences are zero as the neutral cue condition (Δλ = 0 and Δ*v* = 0). These cues have been shown to have a strong effect on the perceived depth ordering of the gratings because of their relationship with the natural statistics of the wavelength and speed of distant objects (21). In the second experiment, we manipulated the wavelength and disparity, *d*, of the gratings; in this case, the label *v* should be interchanged with the label *d*.

According to the Bayesian sampling hypothesis, the empirical fractions of dominance arise from a process that samples the posterior distribution on possible scene interpretations given the sensory input. As we show in *SI Methods*, when two conditionally independent cues are available (i.e., the values of the cues are independent when conditioned on true depth), an optimal system should sample from a probability distribution given by the normalized product of the probability distributions derived by varying each cue in isolation while keeping the other cue neutral. Our hypothesis implies that the empirical fractions should behave as probabilities, and therefore, they should follow the multiplicative rule (Eq. **1**)

$$f_{\lambda v} = \frac{f_{\lambda}\, f_{v}}{f_{\lambda}\, f_{v} + (1 - f_{\lambda})(1 - f_{v})}, \tag{1}$$

where *f*_{λ*v*} is the fraction of time that subjects report percept *A* (grating α moving behind grating β) when the cues are set to Δλ and Δ*v*, *f*_{λ} is the fraction of dominance of percept *A* when the speed cue is neutral (Δ*v* = 0) while the wavelength cue has value Δλ, and *f*_{*v*} is the dominance fraction when the wavelength cue is neutral (Δλ = 0) while the speed cue has value Δ*v*. This relation holds whether subjects are sampling from posterior distributions on depth or posterior distributions raised to an arbitrary power *n* (*SI Methods*). The multiplicative rule provides an empirical consistency constraint for Bayesian sampling. Note that this rule does not specify how the samples are extracted over time [i.e., it works whether the samples are independent over time (23, 24) or correlated]. As discussed later, bistable perception is only consistent with a sampling mechanism that generates correlated samples (i.e., the percept tends to remain the same over hundreds of milliseconds).
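For intuition, Eq. **1** is straightforward to evaluate. The following is a minimal sketch in Python (the function name and example values are ours, for illustration only):

```python
def predicted_fraction(f_lambda: float, f_v: float) -> float:
    """Multiplicative rule (Eq. 1): predicted two-cue dominance fraction
    of percept A from the two single-cue dominance fractions."""
    joint = f_lambda * f_v
    return joint / (joint + (1.0 - f_lambda) * (1.0 - f_v))

# Two congruent cues, each alone giving 70% dominance, combine to ~84%:
print(predicted_fraction(0.7, 0.7))  # 0.8448...
# Two incongruent cues of equal strength cancel out:
print(predicted_fraction(0.7, 0.3))  # 0.5
```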

### Observed vs. Predicted Fractions of Dominance.

The multiplicative rule was tested in two experiments. In the first experiment, the wavelength and speed differences between the two gratings, Δλ and Δ*v*, were changed from trial to trial congruently [C condition (i.e., both cues favoring the same depth ordering); example in Fig. 1*B*] or incongruently [IC condition (i.e., the cues favored different depth orderings)]. This change was achieved by decreasing the wavelength and increasing the speed of grating α in the C condition, while decreasing the wavelength of grating α and increasing the speed of grating β in the IC condition. In the second experiment, the wavelength and stereo disparity (instead of speed) of the gratings were manipulated in the C and IC conditions as in the previous experiment.

As shown in Figs. 2 and 3, wavelength, speed, and disparity differences in the gratings have a strong impact on the fractions of dominance of the gratings’ depth ordering (21). The fraction of dominance of percept *A* (grating α is behind grating β) increases as the wavelength difference between gratings α and β (Δλ = λ_{α} − λ_{β}) decreases. The fraction increases as the speed difference between gratings α and β (Δ*v* = *v*_{α} − *v*_{β}) increases in the C condition (Fig. 2*A*). Conversely, the fraction decreases as the difference (in speed or wavelength) between the gratings decreases in the IC condition (Fig. 2*B*). In the second experiment, the fraction of dominance of percept *A* increases as the disparity difference between gratings α and β (Δ*d* = *d*_{α} − *d*_{β}) increases in the C condition (Fig. 3*A*). Again, the reverse pattern is observed in the IC condition (Fig. 3*B*). In the two experiments, when the two cues are set to their neutral values, the fractions (Figs. 2 and 3, black open circles) are not significantly different from one-half [two-tailed *t* test; experiment 1: *p* = 0.39 (C), *p* = 0.06 (IC) and experiment 2: *p* = 0.31 (C), *p* = 0.051 (IC)].

The experimental results were compared with the theoretical predictions from the multiplicative rule (Eq. **1**) (Figs. 2 *A* and *B* and 3 *A* and *B*). The predictions when the two cues are nonneutral (Figs. 2 *A* and *B* and 3 *A* and *B*, filled blue circles) were computed using the experimental data of the single nonneutral cue cases only (Figs. 2 *A* and *B* and 3 *A* and *B*, open red circles). The case in which wavelength is the only nonneutral cue corresponds to the lower line of open circles in Figs. 2*A* and 3*A* and the upper line in Figs. 2*B* and 3*B* in both experiments. The cases in which speed (or disparity) is the only nonneutral cue correspond to the vertical line of open circles in the wavelength and speed experiment in Fig. 2*B* (in the wavelength and disparity experiment, Fig. 3*B*). The match between the observed data points (filled red circles) and the predictions is tight, even though the multiplicative rule is parameter-free and cannot be adjusted to match the experimental results (note that, for the sake of clarity, the blue dots have been slightly displaced to the right). The data in Figs. 2 *A* and *B* and 3 *A* and *B* were replotted in Figs. 2*C* and 3*C* to show the predicted fraction of dominance from the multiplicative model vs. the observed fraction when the two cues were nonneutral, with the C (Figs. 2*C* and 3*C*, light blue dots) and IC (Figs. 2*C* and 3*C*, dark blue dots) conditions combined. The strong alignment of the data points along the unity line confirms that the multiplicative rule provides a tight fit to the data. Individual subjects also followed the multiplicative rule (*SI Methods* and Fig. S1).

We also tested alternative models to the multiplicative rule. In the first model, we assumed that integration between the cues does not take place (a strongest-cue-take-all model). In this model, performance is driven by the cue with the lowest uncertainty: the fraction of dominance when both cues are varied together is set to that of the cue whose fraction, when the cues are manipulated alone, has the largest absolute difference from one-half (*SI Methods*). As shown in Figs. 2*D* and 3*D* (brown dots), this model fails to capture our experimental results. In the second model, we generated predictions from a realistic neuronal network (see *Results*, *Sampling with Realistic Neural Circuits*). When the input neurons to the network fired nonlinearly in response to the stimuli (25), the predictions of the model, which fit the single nonneutral cue conditions, substantially differed from the experimental data in the four nonneutral-cue conditions (NL net) (Figs. 2*D* and 3*D*, orange dots). When the input neurons fired linearly (26), the predictions were identical to the multiplicative rule (L net) (Figs. 2*D* and 3*D*, blue dots). This result shows that the mere fact that a network can oscillate stochastically between two percepts in a way suggestive of sampling does not guarantee that it will also follow the multiplicative rule. Whether it does depends critically on how the inputs are combined, a point that we discuss more thoroughly below.
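For comparison, here is a sketch of the strongest-cue-take-all prediction as we have defined it (again with illustrative names; the full formulation is in *SI Methods*):

```python
def strongest_cue_fraction(f_lambda: float, f_v: float) -> float:
    """Strongest-cue-take-all: the combined fraction is that of the single
    cue whose fraction deviates most, in absolute value, from one-half."""
    return f_lambda if abs(f_lambda - 0.5) >= abs(f_v - 0.5) else f_v

# Two congruent cues at 0.7 each: the prediction stays at 0.7,
# whereas the multiplicative rule (and the data) give ~0.84.
print(strongest_cue_fraction(0.7, 0.7))  # 0.7
```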

### Diffusion in an Energy Model.

Our finding that bistable perception behaves like a Bayesian sampling process raises the issue as to how neurons could implement such a process. We first show that implementing the multiplicative rule is surprisingly straightforward with energy models. In *Results*, *Sampling with Realistic Neural Circuits*, we will present a neural instantiation of this conceptual framework. We model the dynamics of two neural populations, *A* and *B*, whose states are described by their firing rates *r*_{A} and *r*_{B}, respectively (Fig. 4*A*). The reduced dynamics tracks the difference between the firing rates, *r* = *r*_{A} − *r*_{B}, where *r* > 0 corresponds to percept *A*. This variable obeys (Eq. **2**)

$$\tau \frac{dr}{dt} = -\frac{\partial E_{0}(r)}{\partial r} + g(I_{\lambda}, I_{v}) + n(t), \tag{2}$$

where *g*(*I*_{λ}, *I*_{*v*}) is a bias provided by the inputs and *n*(*t*) is a filtered white noise with variance σ^{2} (27) (*SI Methods*). The first term on the right-hand side ensures that the activity difference, *r*, hovers around the centers of the two energy wells (Fig. 4*B*). The bias term measures the combined strength of the cues, which is a function of the individual strengths *I*_{λ} and *I*_{*v*} favoring percept *A* from the wavelength and speed cues, respectively. The function is chosen such that it is zero when the two cues are neutral (zero currents) and positive when the two cues favor percept *A* (the two currents are positive). The dynamics of Eq. **2** can be viewed as a noisy descent over the energy landscape *E*(*r*) = *E*_{0}(*r*) − *g*(*I*_{λ}, *I*_{*v*}) *r*, which is symmetrical (Fig. 4*B*, black line) when the two cues are neutral and negatively tilted (Fig. 4*B*, gray line) when the cues favor percept *A*. The resulting dynamics effectively draws samples from an underlying probability distribution that depends on the input currents (a process known as Langevin Monte Carlo sampling) (28).
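The sampling dynamics of Eq. **2** can be simulated directly. Below is a minimal Euler–Maruyama sketch, assuming a quartic double well *E*_{0}(*r*) = −*r*^{2}/2 + *r*^{4}/4 (wells near *r* = ±1), an additive bias *g* = *I*_{λ} + *I*_{*v*}, and white rather than filtered noise; all parameter values are illustrative, not the paper's fitted model:

```python
import numpy as np

def dominance_fraction_langevin(I_lambda, I_v, sigma=0.7, tau=0.02,
                                dt=1e-3, T_total=200.0, seed=0):
    """Simulate Eq. 2 with drift -dE0/dr + g = r - r**3 + g and return
    the fraction of time r > 0 (percept A dominant)."""
    rng = np.random.default_rng(seed)
    g = I_lambda + I_v                      # additive bias (epsilon = 0)
    n_steps = int(T_total / dt)
    r = 1.0
    time_A = 0
    for _ in range(n_steps):
        drift = r - r**3 + g                # noisy descent over the tilted well
        r += (dt / tau) * drift + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        time_A += r > 0
    return time_A / n_steps

# With a linear bias, the single-cue fractions combine multiplicatively
# (up to simulation noise):
f_l = dominance_fraction_langevin(0.15, 0.0)
f_v = dominance_fraction_langevin(0.0, 0.15)
f_lv = dominance_fraction_langevin(0.15, 0.15)
print(f_l, f_v, f_lv, f_l * f_v / (f_l * f_v + (1 - f_l) * (1 - f_v)))
```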

To model the experimental data that we have described, we need a form of sampling that obeys the multiplicative rule. Whether the network obeys the rule or not depends critically on the function *g*(*I*_{λ}, *I*_{*v*}). We consider here a family of functions in which the cue currents combine linearly plus a nonlinear correction whose strength is measured by ε, so that ε = 0 corresponds to purely additive combination. Similar nonlinear functional dependences on the input currents naturally arise in neuronal networks with nonlinear activation functions (*Results*, *Sampling with Realistic Neural Circuits*).

For a value of ε different from zero, the dynamical system does not follow the multiplicative rule (Fig. 4*D*). In contrast, if we set ε to zero, such that *g*(*I*_{λ}, *I*_{*v*}) = *I*_{λ} + *I*_{*v*}, the system now obeys the multiplicative rule (Fig. 4*E*). This result can be derived analytically by computing the mean dominance duration of each percept, which corresponds to the mean escape time from one of the energy wells (*SI Methods*). We can then show that the fraction of dominance of population *A* for ε equal to zero is a sigmoid function of the sum of the inputs (Eq. **3**)

$$f_{A} = \frac{1}{1 + e^{-(I_{\lambda} + I_{v})/T}}, \tag{3}$$

where *T* is the effective noise in the system and is proportional to σ^{2}. Note that when only one cue is nonneutral, *f*_{*i*} = 1/(1 + *e*^{−*I*_{*i*}/*T*}) (*i* = λ, *v*), and when both cues are nonneutral, *f*_{λ*v*} = 1/(1 + *e*^{−(*I*_{λ} + *I*_{*v*})/*T*}). Therefore, the fractions are related through *f*_{λ*v*}/(1 − *f*_{λ*v*}) = [*f*_{λ}/(1 − *f*_{λ})][*f*_{*v*}/(1 − *f*_{*v*})], and after normalization, they follow the multiplicative rule (Eq. **1**). Fig. 4*F* shows that Eq. **3** is indeed satisfied by the diffusion model, because the fraction of dominance of percept *A* obtained from numerical simulations as a function of the total input current (Fig. 4*F*, blue line) is a sigmoid function (Fig. 4*F*, red line). This analytical approach can also be used to reveal why the system with a nonlinear function does not follow the multiplicative rule. Because in this case *g*(*I*_{λ}, *I*_{*v*}) ≠ *g*(*I*_{λ}, 0) + *g*(0, *I*_{*v*}), the product of the fractions when only one cue is nonneutral is not equal to the fraction when the two cues are nonneutral.
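A quick numerical check of this odds factorization (a sanity check, not the paper's analysis code):

```python
import numpy as np

def sigmoid(x, T):
    return 1.0 / (1.0 + np.exp(-x / T))

I_l, I_v, T = 0.4, -0.2, 0.7
f_l, f_v = sigmoid(I_l, T), sigmoid(I_v, T)
f_joint = sigmoid(I_l + I_v, T)                           # Eq. 3, both cues
f_rule = f_l * f_v / (f_l * f_v + (1 - f_l) * (1 - f_v))  # Eq. 1
print(np.isclose(f_joint, f_rule))  # True
```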

### Sampling with Realistic Neural Circuits.

The main features of the energy model can be implemented in a neural network with attractor dynamics. We consider a recurrent neural network with two competing populations (Fig. 5*A*) encoding the two percepts *A* and *B*, whose states are described by their population-averaged firing rates *r*_{A} and *r*_{B}, as suggested by neural data (29). An additional relay neuronal population fires in response to the cues and provides inputs to the competing populations *A* and *B* with positive (direct connections) and negative (through an inhibitory population) signs, respectively. The firing of the relay population is a function of the sum of the cue strengths, *I*_{λ} + *I*_{*v*}. We consider linear and nonlinear activation functions (*SI Methods*) close to those functions found in primary visual cortex (25, 26). We also added a slow adaptation process (30–33).

The network stochastically alternates between percepts with gamma-like distributions of dominance durations, which captures several aspects of the experimental distributions (Fig. 5*B*) (14, 17, 34–36). The distributions generated by the network are not significantly different from those distributions obtained from pooling data across subjects (Fig. 5*B*) (two-sample Kolmogorov–Smirnov test, *p* > 0.05). The distributions from human data have a coefficient of variation (CV; ratio between SD and mean) close to 0.6, regardless of the fraction of dominance (Fig. 5*C*, blue dots) (slope not significantly different from zero, *p* = 0.3). Although the model shows a significant linear dependence on the fraction (*p* < 0.05), the dependence is weak, and the CV is consistently close to the experimental value (Fig. 5*C*, red dots). Importantly, the network predicts that the mean dominance durations of a percept should depend primarily on its fraction of dominance. The experimental data not only show this important qualitative feature but also follow quantitatively the idiosyncratic mean duration vs. fraction dependence obtained from the model (Fig. 5*D*). These results hold independently of whether the activation function of the relay population is linear (Fig. 5 *B–D*) or nonlinear (*SI Methods* and Fig. S2).

The slow dynamics of the switches indicates that bistable perception generates temporally correlated samples (successive samples tend to be similar: percepts linger for hundreds of milliseconds before switching), a property consistent with Langevin Monte Carlo sampling (28).

Therefore, the network generates a stochastic behavior consistent with bistable perception and makes nontrivial predictions about the dynamics of perceptual bistability. However, this behavior does not necessarily mean that the network follows the multiplicative rule. Interestingly, when the activation function in the relay population is nonlinear, the fractions of dominance do not combine multiplicatively (Figs. 2*D*, 3*D*, and 6*A*, orange dots). In contrast, when the activation function is linear-rectified, the network obeys the multiplicative rule (Figs. 2*D*, 3*D*, and 6*A*, blue dots). This result holds because the fraction of dominance time is a sigmoid function of the sum of input currents when the inputs to the network are linear (Fig. 6*B*, blue lines) but not when the inputs are nonlinear (Fig. 6*B*, orange lines). We show in *SI Methods* (Fig. S3) that these results hold even in a more realistic network with integrate and fire neurons.
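A minimal rate-model sketch of such an attractor network is given below (two populations with cross-inhibition, slow adaptation, and noise; the specific equations, gain function, and parameter values are illustrative assumptions rather than the network used in the paper):

```python
import numpy as np

def simulate_dominance_durations(I_A=0.0, I_B=0.0, T_total=300.0,
                                 dt=1e-3, seed=1):
    """Two competing populations with cross-inhibition, slow adaptation,
    and noise. Returns lists of dominance durations (s) for A and B."""
    rng = np.random.default_rng(seed)
    tau_r, tau_a = 0.01, 1.0       # fast rate and slow adaptation constants
    beta, phi, sigma = 2.0, 0.6, 0.2
    gain = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))
    rA, rB, aA, aB = 1.0, 0.0, 0.0, 0.0
    n = int(T_total / dt)
    dom = np.empty(n, dtype=bool)
    for t in range(n):
        xiA, xiB = sigma * np.sqrt(dt / tau_r) * rng.standard_normal(2)
        rA += (dt / tau_r) * (-rA + gain(1.0 + I_A - beta * rB - phi * aA)) + xiA
        rB += (dt / tau_r) * (-rB + gain(1.0 + I_B - beta * rA - phi * aB)) + xiB
        aA += (dt / tau_a) * (rA - aA)   # adaptation tracks the rate slowly
        aB += (dt / tau_a) * (rB - aB)
        dom[t] = rA > rB
    edges = np.flatnonzero(np.diff(dom.astype(int))) + 1
    runs = np.split(dom, edges)
    return ([len(x) * dt for x in runs if x[0]],
            [len(x) * dt for x in runs if not x[0]])

durs_A, durs_B = simulate_dominance_durations()
cv = np.std(durs_A) / np.mean(durs_A)
print(len(durs_A), np.mean(durs_A), cv)  # compare the CV with the ~0.6 above
```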

### Probabilistic Population Codes and Bayesian Sampling.

We have shown in the previous sections how to build a recurrent network that implements the multiplicative rule, but we have not yet shown that the network samples the posterior distribution over image interpretations specified by the input signals. If the fraction of dominance for a given cue is the result of sampling the posterior distribution over image interpretations *p*(*s*|*I*_{*i*}) (here *s* = {*A*, *B*} and *I*_{*i*} is the current induced by cue *i* = {λ, *v*}), then the fraction of dominance and the posterior distribution should be the same function of the input current, *I*_{*i*}. Because the attractor network generates fractions of dominance that are sigmoid functions of the current (Eq. **3**), the attractor network is sampling the posterior distribution only if that distribution is also a sigmoid function of the input current, that is (Eq. **4**),

$$p(s = A \mid I_{i}) = \frac{1}{1 + e^{-I_{i}/T}}. \tag{4}$$

Moreover, through Bayes rule, we know that (Eq. **5**)

$$p(s \mid I_{i}) = \frac{p(I_{i} \mid s)\, p(s)}{\sum_{s'} p(I_{i} \mid s')\, p(s')}, \tag{5}$$

where the function *p*(*I*_{*i*}|*s*) corresponds to the variability in neural responses (in this case, one input current) over multiple presentations of the same stimulus *s*. Therefore, the key question is whether neural variability in vivo has a distribution consistent with Eqs. **4** and **5**. If this is not the case, attractor dynamics would not be sampling from the posterior distributions of *s*.

Experimentally, neural variability is typically assessed by measuring the variability in spike counts for a fixed *s* as opposed to the variability in input currents. Mapping input current onto spike counts is easy if we assume, as we did earlier, that the input current is proportional to the difference in spike-count vectors, **r**_{A} − **r**_{B}, from two presynaptic populations (e.g., V1 neurons with different depth and speed preferences) (37), one that prefers stimulus *s* = *A* and the other that prefers stimulus *s* = *B*. One can then show (*SI Methods*) that Eqs. **4** and **5** are only satisfied when the distribution over either **r**_{A} or **r**_{B} given *s* takes the form *p*(**r**|*s*) = φ(**r**) *e*^{**h**(*s*)·**r**}, where **h**(*s*) is a kernel related to the tuning curves and covariance matrix of the neural responses. Remarkably, this family of distributions, known as the exponential family with linear sufficient statistics, provides a very close approximation to the variability observed in vivo (22, 38). This family of distributions also corresponds to a form of neural code known as probabilistic population codes (22). In other words, our results show that attractor dynamics can be used to sample from a posterior distribution encoded by a probabilistic population code using the exponential family with linear sufficient statistics.
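The connection to Eq. **4** can be made explicit: for binary *s* and responses drawn from this family, the posterior is a sigmoid of a linear function of the spike counts (assuming, for simplicity, a flat prior over the two percepts):

$$p(s = A \mid \mathbf{r}) = \frac{\phi(\mathbf{r})\, e^{\mathbf{h}(A)^{\top}\mathbf{r}}}{\phi(\mathbf{r})\, e^{\mathbf{h}(A)^{\top}\mathbf{r}} + \phi(\mathbf{r})\, e^{\mathbf{h}(B)^{\top}\mathbf{r}}} = \frac{1}{1 + e^{-[\mathbf{h}(A) - \mathbf{h}(B)]^{\top}\mathbf{r}}},$$

so an input current proportional to a linear combination of the spike counts recovers the sigmoid form required by Eq. **4**.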

## Discussion

We have reported that the fraction of dominance in bistable perception behaves as a probability. This result supports the notion that the visual system samples the posterior distribution over image interpretations. In addition, we showed that attractor networks can implement Bayesian sampling only when the variability of neuronal activity follows the exponential family with linear sufficient statistics, as observed experimentally.

This last result is important in its own right, but the exponential family has another advantage. Several works have reported that humans perform near-optimal cue integration in a variety of settings (1–8). It is, therefore, essential that the combination of inputs that leads to the multiplicative rule in an attractor network also results in optimal cue integration. We saw that inputs must be added linearly for the multiplicative rule to hold in an attractor network. Adding two inputs does not necessarily result in optimal cue integration but, again, when the variability of cortical activity follows the exponential family with linear sufficient statistics, linear addition is the optimal combination rule for cue integration (22). Therefore, the fact that neural variability follows the exponential family allows attractor networks to perform both Bayesian sampling and optimal integration of evidence.

Our study is not the first study to investigate cue combination and perceptual bistability, but previous works did not test whether bistable perception is akin to what we defined as Bayesian sampling (19, 20). The fact that bistable perception alternates between two interpretations is certainly suggestive of a sampling process but not necessarily of Bayesian sampling. For instance, the orange dots in Fig. 6*A* show an example of a network that stochastically oscillates with gamma-like distributions over percept durations (Fig. 5*B*), as observed in our experimental data. The kind of analysis that has been used in previous studies to argue that bistable perception is a form of sampling (19, 20) would also conclude that this network is sampling. However, this particular network does not perform Bayesian sampling; it does not follow the multiplicative rule (Fig. 6*A*). In contrast, our experimental results make it clear that bistable perception follows the multiplicative rule predicted by Bayesian sampling.

Bayesian sampling has several computational advantages. For instance, in the context of reinforcement learning, when the statistics of the world are fixed, the optimal solution involves picking the action that is the most likely to be rewarded; however, when the statistics of the world change over time, sampling from the posterior distribution, which is a form of exploratory behavior (21, 39), is more sensible (40). Interestingly, bistable perception implements a form of sampling that could be used to smoothly interpolate between pure exploration (sampling from the posterior) and pure exploitation (choosing the action that is the most likely to be rewarded). Indeed, our results suggest that bistable perception samples from posterior distributions that are raised to a power, *p*^{*n*}, where *n* can take any value (*SI Methods*). When *n* is large, the most likely state is sampled on almost every iteration, which corresponds to exploitation, whereas setting *n* close to zero leads to exploratory behavior.
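A toy sketch of this interpolation (the two-state posterior and the values of *n* are illustrative):

```python
import numpy as np

def sample_percepts(p_A: float, n: float, size: int, seed=2):
    """Sample a binary percept from the posterior raised to the power n
    and renormalized: q_A = p_A**n / (p_A**n + (1 - p_A)**n)."""
    q_A = p_A**n / (p_A**n + (1.0 - p_A)**n)
    return np.random.default_rng(seed).random(size) < q_A

for n in (0.1, 1.0, 10.0):
    print(n, sample_percepts(0.7, n, 100_000).mean())
# n near 0 gives ~0.5 (pure exploration); n = 1 samples the posterior (~0.7);
# large n almost always returns the most likely percept (exploitation).
```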

The fact that low-level vision and perhaps low-level perception might involve sampling is particularly interesting in light of several other recent findings suggesting that higher-level cognitive tasks, like causal reasoning (41, 42) and decision-making (43), might also involve some form of sampling. Sampling may turn out to be a general algorithm for probabilistic inference in all domains.

## Methods

### Experimental Methods.

The stimulus consisted of two superimposed square-wave gratings, denoted α and β, moving at an angle of 160° between their directions of motion behind a circular aperture (21) (Fig. 1*A*), with the parameters specified in *SI Methods*. The gratings consisted of gray bars of equal luminance presented on a white background. Where the gray bars intersected, the luminance was set to that of the bars (as if one of the bars was occluding the other bar). Observers were asked to continually report their percept by holding down one of two designated keys [i.e., the motion direction (right or left) of the grating that they perceived as being behind the other grating] and not to press any key if they were not certain. We measured, in each trial, the accumulated time that either percept (i.e., depth ordering) was dominant and computed the fraction of time that percept *s* = {*A*, *B*} dominated as *f*(*s*) = (the cumulative time percept *s* was reported as dominant)/(the total time that either of the percepts was reported as dominant). Therefore, this fraction corresponds to the proportion of time that percept *s* dominated. Percept *A* denotes the percept in which grating α is behind grating β (and conversely, percept *B*). Fractions of dominance shown in the figures correspond to averaged values of the fractions across trials and observers, and error bars correspond to SEM across the population.
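The fraction defined above reduces to a one-line computation on the recorded key-press intervals (a sketch; the list-of-durations data format is our assumption):

```python
def fraction_of_dominance(durations_A, durations_B):
    """f(A) = cumulative time percept A was reported dominant, divided by
    the total time either percept was reported dominant. Intervals with
    no key pressed (observer uncertain) are excluded by construction."""
    t_A, t_B = sum(durations_A), sum(durations_B)
    return t_A / (t_A + t_B)

print(fraction_of_dominance([3.2, 5.1, 2.0], [1.4, 2.8]))  # ~0.71
```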

### Mathematical Methods.

The derivations of the multiplicative and strongest-cue-take-all rules and the descriptions of the energy, rate-based, and spiking models are presented in *SI Methods*.

## Acknowledgments

We thank Jan Drugowitsch and Robbie Jacobs for their suggestions and comments. We are also very grateful to Thomas Thomas and Bo Hu for their assistance during the experimental setup and Vick Rao for his help in using the cluster. D.C.K. is supported by National Institutes of Health Grant EY017939. A.P. is supported by National Science Foundation Grant BCS0446730 and the Multidisciplinary University Research Initiative (MURI) Grant N00014-07-1-0937. This work was also partially supported by National Eye Institute Award P30 EY001319.

## Footnotes

^{1}To whom correspondence should be addressed. E-mail: rmoreno@bcs.rochester.edu.

Author contributions: R.M.-B., D.C.K., and A.P. designed research; R.M.-B., D.C.K., and A.P. performed research; R.M.-B. analyzed data; and R.M.-B., D.C.K., and A.P. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1101430108/-/DCSupplemental.

## References

- van Beers RJ, Sittig AC, Gon JJ
- Necker LA
- Beardslee DC, Wertheimer M
- Rubin E
- Becker S, et al.
- Hoyer PO, Hyvarinen A
- Carandini M, Ferster D
- Moreno-Bote R, Rinzel J, Rubin N
- Bishop CM
- Sheinberg DL, Logothetis NK
- Levelt WJM
- Moreno-Bote R, Shpiro A, Rinzel J, Rubin N
- Sutton RS, Barto AG
- Sugrue LP, Corrado GS, Newsome WT
