# Lawful relation between perceptual bias and discriminability

^{a}Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027;^{b}Center for Theoretical Neuroscience, Columbia University, New York, NY 10027;^{c}Department of Statistics, Columbia University, New York, NY 10027;^{d}Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104;^{e}Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104


Edited by Wilson S. Geisler, University of Texas at Austin, Austin, TX, and approved August 4, 2017 (received for review November 21, 2016)

## Significance

We present a law of human perception. The law expresses a mathematical relation between our ability to perceptually discriminate a stimulus from similar ones and our bias in the perceived stimulus value. We derived the relation from theoretical assumptions about how the brain represents sensory information and how it interprets this information to create a percept. Our main assumption is that both encoding and decoding are optimized for the specific statistical structure of the sensory environment. We found broad experimental support for the law in the literature, including biases and changes in discriminability induced by contextual modulation (e.g., adaptation). Our results imply that human perception generally relies on statistically optimized processes.

## Abstract

Perception of a stimulus can be characterized by two fundamental psychophysical measures: how well the stimulus can be discriminated from similar ones (discrimination threshold) and how strongly the perceived stimulus value deviates on average from the true stimulus value (perceptual bias). We demonstrate that perceptual bias and discriminability, as functions of the stimulus value, follow a surprisingly simple mathematical relation. The relation, which is derived from a theory combining optimal encoding and decoding, is well supported by a wide range of reported psychophysical data including perceptual changes induced by contextual modulation. The large empirical support indicates that the proposed relation may represent a psychophysical law in human perception. Our results imply that the computational processes of sensory encoding and perceptual decoding are matched and optimized based on identical assumptions about the statistical structure of the sensory environment.

Perception is a subjective experience that is shaped by the expectations and beliefs of an observer (1). Psychophysical measures provide an objective yet indirect characterization of this experience by describing the dependency between the physical properties of a stimulus and the corresponding perceptually guided behavior (2).

Two fundamental measures characterize an observer’s perception of a stimulus. Discrimination threshold indicates the sensitivity of an observer to small changes in a stimulus variable (Fig. 1*A*). The threshold depends on the quality with which the stimulus variable is represented in the brain (2) (i.e., encoded; Fig. 1*B*); a more accurate representation results in a lower discrimination threshold. In contrast, perceptual bias is a measure that reflects the degree to which an observer’s perception deviates on average from the true stimulus value. Perceptual bias is typically assumed to result from prior beliefs and reward expectations with which the observer interprets the sensory evidence (1), and thus is determined by factors that seem not directly related to the sensory representation of the stimulus. As a result, it has long been believed that there is no reason to expect any lawful relation between perceptual bias and discrimination threshold (3).

However, here we derive a direct mathematical relation between discrimination threshold and perceptual bias. The derivation is based on a holistic observer model that combines efficient sensory encoding with Bayesian decoding (4) (Fig. 2*A*). Specifically, we assume encoding to be efficient (8, 9) such that it maximizes the information in the sensory representation about the stimulus given a limit on the overall available coding resources (10). The assumption implies a sensory representation whose coding resources are allocated according to the stimulus distribution $p(s)$, such that the square root of the Fisher information $J(s)$ is proportional to the prior,

$$\sqrt{J(s)} \propto p(s). \quad [1]$$

Using Eq. **1** above, we can express discrimination threshold in terms of the stimulus distribution as

$$D(s) \propto \frac{1}{\sqrt{J(s)}} \propto \frac{1}{p(s)}. \quad [2]$$

The perceptual bias of the Bayesian observer (Fig. 2*A*) can also be expressed in terms of the stimulus distribution (4, 7). Assuming that uncertainty in the perceptual process is dominated by internal (neural) noise and that the noise is relatively small, we can analytically derive the observer's bias as

$$b(s) \propto \frac{d}{ds}\left[\frac{1}{p(s)^{2}}\right]. \quad [3]$$

This relation holds for a wide family of symmetric loss functions of the decoder (*Supporting Information*). Magnitude and sign of its proportionality coefficient, however, depend on several factors, including the noise magnitude and the loss function. Finally, by combining Eqs. **2** and **3**, we obtain a direct functional relation between perceptual bias and discrimination threshold in the form of

$$b(s) \propto \frac{d}{ds}\left[D(s)^{2}\right], \quad [4]$$

i.e., perceptual bias is proportional to the slope of the squared discrimination threshold (Fig. 2*B*).
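The algebraic step from the threshold and bias expressions to the combined law can be checked numerically. The sketch below is illustrative only: the smooth prior and the unit proportionality constants are assumptions, not part of the theory.

```python
import numpy as np

# Hypothetical prior over a circular variable (e.g., orientation on [0, pi)),
# peaking at the cardinal orientations; any smooth positive prior works here.
s = np.linspace(0.0, np.pi, 2001)
p = 2.0 + np.cos(4.0 * s)
p /= np.trapz(p, s)

D = 1.0 / p                            # Eq. 2: threshold proportional to 1/p(s)
b = np.gradient(1.0 / p ** 2, s)       # Eq. 3: bias proportional to d/ds [1/p(s)^2]
law = np.gradient(D ** 2, s)           # Eq. 4: bias proportional to d/ds [D(s)^2]

# Given Eq. 2, the statements of Eqs. 3 and 4 coincide term by term:
assert np.allclose(b, law)
```

The check is near-tautological by design: substituting Eq. **2** into Eq. **4** recovers Eq. **3**, which is exactly why measured thresholds predict measured biases.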

We tested the surprisingly simple relation against a wide range of existing psychophysical data. Figs. 3 and 4 show data for those perceptual variables for which both discrimination threshold and perceptual bias have been reported over a sufficiently large stimulus range. We grouped the examples according to their characteristic bias–threshold patterns. The first group consists of a set of circular variables (Fig. 3 *A*–*C*). It includes local visual orientation, probably the most well-studied perceptual variable. Orientation perception exhibits the so-called oblique effect (38), which describes the observation that the discrimination threshold peaks at the oblique orientations yet is lowest for cardinal orientations (14). Based on the oblique effect, Eq. **4** predicts that perceptual bias is zero at, and only at, both cardinal and oblique orientations. Measured bias functions confirm this prediction (15). Other circular variables that exhibit similar patterns are heading direction using either visual or vestibular information (16), 2D motion direction measured with a two alternative forced-choice (2AFC) procedure (17, 18) or by smooth pursuit eye movements (19), pointing direction (20), and motion direction in depth (21, 22). The relation also holds for the more high-level perceptual variable of perceived heading direction of approaching biological motion (human pedestrian) (23) as shown in Fig. 3*C*.
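The oblique-effect prediction can be sketched in a few lines: any threshold curve with minima at the cardinals and maxima at the obliques forces the predicted bias to vanish at both, with repulsion away from the cardinals (assuming the positive proportionality coefficient typical for these variables; the specific threshold curve below is hypothetical).

```python
import numpy as np

# Hypothetical threshold curve with the oblique effect: lowest at the cardinal
# orientations (0 and 90 deg), peaking at the obliques (45 and 135 deg).
theta = np.linspace(0.0, np.pi, 1801)          # orientation in radians, 0.1-deg steps
D = 1.5 - 0.5 * np.cos(4.0 * theta)
bias = np.gradient(D ** 2, theta)              # Eq. 4: bias ~ d/dtheta D^2

# The bias must vanish wherever D^2 has an extremum: at obliques and cardinals.
for i in (450, 900, 1350):                     # interior points: 45, 90, 135 deg
    assert abs(bias[i]) < 1e-9
# Just above a cardinal orientation, the bias is positive: repulsion away from it.
assert bias[920] > 0
```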

The second group contains noncircular magnitude variables for which discrimination threshold (approximately) follows Weber’s law (24) and linearly increases with magnitude (Fig. 3*D*). We predict that these variables should exhibit a perceptual bias that is also linear in stimulus magnitude. Indeed, we found this to be true for spatial frequency [threshold (14, 34, 39), bias (25)] as well as temporal frequency [threshold (40), bias (27)] in vision. Visual speed is another example for which discrimination threshold approximately follows Weber’s law (28) and bias is also approximately linear with stimulus speed (27), although, in contrast to the other examples, with a negative proportionality coefficient. A possible explanation for the sign difference may be that speed perception is governed by a loss function that differs from the loss functions for the other variables. In perceiving the speed of a moving object, an observer’s emphasis might be on estimating the speed “just right” to maximize the chance to, e.g., successfully intercept the object. A loss function that approximates such “all-or-nothing” characteristics (e.g., the L_{0} loss) indeed predicts a negative proportionality coefficient (*Supporting Information*). Finally, perceived weight, one of the classical examples for illustrating Weber’s law (24), also seems to be consistent with the proposed bias–threshold pattern (41).
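The Weber's-law prediction follows from one line of calculus: if $D(s) = ws$, then $\frac{d}{ds}(ws)^2 = 2w^2 s$, linear in the stimulus. A minimal numeric sketch (the Weber fraction and stimulus range are arbitrary choices):

```python
import numpy as np

# If threshold follows Weber's law, D(s) = w * s, Eq. 4 predicts a bias linear
# in stimulus magnitude: b(s) ~ d/ds (w * s)^2 = 2 * w^2 * s.
w = 0.08                                   # hypothetical Weber fraction
s = np.linspace(0.5, 10.0, 500)
bias = np.gradient((w * s) ** 2, s)

# Fit a line over interior points (central differences are exact for quadratics):
slope, intercept = np.polyfit(s[1:-1], bias[1:-1], 1)
assert abs(slope - 2.0 * w ** 2) < 1e-9
assert abs(intercept) < 1e-9
```

A negative proportionality coefficient, as reported for visual speed, would flip the sign of the slope but keep the linearity.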

The last group contains bias–threshold patterns that are not intrinsic to individual specific variables but are induced by contextual modulation (Fig. 4). Spatial context, as in the tilt illusion, can induce characteristic repulsive biases in the perceived stimulus orientation away from the orientation of the spatial surround (30). The corresponding change in discrimination threshold (29) matches well the predicted pattern based on our theoretically derived relation. A similar bias–threshold pattern has been reported when probing motion direction instead of orientation (31). Furthermore, similar patterns have been observed for temporal context, i.e., as the result of adaptation. Adaptation-induced biases and changes in discrimination threshold for perceived visual orientation (32, 33) and spatial frequency (34, 35) are qualitatively in agreement with the prediction. At a slightly longer time scale, perceptual learning is also known to reduce discrimination thresholds. We therefore predict that perceptual learning also induces repulsive biases away from the learned stimulus value. This prediction is indeed confirmed by data for learning orientation (36) and motion direction (42), although the existing data are sparse. Finally, attention is known to decrease discrimination threshold (43). We predict that this decrease should coincide with a repulsive bias in the perceived stimulus variable. Although limited in extent, data from a Vernier gap size estimation experiment are in agreement with this prediction (37) and are further supported by recent results (44). In sum, the derived relation readily explains a wide array of empirical observations across different perceptual variables, sensory modalities, and contextual modulations.

Based on the strong empirical support, we argue that we have identified a universal law of human perception. It provides a unified and parsimonious characterization of the relation between discrimination threshold and perceptual bias, the two main psychophysical measures characterizing the perception of a stimulus variable. Only very few quantitative laws are known in perceptual science, among them the Weber–Fechner law (2, 24) and Stevens’ law (45). These laws express simple empirical regularities that provide a compact yet generally valid description of the data. The law we have proposed here shares the same virtue. Unlike these previous laws, however, our law is not the result of empirical observations but was derived from theoretical considerations of optimal encoding and decoding (4). As such, it does not merely describe perceptual behavior but reflects our understanding of why perception exhibits such characteristics in the first place. Note that, without the theory, we would not have discovered the general empirical regularity between discrimination threshold and bias. It is conceivable that some of the theoretical assumptions we made in deriving the law may prove incorrect (see Fig. S1) despite the fact that the law itself is empirically well supported. It is difficult to imagine, however, how a lawful relation between perceptual bias and discrimination threshold could emerge without a functional constraint that tightly links the encoding and decoding processes of perception (Fig. 2*A*).

The law allows us to predict either perceptual bias based on measured data for discrimination threshold or vice versa. One general prediction is that stimulus variables that follow Weber’s law should exhibit perceptual biases that are linearly proportional to the stimulus value as demonstrated with examples in Fig. 3*D*. Furthermore, because perceptual illusions are often examples of a strong form of perceptual bias induced by changes in context, we predict that these illusions should be accompanied with substantial threshold changes according to our law.

Perceptual biases can arise for different reasons, not all of which are aligned with the assumptions we made in our derivations. In particular, because we assumed that the uncertainty in the inference process is predominantly due to internal (neural) noise of the observer, we do not expect the proposed law to hold under conditions where stimulus ambiguity/noise is the dominant source of uncertainty. In this case, we expect discrimination threshold to be mainly determined by the stimulus uncertainty and not the prior expectations as we have assumed (Eq. **2**).

It is worth noting that the law can also be expressed in terms of perceptual variance rather than discrimination threshold. This can be useful because psychophysical experiments designed for measuring perceptual bias (e.g., by a method of adjustment) often record the variance in subjects’ estimates as well. Using the Cramér–Rao bound on the variance of a biased estimator, we can rewrite Eq. **4** as

$$b(s) \propto \frac{d}{ds}\left[\frac{\mathrm{var}(s)}{\left(1+b'(s)\right)^{2}}\right],$$

which, for small noise and thus small bias, approximates $b(s) \propto \frac{d}{ds}\,\mathrm{var}(s)$ (see *Supporting Information* for details).

Last but not least, perhaps the most surprising finding is that the law seems to hold for bias and discriminability patterns induced by contextual modulation (Fig. 4). This implies not only that changes in encoding and decoding can happen immediately (e.g., spatial context), or at least on short time scales, but also that these changes are matched between encoding and decoding by relying on identical assumptions (i.e., prior expectations) about the structure of the sensory environment. This fundamentally contrasts with existing theories that assume mismatches between encoding and decoding (i.e., the “coding catastrophe”) to be responsible for many of the known contextually modulated bias effects (46). It also contrasts with findings that put the locus of perceptual learning either at the encoding (47) or at the decoding (48) level; we predict learning to occur at both levels. Whether these contextual priors actually match the stimulus distributions within these contexts or not is unclear and remains a subject for future studies. Data from spatial attention experiments (Fig. 4) at least suggest that they may reflect subjective rather than objective expectations. This would imply that the distinction is not relevant in the context of the observer model considered here (Fig. 2*A*) and that efficient encoding and Bayesian decoding are both optimized for identical prior expectations, irrespective of whether these expectations are subjective or objective. We believe that the proposed law and its underlying theoretical assumptions have profound implications for the computations and neural mechanisms governing perception, which we have just started to explore.

## SI Text

This document consists of three parts: The first part provides derivations of all of the components (i.e., essentially Eqs. **1**–**3**) used in expressing the relation between perceptual bias and discrimination threshold. Note that most of these derivations have been described before, and we include them here only to make the paper self-contained. In the second part, we derive the relation between perceptual bias and variance (rather than discrimination threshold). Finally, in the last part, we prove that the proposed relation between discrimination threshold (or variance) and perceptual bias is not limited to the mean squared error ($L_2$) loss of the Bayesian decoder but generalizes to a large class of symmetric loss functions.

### Part 1: Relation Between Perceptual Bias and Discrimination Threshold.

#### Relation between Fisher information and stimulus distribution (prior).

A key assumption of the used observer model (4) is that encoding is efficient in the sense that it seeks to maximize the mutual information $I(s;m)$ between the stimulus $s$ and its internal sensory representation $m$, subject to a limit on the overall coding resources.

We have previously shown (4, 13) that, by imposing a constraint that bounds the total coding resources of the form $\int \sqrt{J(s)}\,ds \le c$ (Eq. **S2**), mutual information is maximized when the derivative of the resulting constrained objective is set to zero. This is the case if

$$\sqrt{J(s)} \propto p(s), \quad [\mathrm{S4}]$$

which we refer to as the “efficient coding constraint.”
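This allocation can be checked numerically. The sketch below uses the standard asymptotic lower bound on mutual information, $I(s;m) \ge H(s) - E_p[\tfrac{1}{2}\log(2\pi e/J(s))]$, so that maximizing information amounts to maximizing $E_p[\log J(s)]$ under the resource constraint; the prior and the budget $C$ are arbitrary illustrative choices.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 1001)
p = 1.0 + 0.8 * np.sin(2.0 * np.pi * s)     # hypothetical prior on [0, 1]
p /= np.trapz(p, s)                          # normalize to a density
C = 10.0                                     # total coding resource budget

def info_lb(sqrt_J):
    """Mutual-information lower bound, up to J-independent constants:
    (1/2) * E_p[log J(s)]."""
    return 0.5 * np.trapz(p * np.log(sqrt_J ** 2), s)

opt = C * p                                  # sqrt(J) ~ p(s), meets the budget
uniform = np.full_like(s, C)                 # alternative allocation, same budget
assert abs(np.trapz(opt, s) - C) < 1e-9
assert abs(np.trapz(uniform, s) - C) < 1e-9
assert info_lb(opt) > info_lb(uniform)       # prior-matched allocation wins
```

The gap between the two allocations equals the Kullback–Leibler divergence between the prior and the uniform density, so any non-flat prior strictly favors the prior-matched allocation.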

#### Relation between Fisher information and discrimination threshold.

Fisher information provides a lower bound on the discrimination threshold (5), a bound that holds also for biased estimators (6). Below, we provide a shortened version of the derivation for the biased estimator, following ref. 6.

Consider two stimulus values $s_1$ and $s_2 = s_1 + \delta s$. Their discriminability can be quantified by the sensitivity index $d'$ (*d*-prime) between the distributions of the corresponding estimates.

By writing these estimate distributions in terms of their means and variances, and then (*i*) approximating the bias as locally linear and (*ii*) approximating the variances via Eq. **S5**, we can express $d'$ in terms of the Fisher information. We will refer to this expression as Eq. **S9**. Setting $d'$ to a fixed criterion value (Eq. **S10**), we then can express the discrimination threshold as the stimulus increment that reaches this criterion; solving Eq. **S10** with Eqs. **S11** and **S9**, we find that the discrimination threshold is lower bounded by the Fisher information, i.e., $D(s) \propto 1/\sqrt{J(s)}$ in the small-noise limit.
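The logic of this bound can be illustrated with an idealized simulation (an unbiased Gaussian estimator with an assumed Fisher information $J$; the criterion $d' = 1$ is a convention):

```python
import numpy as np

rng = np.random.default_rng(1)

# For an estimator with variance 1/J, the sensitivity index between s and
# s + delta is d' = delta * sqrt(J); at the criterion d' = 1 the threshold
# is therefore D(s) = 1 / sqrt(J(s)).
J = 25.0
delta = 1.0 / np.sqrt(J)                 # predicted threshold at criterion d' = 1
a = rng.normal(0.0, 1.0 / np.sqrt(J), 200_000)
b = rng.normal(delta, 1.0 / np.sqrt(J), 200_000)
d_prime = (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))
assert abs(d_prime - 1.0) < 0.02         # empirical d' matches the criterion
```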

#### Relation between perceptual bias and stimulus distribution (L_{2} loss).

We have previously derived an analytic expression for the perceptual bias of the Bayesian observer under small-noise conditions (4),

$$b(s) \propto \frac{d}{ds}\left[\frac{1}{p(s)^{2}}\right]. \quad [\mathrm{S14}]$$

As we show in *Generalization to Different Loss Functions*, this functional relation holds for a large family of symmetric loss functions, including some of the most commonly used loss functions (e.g., the $L_0$, $L_1$, and $L_2$ losses). Here, we state Eq. **S14** for the special case of mean squared error ($L_2$) loss.

#### Putting it together: The relation between perceptual bias and discrimination threshold.

By combining Eqs. **S4**, **S13**, and **S14**, we obtain our main result, the proposed relation between perceptual bias and discrimination threshold,

$$b(s) \propto \frac{d}{ds}\left[D(s)^{2}\right].$$

In *Generalization to Different Loss Functions*, we will prove that this relation generalizes to a large class of symmetric loss functions of the Bayesian decoder.

#### Validation of the analytic expressions for discrimination threshold and perceptual bias.

The analytic expressions for both the discrimination threshold and the perceptual bias (Eqs. **2** and **3**) relied on a set of assumptions and approximations (mostly with regard to the magnitude of the noise). To provide some insight into how these assumptions affect our results, we simulated the full Bayesian observer model (as briefly described in *Generalization to Different Loss Functions* and in more detail in ref. 4) for various sensory (internal) noise magnitudes, and then compared the bias and threshold curves of the observer model with those predicted by the two analytic expressions.

Fig. S1 shows this comparison for increasing levels of noise. As expected, the approximations are good under small-noise conditions and smoothly degrade with increasing noise levels. Notably, even at the largest noise level shown, where the simulated bias and threshold curves visibly deviate from the analytic approximations (Eqs. **2** and **3**), they still almost perfectly obey the proposed law.
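A stripped-down version of such a simulation fits in a few lines. The sketch below is a simplified stand-in for the full model in ref. 4 (the prior, noise level, grid, and posterior-mean decoding are all assumed for illustration): encoding maps the stimulus through the prior CDF into a homogeneous internal space with Gaussian noise, decoding inverts this with the same prior, and the resulting bias is compared against the small-noise prediction of Eq. **3**.

```python
import numpy as np

rng = np.random.default_rng(0)

ds = np.pi / 1024
grid = np.arange(0.0, np.pi, ds)            # stimulus grid on [0, pi)
prior = 2.0 + np.cos(4.0 * grid)            # hypothetical prior, peaks at 0 and pi/2
prior /= prior.sum() * ds                   # normalize to a density

cdf = np.cumsum(prior) * ds                 # efficient encoding: s -> F(s)
sigma = 0.03                                # internal sensory noise (small-noise regime)

def mean_estimate(s_true, n_trials=8000):
    """Average posterior-mean (L2 loss) estimate over repeated measurements."""
    x = np.interp(s_true, grid, cdf)
    m = x + sigma * rng.standard_normal((n_trials, 1))
    logpost = -0.5 * ((m - cdf) / sigma) ** 2 + np.log(prior)
    w = np.exp(logpost - logpost.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ grid).mean()

s_test = np.linspace(0.4, np.pi - 0.4, 21)  # interior points, away from the wrap
bias_sim = np.array([mean_estimate(s) - s for s in s_test])

pred = np.gradient(1.0 / prior ** 2, grid)  # Eq. 3: bias ~ d/ds [1/p(s)^2]
bias_pred = np.interp(s_test, grid, pred)

r = np.corrcoef(bias_sim, bias_pred)[0, 1]
assert r > 0.8                              # simulated bias tracks the prediction
```

The simulated bias is repulsive away from the prior peaks, with a shape closely tracking $\frac{d}{ds}[1/p(s)^2]$; increasing `sigma` reproduces the smooth degradation of the approximation seen in Fig. S1.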

### Part 2: Relation Between Estimation Bias and Variance.

The relation between discrimination threshold and bias can be rewritten in terms of estimation variance rather than threshold. This can be useful when trying to analyze datasets obtained with an estimation task (e.g., the method of adjustment). These datasets typically consist of a subject’s estimates over repeated trials, and thus allow simultaneous estimation of perceptual bias and variance. While discrimination threshold and estimation variance are directly related, the reformulation in terms of variance results in a slightly more complex expression.

We start with the Cramér–Rao bound on the variance of a biased estimator, that is,

$$\mathrm{var}(s) \ge \frac{\left(1+b'(s)\right)^{2}}{J(s)}. \quad [\mathrm{S17}]$$

Using the efficient coding constraint Eq. **S4** of our Bayesian observer model, we can express the variance in terms of the prior expectation as $\mathrm{var}(s) \propto (1+b'(s))^{2}/p(s)^{2}$. Combining Eq. **S17** with the above-derived dependency of the bias on the prior expectation, Eq. **S14**, we find

$$b(s) \propto \frac{d}{ds}\left[\frac{\mathrm{var}(s)}{\left(1+b'(s)\right)^{2}}\right]. \quad [\mathrm{S18}]$$

Compared with Eq. **S14**, the formulation is identical for variance except that the variance is scaled by the factor $(1+b'(s))^{2}$. For low noise levels, the biases become small, and thus, assuming reasonably smooth bias curves, the term $(1+b'(s))^{2}$ in Eq. **S18** approximates 1. The relation between variance and bias then again approximates the simpler form of Eq. **S20**. As the noise increases, the term $(1+b'(s))^{2}$ increasingly deviates from 1, and the two formulations diverge.
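The consistency of this reformulation can be checked numerically; the exponential prior, the resource constant, and the bias coefficient below are arbitrary illustrative assumptions.

```python
import numpy as np

# With sqrt(J) ~ p (Eq. S4) and bias b ~ d/ds [1/p^2] (Eq. S14), the
# Cramér-Rao expression var = (1 + b')^2 / J (Eq. S17, met with equality)
# recovers b ~ d/ds [var / (1 + b')^2].
s = np.linspace(0.1, 3.0, 3000)
p = np.exp(-s) / (np.exp(-0.1) - np.exp(-3.0))   # hypothetical prior on [0.1, 3]
J = (4.0 * p) ** 2                                # sqrt(J) = c * p, here c = 4
b = 0.01 * np.gradient(1.0 / p ** 2, s)           # Eq. S14, assumed coefficient
bp = np.gradient(b, s)
var = (1.0 + bp) ** 2 / J                         # Cramér-Rao, with equality

lhs = np.gradient(var / (1.0 + bp) ** 2, s)       # reduces to d/ds [1/J]
ratio = lhs / b                                   # should be a constant
assert np.allclose(ratio, ratio.mean(), rtol=1e-3)
```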

The two predictions remain qualitatively similar, which was the reason for us to include the pointing direction dataset (20) in our collection of examples (Fig. 3) even though it reports estimation variance rather than discrimination threshold.

### Part 3: Generalization to Different Loss Functions.

Below, we show that the expression for bias (Eq. **S15**) generalizes to a large class of symmetric loss functions. Therefore, the main result of the paper, the predicted relation between discrimination threshold (or variance) and perceptual bias, is general. We first focus on the family of $L_p$ loss functions and then generalize to arbitrary symmetric loss functions.

#### Bayesian observer model with efficient sensory representation.

We consider a Bayesian observer model that relies on the efficient encoding of sensory information (4). Given a stimulus with value $s$, the observer makes a noisy sensory measurement $m$ of the stimulus and then infers the stimulus value from $m$ by Bayesian inference.

Due to the efficient coding constraint Eq. **S4** of the observer model, the mapping from stimulus space to the internal sensory space is given by the cumulative distribution function of the prior; in this internal space, the sensory representation is homogeneous, and we assume the sensory noise there to be Gaussian.

With the above assumption of Gaussian noise, the posterior distribution can be expressed in closed form (Eq. **S24**). Note that we have previously derived the bias for the $L_2$ loss (posterior mean) in ref. 4; below, we derive the bias for the remaining loss functions.

#### L_{1} loss.

The Bayesian estimate that minimizes the $L_1$ loss is the median of the posterior (Eq. **S22**). Expanding the posterior around its mode and combining Eqs. **S26** and **S31**, we then can express the bias for the $L_1$ estimator, which again follows the form of Eq. **S14**.

#### L_{0} loss (MAP estimate).

The Bayesian estimate that minimizes the $L_0$ loss is the mode of the posterior (MAP estimate; Eq. **S23**), which is equivalent to computing the mode of the logarithm of the posterior. Rewriting Eq. **S33** with Eq. **S31**, we then can express the bias for the MAP estimator, which again follows the form of Eq. **S14**, although with a negative proportionality coefficient.

#### L_{2q} loss function (q > 1, and q is an integer).

The Bayesian estimate under the $L_{2q}$ loss minimizes the posterior expected loss. Again, the goal is to find the estimator for which the derivative of the expected loss (Eq. **S22**) vanishes. Expanding the posterior around its mode, the resulting condition (Eq. **S51**) is a continuous function of the estimate whose higher-order terms are of the order of the noise magnitude. For small noise, imposing condition Eq. **S52** on the expansion coefficients and applying Eq. **S31**, the bias for the $L_{2q}$ estimator again follows the form of Eq. **S14**.

Therefore, the proposed relation between perceptual bias and discrimination threshold holds for all $L_{2q}$ losses.

#### L_{2q−1} loss function (q > 1, and q is an integer).

We can use techniques similar to those for the $L_{2q}$ loss to derive the bias under the $L_{2q-1}$ loss. After simplifying notation and handling the absolute-value terms introduced by the odd exponent, the same expansion applies, and the first-order condition again reduces to the form of Eq. **S31**. Thus, the bias under the $L_{2q-1}$ loss also follows the form of Eq. **S14**.

#### General symmetric loss function.

For an arbitrary symmetric loss function that can be expressed as a nonnegatively weighted combination of the $L_p$ losses considered above, the resulting bias is a corresponding combination of the individual biases, each of which follows the form of Eq. **S14**. We thus conclude that, under small-noise conditions, the proposed relation between perceptual bias and discrimination threshold (or variance) holds for a large class of symmetric loss functions.

## Acknowledgments

We thank V. De Gardelle, S. Kouider, and J. Sackur for sharing their data. We thank Josh Gold and Michael Eisenstein for helpful comments and suggestions on earlier versions of the manuscript. This research was performed while both authors were at the University of Pennsylvania. We thank the Office of Naval Research for supporting the work (Grant N000141110744).

## Footnotes

^{1}To whom correspondence should be addressed. Email: astocker@psych.upenn.edu.

Author contributions: X.-X.W. and A.A.S. designed research, performed research, analyzed data, and wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1619153114/-/DCSupplemental.

## References

- Helmholtz Hv
- Fechner GT
- Green D, Swets J
- Seung H, Sompolinsky H
- Wei XX, Stocker AA
- Barlow HB
- Wei XX, Stocker AA
- De Gardelle V, Kouider S, Sackur J
- Smyrnis N, Mantas A, Evdokimidis I
- Welchman AE, Lam JM, Bülthoff HH
- Weber EH, *The Sense of Touch*, trans Ross HE, Murray DJ (Academic, San Diego)
- Georgeson M, Ruddock K
- Vintch B, Gardner JL
- Solomon JA, Morgan MJ
- Regan D, Beverley K
- Szpiro S, Spering M, Carrasco M
- Carrasco M, McElree B
- Cutrone E
- Kullback S, Leibler RA

## Article Classifications

- Biological Sciences
- Neuroscience

- Social Sciences
- Psychological and Cognitive Sciences