Latent structure in random sequences drives neural learning toward a rational bias
- a Center for Biomedical Informatics, Texas A&M University Health Science Center, Houston, TX 77030;
- b Department of Psychology and Neuroscience, University of Colorado, Boulder, CO 80309;
- c Center for Neural and Emergent Systems, Information and System Sciences Lab, HRL Laboratories LLC, Malibu, CA 90265; and
- d Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
Edited by Michael I. Posner, University of Oregon, Eugene, OR, and approved February 4, 2015 (received for review November 18, 2014)

Significance
The human mind has a unique capacity to find order in chaos. The way the neocortex integrates information over time enables the mind to capture rich statistical structures embedded in random sequences. We show that a biologically motivated neural network model reacts not only to how often a pattern occurs (mean time) but also to when a pattern is first encountered (waiting time). This behavior naturally produces the alternation bias in the gambler’s fallacy and provides a neural grounding for Bayesian models of human behavior in randomness judgments. Our findings support a rational account of human probabilistic reasoning and a unifying perspective that connects implicit learning without instruction with generalization under structured and expressive rules.
Abstract
People generally fail to produce random sequences by overusing alternating patterns and avoiding repeating ones—the gambler’s fallacy bias. We can explain the neural basis of this bias in terms of a biologically motivated neural model that learns from errors in predicting what will happen next. Through mere exposure to random sequences over time, the model naturally develops a representation that is biased toward alternation, because of its sensitivity to some surprisingly rich statistical structure that emerges in these random sequences. Furthermore, the model directly produces the best-fitting bias-gain parameter for an existing Bayesian model, by which we obtain an accurate fit to the human data in random sequence production. These results show that our seemingly irrational, biased view of randomness can be understood instead as the perfectly reasonable response of an effective learning mechanism to subtle statistical structure embedded in random sequences.
People are prone to search for patterns in sequences of events, even when the sequences are completely random. In a famous game of roulette at the Monte Carlo casino in 1913, black repeated a record 26 times—people began extreme betting on red after about 15 repetitions (1). The gambler’s fallacy—a belief that chance is a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction—has been deemed a misperception of random sequences (2). For decades, this fallacy has been attributed to the “representativeness bias,” in which a sequence of events generated by a random process is expected to represent the essential characteristics of that process even when the sequence is short (3).
However, there is a surprising amount of systematic structure lurking within random sequences. For example, in the classic case of tossing a fair coin, where the probability of each outcome (heads or tails) is exactly 0.5 on every single trial, one would naturally assume that no interesting structure could emerge from such a simple and desolate form of randomness. And yet, if one records the average amount of time for a pattern to first occur in a sequence (i.e., the waiting time statistic), it is significantly longer for a repetition (head–head HH or tail–tail TT, six tosses) than for an alternation (HT or TH, four tosses). This is despite the fact that, on average, repetitions and alternations are equally probable (each occurring once in every four tosses, i.e., the same mean time statistic). For both of these facts to be true, repetitions must be more bunched together over time, coming in bursts with greater spacing between bursts than alternations. Intuitively, this difference arises because repetitions can build upon each other (e.g., the sequence HHH contains two instances of HH), whereas alternations cannot. Statistically, the mean time and waiting time delineate the mean and the variance, respectively, of the distribution of the interarrival times of patterns (4). Despite the same frequency of occurrence (i.e., the same mean), alternations are more evenly distributed over time than repetitions (i.e., different variances). Another source of insight comes from the transition graph (Fig. 1A), which reveals a structural asymmetry in the process of fair coin tossing: even when the process has the same chance of visiting any of the states, the minimum number of transitions it takes to leave and then revisit a repetition state is larger than that for an alternation state. Let p_A denote the probability of alternation between consecutive trials; the mean time and waiting time of each pattern can then be described as functions of p_A (Fig. 1).
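These two statistics are easy to verify numerically. The short simulation below is a sketch in Python (the function names and trial counts are arbitrary choices, not taken from the paper): it estimates the waiting time, the average number of tosses until a pattern first appears, and the mean time, the average spacing between overlapping occurrences in a long sequence, for HH vs. HT.

```python
import random

def waiting_time(pattern, trials=50_000):
    """Average number of tosses until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        seq = ""
        while not seq.endswith(pattern):
            seq += random.choice("HT")
        total += len(seq)
    return total / trials

def mean_time(pattern, n_tosses=500_000):
    """Average spacing between (overlapping) occurrences of `pattern`,
    i.e., the reciprocal of its rate of occurrence."""
    seq = "".join(random.choice("HT") for _ in range(n_tosses))
    count = sum(seq[i:i + 2] == pattern for i in range(n_tosses - 1))
    return (n_tosses - 1) / count

for p in ("HH", "HT"):
    print(p, "waiting time ~", round(waiting_time(p), 2),
          "| mean time ~", round(mean_time(p), 2))
# Typical output: HH waiting time ~ 6 vs. HT ~ 4, while both mean times ~ 4.
```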
Fig. 1. Time of patterns described by the probability of alternation between consecutive trials p_A.
Is this latent structure of waiting time just a strange mathematical curiosity, or could it have deep implications for our cognitive-level perceptions of randomness? It has been speculated that systematic biases in human randomness perception, such as the gambler’s fallacy, might be due to the greater variance in the interarrival times, or the “delayed” waiting time, of repetition patterns (4, 5). Here, we show that a neural model based on a detailed biological understanding of the way the neocortex integrates information over time when processing sequences of events (6, 7) is naturally sensitive to both the mean time and waiting time statistics. Indeed, its behavior is explained by a simple averaging of the influences of both of these statistics, and this behavior emerges in the model over a wide range of parameters. Furthermore, this averaging dynamic directly produces the best-fitting bias-gain parameter for an existing Bayesian model of randomness judgments (8), a parameter that was previously unexplained and obtained only through fitting. We also show that we can extend this Bayesian model to better fit the full range of human data by including a higher-order pattern statistic, and the neurally derived bias-gain parameter still provides the best fit to the human data in the augmented model. Overall, our model provides a neural grounding for the pervasive gambler’s fallacy bias in human judgments of random processes, where people systematically discount repetitions and emphasize alternations (9, 10).
Neural Model of Temporal Integration
Our neural model is extremely simple (Fig. 2A). It consists of a sensory input layer that scans nonoverlapping binary digits of H vs. T and an internal prediction layer that attempts to predict the next input, while the prior inputs in the sequence are encoded in the temporal context. The model is based on a biologically motivated computational framework that has been used to explain the neural basis of cognition in a wide range of different domains (6), with the benefit of integrating prior temporal context information according to the properties of the deep neocortical neurons (layers 5 and 6) (7).
Fig. 2. Neural model of temporal integration to capture the statistics of pattern times in random sequences. (A) Architecture of the neural model. A single sensory input layer scans through a sequence of binary digits one digit at a time (input at time t), and the prediction layer attempts to predict the next digit from the current input and the temporal context of prior inputs. (B) Model behavior as a function of the probability of alternation in the training sequences (see text).
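As a rough illustration of this predict-next setup (not the authors’ Leabra-based model, and not claimed to reproduce the reported bias), the sketch below uses a tiny echo-state-style recurrent network: a fixed random hidden layer integrates the recent input history, and only a readout weight vector is trained by an error-driven delta rule to predict the next symbol of a fair-coin stream. All layer sizes, learning rates, and the probing procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 16
W_in = rng.normal(0.0, 0.5, (2, n_hidden))          # input -> hidden
W_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))  # hidden(t-1) -> hidden(t)
w_out = np.zeros(n_hidden)                           # hidden -> P(next = H)
lr = 0.02

def one_hot(sym):
    return np.array([1.0, 0.0]) if sym == "H" else np.array([0.0, 1.0])

seq = rng.choice(list("HT"), size=50_000)            # fair-coin training stream

h = np.zeros(n_hidden)
for t in range(len(seq) - 1):
    h = np.tanh(one_hot(seq[t]) @ W_in + h @ W_rec)  # temporal integration
    p_H = 1.0 / (1.0 + np.exp(-(h @ w_out)))         # predicted P(next = H)
    target = 1.0 if seq[t + 1] == "H" else 0.0
    w_out += lr * (target - p_H) * h                 # error-driven (delta-rule) update

# Probe: how strongly does the trained readout predict an alternation?
h = np.zeros(n_hidden)
p_alt = []
for t in range(len(seq) - 1):
    h = np.tanh(one_hot(seq[t]) @ W_in + h @ W_rec)
    p_H = 1.0 / (1.0 + np.exp(-(h @ w_out)))
    p_alt.append(1.0 - p_H if seq[t] == "H" else p_H)
print("mean predicted P(alternation):", round(float(np.mean(p_alt)), 3))
```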
Our main hypothesis is that the predictive learning and temporal integration properties of this model, which reflect corresponding features of the neocortex, will produce representations that incorporate both the waiting time and mean time statistics of the input sequences (despite the inability of the model to accurately predict the next input in these random sequences). In other words, we predict a systematic interaction between these basic learning mechanisms and the surprisingly rich statistical structure of the input. This is a principled prediction based on the well-established sensitivity of these kinds of neural learning mechanisms to the statistical structure of inputs (e.g., ref. 11), and extensive parameter exploration demonstrates that our results hold across a wide range of parameters and that the model’s behavior is systematically affected by certain parameters in sensible ways (SI Text). Thus, despite the emergent nature of learning in our model, it nevertheless reflects systematic behavior and is not the result of a random parameter-fitting exercise. Moreover, we show below that the model’s behavior can be largely explained by a simple equation as a function of the mean time and waiting time statistics, further establishing the systematicity of the model’s behavior and also establishing a direct connection to more abstract Bayesian models as elaborated subsequently.
The model was trained with binary sequences generated at various levels of the probability of alternation p_A.
Most intriguingly, at p_A = 0.5 (i.e., truly random sequences from fair coin tossing), the model nevertheless developed a systematic prediction bias toward alternation.
We then used the trained model’s prediction responses to quantify this bias as a subjective probability of alternation.
To further characterize the nature of the alternation bias, we systematically varied the probability of alternation p_A used to generate the training sequences and measured the model’s resulting bias at each level (Fig. 2B).
Overall, the model’s behavior can be largely replicated by a simple equation that averages the effects of the mean time and waiting time statistics (dotted line in Fig. 2B).
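For a concrete sense of how the waiting time asymmetry itself varies with the probability of alternation, the earlier simulation can be extended to sequences generated by a first-order alternation process (a sketch under the assumption that training sequences are generated this way; the function names and parameter values are illustrative):

```python
import random

def next_symbol(prev, p_alt):
    """Generate the next symbol, alternating from `prev` with probability p_alt."""
    if random.random() < p_alt:
        return "T" if prev == "H" else "H"
    return prev

def waiting_time_markov(pattern, p_alt, trials=20_000):
    """Average sequence length until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        seq = random.choice("HT")
        while not seq.endswith(pattern):
            seq += next_symbol(seq[-1], p_alt)
        total += len(seq)
    return total / trials

for p_alt in (0.3, 0.5, 0.7):
    print(f"p_alt={p_alt}:",
          "HH ~", round(waiting_time_markov("HH", p_alt), 1),
          "| HT ~", round(waiting_time_markov("HT", p_alt), 1))
# p_alt = 0.5 reproduces the 6 vs. 4 waiting times of a fair coin; as p_alt
# increases, repetitions become rarer and the HH waiting time grows further.
```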
Bayesian Models of Random Sequence Production
A unifying perspective on human statistical learning requires bridging the gap between implicit learning without instruction and the generalization of learned patterns under structured and expressive rules (12). On the one hand, our neural model shows that through mere exposure to a set of input stimuli, a systematic bias developed toward the alternation pattern in random sequences. On the other hand, recent Bayesian accounts of probabilistic learning suggest that the human mind performs rational inferences at both neural and behavioral levels (13, 14). Thus, we asked whether it was possible to relate the emergent behavior of the neural model to an existing Bayesian model of randomness judgments, specifically whether we could demonstrate a quantitative connection between the bias for local patterns at the neural level and the behavior of generating longer random sequences governed by the rules of Bayesian inference.
In the Bayesian model of Griffiths and Tenenbaum (8), the probability that people produce a particular binary sequence depends on how well the sequence balances heads against tails and repetitions against alternations, with a scaling parameter λ controlling the strength of this adjustment (Eq. 5).
Because of the binomial coefficient in this formulation, sequences in which these counts are balanced receive higher probability than unbalanced ones.
This Bayesian model was then used to fit a massive database from the “Zenith radio experiment,” where 20,099 participants attempted to produce five random binary symbols one at a time (15). It was found that a λ value of around 0.60 produced the optimal fit to 15 of the 16 data points from the human data (Fig. 3A).
Fig. 3. Bayesian models fitted to human data in random sequence production. (A) Probabilities of the generated random sequences, collapsed over the first choice (e.g., HHHHH is combined with TTTTT). Human data represent the responses of 20,099 participants in the Zenith radio experiment (15).
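To make the role of λ concrete, the sketch below scores every length-5 binary sequence under a simple first-order alternation-biased generator, P(s) = 0.5 * λ^(number of alternations) * (1 - λ)^(number of repetitions). This particular parameterization is an illustrative assumption rather than the exact form of Eq. 5, but it shows how λ near 0.6 shifts probability toward alternation-heavy sequences.

```python
from itertools import product

LAM = 0.60  # bias-gain / subjective probability of alternation (illustrative value)

def score(seq, lam=LAM):
    """Probability of `seq` under a first-order alternation-biased generator:
    the first symbol has probability 0.5, and each later symbol alternates
    from the previous one with probability `lam`."""
    p = 0.5
    for prev, curr in zip(seq, seq[1:]):
        p *= lam if prev != curr else (1.0 - lam)
    return p

sequences = ["".join(s) for s in product("HT", repeat=5)]
for s in sorted(sequences, key=score, reverse=True)[:4]:
    print(s, round(score(s), 4))
# With lam = 0.6, fully alternating sequences such as HTHTH score highest,
# mirroring the over-ranking of HTHTH discussed next.
```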
The data point that Griffiths and Tenenbaum (8) did not predict well was the sequence HTHTH, which people judged not to be a very good random sequence, but which Eq. 5 ranked highly. It seems that in generating random sequences, people were seeking a balance not only between heads and tails but also between higher-order pattern events (e.g., alternation itself is repeated four times in HTHTH). We can add this mechanism into the Bayesian model with an additional term that extends the same balancing to these higher-order pattern events.
Applying the same scaling factor λ to both the first-order and the higher-order terms yields an augmented model (Eq. 7) that better fits the full range of human data, including the HTHTH sequence.
The Bayesian model provides a formalization of the representativeness heuristic (2, 3, 8, 14). It captures the idea that when generating random sequences, people are seeking a balance between heads and tails and between repetitions and alternations not only in the global sequence, but also in the local subsequences (10). Apparently, the extent to which this balance is adjusted is determined by the free scaling parameter λ in Eqs. 5 and 7. However, beyond parameter fitting, the Bayesian model does not have any independent basis for specifying this parameter.
In light of the neural model’s behavior (i.e., the alternation bias in Eq. 1), we predict that the scaling parameter λ should have originated naturally from people’s actual experiences of random sequences in the learning environment. Specifically, we can deduce from either Eq. 5 or Eq. 7 that λ actually serves as a bias-gain parameter that modulates the strength of the alternation bias.
We are then able to show that λ can actually be derived from the behavior of the neural model: substituting the alternation bias learned by the neural model (Eq. 1) into this relation yields Eq. 9.
The implication of Eq. 9 is that the naturally emergent properties of the neural model can in effect provide an independent anchor to the previously free parameter in the Bayesian model. Specifically, it shows that the bias-gain parameter λ is anchored to the alternation bias, which has been learned by the neural model through mere exposure to random sequences of fair coin tossing. Moreover, Eq. 9 is in accord with both the subjective probability of alternation that people exhibit in randomness tasks (9, 10) and the best-fitting value of λ obtained from the human data.
Conclusion
We find that the latent structure in simple probabilistic sequences shapes the learning dynamics in a neural model, producing an alternative “rational” explanation for what has generally been considered a curious failure of human probabilistic understanding. Our findings demonstrate that the waiting time statistics can be captured implicitly by the learning mechanism of temporal integration, without instruction, through mere exposure to the input stimuli. This supports the claim that the human mind might have evolved an accurate sense of randomness from the learning environment but may fail to reveal it by the criterion of a particular measuring device (16). For example, the alternation bias, as a result of averaging the mean time and waiting time statistics, would be judged as “irrational” if it were measured against the mean time statistic alone.
In addition, our results highlight the connection between the temporally distributed predictive learning (6, 7, 11, 17) and abstract structured representations (8, 14). The remarkable fit of the parameters derived from this neural model with a Bayesian model derived from very different considerations reinforces the idea that the temporal integration mechanisms in our neural model provide a good account of human information integration over time. This ability to bridge between levels of analysis represents a rare and important development, with the potential to both ground the abstract models in underlying neural mechanisms and provide a simpler explanatory understanding of the emergent behavior of the neural models.
Acknowledgments
This work was supported by the Air Force Office of Scientific Research Grant FA9550-12-1-0457, the Office of Naval Research Grants N00014-08-1-0042 and N00014-13-1-0067, Intelligence Advanced Research Projects Activity via Department of the Interior Contract D10PC20021, and National Natural Science Foundation of China (31328013). The US Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
Footnotes
- 1 To whom correspondence may be addressed. Email: ysun@tamhsc.edu or hwang@tamhsc.edu.
Author contributions: Y.S., R.C.O., and H.W. designed research; Y.S. and H.W. performed research; Y.S., R.C.O., J.W.S., and H.W. contributed new reagents/analytic tools; Y.S. and H.W. analyzed data; and Y.S., R.C.O., R.B., J.W.S., X.L., and H.W. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1422036112/-/DCSupplemental.
Freely available online through the PNAS open access option.
References
1. Huff D
2.
3. Tversky A, Kahneman D
4. Sun Y, Wang H
5.
6. O’Reilly RC, Munakata Y, Frank MJ, Hazy TE
7. O’Reilly RC, Wyatte D, Rohrlich J
8. Griffiths TL, Tenenbaum JB
9.
10.
11.
12. Aslin RN, Newport EL
13.
14. Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND
15.
16. Pinker S
17. Gerstner W, Sprekeler H, Deco G