Latent structure in random sequences drives neural learning toward a rational bias
- (a) Center for Biomedical Informatics, Texas A&M University Health Science Center, Houston, TX 77030;
- (b) Department of Psychology and Neuroscience, University of Colorado, Boulder, CO 80309;
- (c) Center for Neural and Emergent Systems, Information and System Sciences Lab, HRL Laboratories LLC, Malibu, CA 90265; and
- (d) Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
Edited by Michael I. Posner, University of Oregon, Eugene, OR, and approved February 4, 2015 (received for review November 18, 2014)

Significance
The human mind has a unique capacity to find order in chaos. The way the neocortex integrates information over time enables the mind to capture rich statistical structures embedded in random sequences. We show that a biologically motivated neural network model reacts not only to how often a pattern occurs (mean time) but also to when a pattern is first encountered (waiting time). This behavior naturally produces the alternation bias in the gambler’s fallacy and provides a neural grounding for Bayesian models of human behavior in randomness judgments. Our findings support a rational account of human probabilistic reasoning and a unifying perspective that connects implicit learning without instruction with generalization under structured and expressive rules.
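To make the mean-time/waiting-time distinction concrete, the sketch below (an illustration, not the paper's model or code; function names and trial counts are arbitrary choices) simulates fair coin flips. The repetition HH and the alternation HT occur at the same rate, yet HT is first encountered after about 4 flips on average while HH takes about 6.

```python
# Illustrative sketch (not the paper's model): in fair coin sequences the
# patterns HH and HT are equally frequent, but their first-occurrence
# (waiting) times differ.
import random

def flips_until(pattern, rng):
    """Number of fair flips until the two-symbol `pattern` first appears."""
    prev = rng.choice("HT")
    n = 1
    while True:
        cur = rng.choice("HT")
        n += 1
        if prev + cur == pattern:
            return n
        prev = cur

def mean_waiting_time(pattern, n_trials=100_000, seed=0):
    """Average waiting time for `pattern` over many fresh sequences."""
    rng = random.Random(seed)
    return sum(flips_until(pattern, rng) for _ in range(n_trials)) / n_trials

def occurrence_rate(pattern, n_flips=200_000, seed=1):
    """Fraction of adjacent flip pairs in one long sequence equal to `pattern`."""
    rng = random.Random(seed)
    seq = "".join(rng.choice("HT") for _ in range(n_flips))
    return sum(seq[i:i + 2] == pattern for i in range(n_flips - 1)) / (n_flips - 1)

if __name__ == "__main__":
    for p in ("HH", "HT"):
        print(p, "rate ~", round(occurrence_rate(p), 3),
              "| waiting time ~", round(mean_waiting_time(p), 2))
    # Typical result: both rates are ~0.25, but the waiting time is
    # ~6 flips for HH versus ~4 flips for HT.
```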
Abstract
People generally fail to produce random sequences by overusing alternating patterns and avoiding repeating ones—the gambler’s fallacy bias. We can explain the neural basis of this bias in terms of a biologically motivated neural model that learns from errors in predicting what will happen next. Through mere exposure to random sequences over time, the model naturally develops a representation that is biased toward alternation, because of its sensitivity to some surprisingly rich statistical structure that emerges in these random sequences. Furthermore, the model directly produces the best-fitting bias-gain parameter for an existing Bayesian model, by which we obtain an accurate fit to the human data in random sequence production. These results show that our seemingly irrational, biased view of randomness can be understood instead as the perfectly reasonable response of an effective learning mechanism to subtle statistical structure embedded in random sequences.
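The analytic counterpart of that statistical structure is a standard first-step (Markov chain) calculation, given here as an illustration rather than as the paper's derivation. Let $T_{HH}$ and $T_{HT}$ be the numbers of fair flips until HH or HT first appears, $x$ (resp. $x'$) the expected waiting time from scratch, and $y$ (resp. $y'$) the expected remaining time given that the previous flip was H:

$$
\begin{aligned}
\text{HH:}\quad & x = 1 + \tfrac{1}{2}y + \tfrac{1}{2}x, \qquad y = 1 + \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}x
&&\Rightarrow\ \mathbb{E}[T_{HH}] = x = 6,\\
\text{HT:}\quad & x' = 1 + \tfrac{1}{2}y' + \tfrac{1}{2}x', \qquad y' = 1 + \tfrac{1}{2}y' + \tfrac{1}{2}\cdot 0
&&\Rightarrow\ \mathbb{E}[T_{HT}] = x' = 4.
\end{aligned}
$$

Both patterns nevertheless recur once every $1/(1/4) = 4$ flips on average, so a learner that integrates evidence over time encounters the alternation HT sooner even though it is no more frequent than the repetition HH.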
Footnotes
- ↵1To whom correspondence may be addressed. Email: ysun@tamhsc.edu or hwang@tamhsc.edu.
Author contributions: Y.S., R.C.O., and H.W. designed research; Y.S. and H.W. performed research; Y.S., R.C.O., J.W.S., and H.W. contributed new reagents/analytic tools; Y.S. and H.W. analyzed data; and Y.S., R.C.O., R.B., J.W.S., X.L., and H.W. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1422036112/-/DCSupplemental.
Freely available online through the PNAS open access option.
This article has a Letter. Please see:
- Relationship between Research Article and Letter - June 1, 2015
- What is countable and how neurons learn - June 1, 2015