Research Article

Latent structure in random sequences drives neural learning toward a rational bias

Yanlong Sun, Randall C. O’Reilly, Rajan Bhattacharyya, Jack W. Smith, Xun Liu, and Hongbin Wang
  a Center for Biomedical Informatics, Texas A&M University Health Science Center, Houston, TX 77030;
  b Department of Psychology and Neuroscience, University of Colorado, Boulder, CO 80309;
  c Center for Neural and Emergent Systems, Information and System Sciences Lab, HRL Laboratories LLC, Malibu, CA 90265; and
  d Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101


PNAS March 24, 2015 112 (12) 3788-3792; first published March 9, 2015; https://doi.org/10.1073/pnas.1422036112
Edited by Michael I. Posner, University of Oregon, Eugene, OR, and approved February 4, 2015 (received for review November 18, 2014)


Significance

The human mind has a unique capacity to find order in chaos. The way the neocortex integrates information over time enables the mind to capture rich statistical structures embedded in random sequences. We show that a biologically motivated neural network model reacts to not only how often a pattern occurs (mean time) but also when a pattern is first encountered (waiting time). This behavior naturally produces the alternation bias in the gambler’s fallacy and provides a neural grounding for the Bayesian models of human behavior in randomness judgments. Our findings support a rational account for human probabilistic reasoning and a unifying perspective that connects the implicit learning without instruction with the generalization under structured and expressive rules.

Abstract

People generally fail to produce random sequences by overusing alternating patterns and avoiding repeating ones—the gambler’s fallacy bias. We can explain the neural basis of this bias in terms of a biologically motivated neural model that learns from errors in predicting what will happen next. Through mere exposure to random sequences over time, the model naturally develops a representation that is biased toward alternation, because of its sensitivity to some surprisingly rich statistical structure that emerges in these random sequences. Furthermore, the model directly produces the best-fitting bias-gain parameter for an existing Bayesian model, by which we obtain an accurate fit to the human data in random sequence production. These results show that our seemingly irrational, biased view of randomness can be understood instead as the perfectly reasonable response of an effective learning mechanism to subtle statistical structure embedded in random sequences.

  • gambler's fallacy
  • waiting time
  • neural network
  • temporal integration
  • Bayesian inference

People are prone to search for patterns in sequences of events, even when the sequences are completely random. In a famous game of roulette at the Monte Carlo casino in 1913, black repeated a record 26 times—people began extreme betting on red after about 15 repetitions (1). The gambler’s fallacy—a belief that chance is a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction—has been deemed a misperception of random sequences (2). For decades, this fallacy has been attributed to the “representativeness bias,” in which a sequence of events generated by a random process is expected to represent the essential characteristics of that process even when the sequence is short (3).

However, there is a surprising amount of systematic structure lurking within random sequences. For example, in the classic case of tossing a fair coin, where the probability of each outcome (heads or tails) is exactly 0.5 on every single trial, one would naturally assume that there is no possibility for some kind of interesting structure to emerge, given such a simple and desolate form of randomness. And yet, if one records the average amount of time for a pattern to first occur in a sequence (i.e., the waiting time statistic), it is significantly longer for a repetition (head–head HH or tail–tail TT, six tosses) than for an alternation (HT or TH, four tosses). This is despite the fact that on average, repetitions and alternations are equally probable (occurring once in every four tosses, i.e., the same mean time statistic). For both of these facts to be true, it must be that repetitions are more bunched together over time—they come in bursts, with greater spacing between, compared with alternations. Intuitively, this difference comes from the fact that repetitions can build upon each other (e.g., sequence HHH contains two instances of HH), whereas alternations cannot. Statistically, the mean time and waiting time delineate the mean and variance in the distribution of the interarrival times of patterns, respectively (4). Despite the same frequency of occurrence (i.e., the same mean), alternations are more evenly distributed over time than repetitions (i.e., different variances). Another source of insight comes from the transition graph (Fig. 1A), which reveals a structural asymmetry in the process of fair coin tossing. For example, when the process has the same chance to visit any of the states, the minimum number of transitions it takes to leave and then revisit a repetition state is longer than that for an alternation state. Let pA denote the probability of alternation between any two consecutive trials; despite the same mean time at pA=1/2, repetitions will have longer waiting times than alternations as long as pA>1/3 (Fig. 1B). (See SI Text for the calculation of mean time and waiting time statistics.)
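
These two statistics are easy to check by simulation. The short sketch below (our illustration, not code from the paper) estimates both for the repetition HH and the alternation HT under fair, independent coin tossing; the helper names are ours.

```python
import random

def waiting_time(pattern, trials=100_000):
    """Average number of tosses until the pattern first occurs."""
    total = 0
    for _ in range(trials):
        seq = ""
        while pattern not in seq:
            seq += random.choice("HT")
        total += len(seq)
    return total / trials

def mean_time(pattern, n_tosses=1_000_000):
    """Average spacing between successive (overlapping) occurrences of the pattern."""
    seq = "".join(random.choice("HT") for _ in range(n_tosses))
    count = sum(seq[i:i + 2] == pattern for i in range(n_tosses - 1))
    return n_tosses / count

for p in ("HH", "HT"):
    print(p, round(mean_time(p), 2), round(waiting_time(p), 2))
# Both patterns recur about once every 4 tosses (same mean time), but HH takes
# about 6 tosses to appear for the first time, versus about 4 for HT.
```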

Fig. 1.

Time of patterns described by the probability of alternation between consecutive trials (pA). (A) Transitions between patterns of length 2. At pA = 1/2, the process has the same chance to visit either a repetition state (HH or TT) or an alternation state (HT or TH). However, it takes a minimum of three transitions for the process to leave and then revisit a repetition state (e.g., HH → HT → TH → HH), but only two for an alternation state (e.g., HT → TH → HT). (B) Equilibria by pA values. A repetition (R) and an alternation (A) have the same mean time E[TR] = E[TA] at pA = 1/2, the same waiting time E[TR∗] = E[TA∗] at pA = 1/3, and the same sum E[TR] + E[TR∗] = E[TA] + E[TA∗] at pA = 3/7.
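
The three equilibrium points in Fig. 1B can also be checked against closed-form expressions for the pattern time statistics as functions of pA. The formulas below come from a first-step analysis of the transition graph in Fig. 1A and are our own reconstruction (the authors' derivation is in SI Text), but they reproduce the crossover points stated in the caption.

```python
def mean_time_rep(pA):      # E[T_R]: mean inter-arrival time of a repetition (e.g., HH)
    return 2 / (1 - pA)

def mean_time_alt(pA):      # E[T_A]: mean inter-arrival time of an alternation (e.g., HT)
    return 2 / pA

def waiting_time_rep(pA):   # E[T_R*]: expected tosses until a repetition first occurs
    return 1 + 2 / (1 - pA) + 1 / (2 * pA)

def waiting_time_alt(pA):   # E[T_A*]: expected tosses until an alternation first occurs
    return 1 + 3 / (2 * pA)

for pA in (1/2, 1/3, 3/7):
    TR, TA = mean_time_rep(pA), mean_time_alt(pA)
    WR, WA = waiting_time_rep(pA), waiting_time_alt(pA)
    print(f"pA={pA:.3f}  mean {TR:.2f} vs {TA:.2f}  "
          f"waiting {WR:.2f} vs {WA:.2f}  sum {TR + WR:.2f} vs {TA + WA:.2f}")
# pA=1/2: equal mean times (4 vs 4); pA=1/3: equal waiting times (5.5 vs 5.5);
# pA=3/7: equal sums (about 9.17 vs 9.17).
```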

Is this latent structure of waiting time just a strange mathematical curiosity or could it possibly have deep implications for our cognitive-level perceptions of randomness? It has been speculated that the systematic bias in human randomness perception such as the gambler’s fallacy might be due to the greater variance in the interarrival times or the “delayed” waiting time for repetition patterns (4, 5). Here, we show that a neural model based on a detailed biological understanding of the way the neocortex integrates information over time when processing sequences of events (6, 7) is naturally sensitive to both the mean time and waiting time statistics. Indeed, its behavior is explained by a simple averaging of the influences of both of these statistics, and this behavior emerges in the model over a wide range of parameters. Furthermore, this averaging dynamic directly produces the best-fitting bias-gain parameter for an existing Bayesian model of randomness judgments (8), which was previously an unexplained free parameter and obtained only through parameter fitting. We also show that we can extend this Bayesian model to better fit the full range of human data by including a higher-order pattern statistic, and the neurally derived bias-gain parameter still provides the best fit to the human data in the augmented model. Overall, our model provides a neural grounding for the pervasive gambler’s fallacy bias in human judgments of random processes, where people systematically discount repetitions and emphasize alternations (9, 10).

Neural Model of Temporal Integration

Our neural model is extremely simple (Fig. 2A). It consists of a sensory input layer that scans nonoverlapping binary digits of H vs. T and an internal prediction layer that attempts to predict the next input, while the prior inputs in the sequence are encoded in the temporal context. This model is based on a biologically motivated computational framework that has been used to explain the neural basis of cognition in a wide range of different domains (6), with the benefit of integrating prior temporal context information according to the properties of the deep neocortical neurons (layers 5b and 6) (7).

Fig. 2.

Neural model of temporal integration to capture the statistics of pattern times in random sequences. (A) Architecture of the neural model. A single sensory input layer scans through a sequence of binary digits one digit at a time (input at t−1 is for illustration only). An internal prediction layer, with bidirectional connections from the input layer and its own temporal context representation, attempts to predict the next input. (B) Neural model behavior depicted by the ratio between repetition and alternation detectors in response to the actual probability of alternation (pA) in the input sequences. At pA=1/2, the model showed R/A≈0.70 (i.e., fewer repetition detectors than alternation detectors). Error bars (± SEM) represent the variability of model predictions. The dotted line is the squared total time ratio between alternation and repetition patterns (Eq. 2).

Our main hypothesis is that the predictive learning and temporal integration properties of this model, which reflect corresponding features of the neocortex, will produce representations that incorporate both the waiting time and mean time statistics of the input sequences (despite the inability of the model to accurately predict the next input in these random sequences). In other words, we predict a systematic interaction between these basic learning mechanisms and the surprisingly rich statistical structure of the input. This is a principled prediction based on the well-established sensitivity of these kinds of neural learning mechanisms to the statistical structure of inputs (e.g., ref. 11), and extensive parameter exploration demonstrates that our results hold across a wide range of parameters and that the model’s behavior is systematically affected by certain parameters in sensible ways (SI Text). Thus, despite the emergent nature of learning in our model, it nevertheless reflects systematic behavior and is not the result of a random parameter-fitting exercise. Moreover, we show below that the model’s behavior can be largely explained by a simple equation as a function of the mean time and waiting time statistics, further establishing the systematicity of the model’s behavior and also establishing a direct connection to more abstract Bayesian models as elaborated subsequently.

The model was trained with binary sequences generated at various levels of the probability of alternation (pA), each sequence consisting of 10,000 coin tosses (although learning occurred quickly, within a few hundred trials). Crucially, learning was concerned only with reconstructing the input sequence, not with discriminating patterns: no teaching signals were provided regarding the underlying pA values or the pattern time statistics. After training, the model was tested with a sequence of 1,000 tosses generated at the same pA level as the training sequence. We then decoded these sequence representations through a reverse correlation technique. Based on the sensitivity of the unit activations to temporal patterns of length 2, we classified each unit on the internal prediction layer as either a repetition detector (R, sensitive to HH or TT) or an alternation detector (A, sensitive to HT or TH) and counted each type. (See SI Text for the method of detector classification.)
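
The full model is built in the biologically based framework of refs. 6 and 7 and cannot be reproduced in a few lines, but the input side is simple. A minimal sketch of generating training and test sequences at a specified probability of alternation (a hypothetical helper, not the authors' code) looks like this:

```python
import random

def generate_sequence(pA, n_tosses):
    """Binary H/T sequence in which each toss alternates from the previous one
    with probability pA; pA = 0.5 reduces to independent fair-coin tossing."""
    seq = [random.choice("HT")]
    for _ in range(n_tosses - 1):
        if random.random() < pA:
            seq.append("T" if seq[-1] == "H" else "H")   # alternate
        else:
            seq.append(seq[-1])                          # repeat
    return "".join(seq)

train = generate_sequence(0.5, 10_000)   # training length used in the paper
test = generate_sequence(0.5, 1_000)     # test length used in the paper
```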

Most intriguingly, at pA = 1/2 (i.e., independent tosses of a fair coin), the model produced a ratio of R/A ≈ 0.70—despite the equal frequency of pattern occurrences, repetition detectors were significantly less common than alternation detectors. This alternation bias is in the same direction as the representativeness bias, where people perceive alternation patterns as more representative of a random process than repetition patterns (2, 3). Effectively, this result demonstrates the gambler’s fallacy emerging naturally as a consequence of the alternation bias, which in turn reflects the model’s sensitivity to the waiting time advantage of alternations over repetitions.

We then used the R/A ratio to compute the subjective probability of alternation, pA′, as the model’s internal representation of its actually experienced pA. With R/A ≈ 0.70, we have

\[
p_A' = \frac{A}{R+A} = \frac{1}{1+R/A} \approx 0.59 . \tag{1}
\]

This pA′ value is consistent with the empirical findings on subjective randomness. From a comprehensive review of the studies on random sequence perception and generation, it was found that the subjective probability of alternation was around 0.58∼0.63 (9).

To further characterize the nature of the alternation bias, we systematically varied the probability of alternation (pA) in generating the training sequence (i.e., departures from tossing a fair coin independently) and then measured the effects on the R/A ratio. We found a smooth curve, where the R/A ratio increased (more repetition detectors) as pA decreased (less frequent occurrences of alternations). At pA=3/7, the model reached an equilibrium point with equal numbers of repetition and alternation detectors, R/A=1 (Fig. 2B). That is, alternations have to be this much less frequent (i.e., greater mean time) to cancel out their waiting time advantage. This corresponds exactly to the equilibrium point where repetitions and alternations have the same sum of mean and waiting times (Fig. 1B).

Overall, the model’s behavior can be mostly replicated by a simple equation that averages the effects of the mean time and waiting time statistics (dotted line in Fig. 2B),

\[
\frac{R}{A} \approx \left( \frac{E[T_A] + E[T_A^*]}{E[T_R] + E[T_R^*]} \right)^{2} , \tag{2}
\]

where E[T] is the mean time, E[T∗] is the waiting time, and subscripts R and A represent repetitions and alternations, respectively. This establishes a clear higher-level explanation for the emergent behavior of the model, allowing us to summarize its behavior as simply averaging the effects of these two relevant statistics over the random sequences.
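
Combining Eq. 2 with Eq. 1 gives a parameter-free prediction of the model's bias. The sketch below evaluates Eq. 2 using the same closed-form time statistics as above (again our reconstruction) and converts the predicted R/A ratio into a subjective probability of alternation.

```python
def predicted_RA(pA):
    """Eq. 2: squared ratio of summed time statistics (alternation over repetition)."""
    TR, TA = 2 / (1 - pA), 2 / pA                 # mean times
    WR = 1 + 2 / (1 - pA) + 1 / (2 * pA)          # waiting time, repetition
    WA = 1 + 3 / (2 * pA)                         # waiting time, alternation
    return ((TA + WA) / (TR + WR)) ** 2

for pA in (3 / 7, 1 / 2):
    RA = predicted_RA(pA)
    print(f"pA={pA:.3f}  R/A={RA:.2f}  pA'={1 / (1 + RA):.2f}")   # Eq. 1 for pA'
# pA=3/7 gives R/A=1.00, the equilibrium in Fig. 2B; pA=1/2 gives R/A=0.64,
# in the neighborhood of the simulated ratio of about 0.70, with pA' about 0.61.
```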

Bayesian Models of Random Sequence Production

A unifying perspective on human statistical learning requires bridging the gap between the implicit learning without instruction and the generalization of the learned patterns under structured and expressive rules (12). On the one hand, our neural model shows that through mere exposure to a set of input stimuli, a systematic bias was developed toward the alternation pattern in random sequences. On the other hand, recent Bayesian accounts for probabilistic learning suggest that the human mind performs rational inferences at both neural and behavioral levels (13, 14). Thus, we asked whether it was possible to relate the emergent behavior of the neural model to an existing Bayesian model of randomness judgments, specifically whether we could demonstrate a quantitative connection between the bias for local patterns at the neural level and the behavior of generating longer random sequences governed by the rules of Bayesian inference.

Let f(H,T) denote the degree of belief that a sequence of coin tosses consisting of H heads and T tails is generated by a fair coin, where the probability of heads in any single toss is p = 1/2. By Bayes’ theorem, assuming a uniform prior distribution p ∈ [0,1], f(H,T) can be formulated as the posterior probability density,

\[
f(H,T) = \frac{2^{-(H+T)}}{\int_0^1 p^{H} (1-p)^{T} \, dp} = 2^{-(H+T)} (H+T+1) \binom{H+T}{H} . \tag{3}
\]
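
Eq. 3 is straightforward to evaluate numerically; for a fixed sequence length it peaks when heads and tails are balanced. A quick check (the function name is ours):

```python
from math import comb

def f(H, T):
    """Eq. 3: posterior density at p = 1/2 for a sequence with H heads and T tails."""
    return 2 ** -(H + T) * (H + T + 1) * comb(H + T, H)

print([round(f(H, 5 - H), 3) for H in range(6)])
# For 5-toss sequences: 0.188 at H=0 or 5, rising to 1.875 at H=2 or 3.
```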

Because of the binomial coefficient in Eq. 3, it is maximized when H = T. That is, governed by the belief function f(H,T), the optimal solution to generating a random sequence is to always seek a balance between the numbers of heads and tails (10). Based on this belief function, Griffiths and Tenenbaum (8) proposed a Bayesian model of random sequence production. They first defined a likelihood function, Lk, to represent the local representativeness, i.e., the degree to which choosing a head instead of a tail as the outcome of the kth toss would result in a more random sequence,

\[
L_k = \sum_{i=1}^{k-1} \left[ \log f(H_i+1, T_i) - \log f(H_i, T_i+1) \right] = \log \prod_{i=1}^{k-1} \frac{T_i+1}{H_i+1}, \qquad k \ge 2 , \tag{4}
\]

where Hi and Ti were, respectively, the numbers of heads and tails counting back i steps in the sequence. Then, with a free parameter (λ) to scale the contribution of Lk, the probability of choosing a head at each response (Rk) was obtained by a logistic function:

\[
P(R_k = H) = \frac{1}{1 + e^{-\lambda L_k}} . \tag{5}
\]
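
A minimal sketch of Eqs. 4 and 5, assuming the first response is chosen with probability 1/2 (as implied when sequences are collapsed over the first choice). It uses the simplified product form of Eq. 4, which follows from Eq. 3 because log f(Hi+1, Ti) − log f(Hi, Ti+1) = log[(Ti+1)/(Hi+1)]; all helper names are ours.

```python
import math

def L(prev):
    """Eq. 4: local representativeness of choosing H after the responses in `prev`."""
    total = 0.0
    for i in range(1, len(prev) + 1):          # count back i = 1 .. k-1 steps
        window = prev[-i:]
        total += math.log((window.count("T") + 1) / (window.count("H") + 1))
    return total

def p_head(prev, lam):
    """Eq. 5: probability of responding H at the next position."""
    return 1.0 / (1.0 + math.exp(-lam * L(prev)))

def sequence_probability(seq, lam):
    """Probability of producing a whole sequence, first symbol at probability 1/2."""
    p = 0.5
    for k in range(1, len(seq)):
        ph = p_head(seq[:k], lam)
        p *= ph if seq[k] == "H" else 1.0 - ph
    return p

for s in ("HTHTH", "HHTHT", "HHHHH"):
    print(s, round(sequence_probability(s, lam=0.60), 4))
```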

This Bayesian model was then used to fit a massive database from the “Zenith radio experiment,” where 20,099 participants attempted to produce five random binary symbols one at a time (15). It was found that a λ value of around 0.60 produced the optimal fit to 15 of the 16 data points from the human data (Fig. 3A).

Fig. 3.

Bayesian models fitting to human data in random sequence production. (A) Probabilities of the generated random sequences, collapsed over the first choice (e.g., HHHHH is combined with TTTTT). Human data represent the responses of 20,099 participants (15). In the model by Griffiths and Tenenbaum (G&T 2001; ref. 8) (Eq. 5), the bias-gain parameter λ≈0.60 was obtained by best fitting the model to 15 of 16 human data points (excluding “HTHTH”). In our augmented model (Eq. 7), λ≈0.51 can be derived from the emergent behavior of the neural model. (B) Best-fitting λ values for the model by Griffiths and Tenenbaum (8) and the augmented model, with either the partial or the full dataset. In both datasets, the optimal λ value for the augmented model remained the same at 0.51 as predicted by the neural model.

The data point that Griffiths and Tenenbaum (8) did not predict well was the sequence HTHTH, which people judged not to be a very good random sequence but which Eq. 5 ranked highly. It seems that in generating random sequences, people were seeking a balance not only between heads and tails but also between higher-order pattern events (e.g., alternation itself is repeated four times in HTHTH). We can add this mechanism to the Bayesian model with an additional term, Mk,

\[
M_k = \log \frac{O_T + 1}{O_H + 1}, \qquad k \ge 3 , \tag{6}
\]

where Mk performs a similar function to that of Lk, except that it is based on the numbers of the second-order pattern events, OH and OT (either alternation or repetition, depending on the choice at Rk−1).

Applying the same scaling factor λ to both Lk and Mk, Eq. 5 becomes

\[
P(R_k = H) = \frac{1}{1 + e^{-\lambda (L_k + M_k)}} . \tag{7}
\]

This augmented model now produces an excellent fit to the full set of sequence data points (Pearson’s R² ≈ 0.86), with λ ≈ 0.51 as the optimal parameter value (Fig. 3A). In addition to improving the prediction on the sequence HTHTH, Eq. 7 also consistently makes better predictions than Eq. 5 on other data points, and the same value λ ≈ 0.51 also produces the best fit to the partial dataset excluding the sequence HTHTH (Pearson’s R² ≈ 0.89) (Fig. 3B).
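
A corresponding sketch of Eqs. 6 and 7, extending the L function from the previous sketch. The way OH and OT are counted here (all second-order events produced so far, with OH the count of the event type that choosing H would extend) is our reading of Eq. 6, not a detail spelled out in the text above.

```python
from math import exp, log

def M(prev):
    """Eq. 6, under our reading of OH and OT (an assumption; see lead-in)."""
    if len(prev) < 2:                                    # Mk is defined for k >= 3
        return 0.0
    events = ["R" if a == b else "A" for a, b in zip(prev, prev[1:])]
    ev_H = "R" if prev[-1] == "H" else "A"               # event produced by choosing H
    O_H = events.count(ev_H)
    O_T = len(events) - O_H
    return log((O_T + 1) / (O_H + 1))

def p_head_augmented(prev, lam=0.51):
    """Eq. 7: the same bias gain scales both the first- and second-order terms."""
    return 1.0 / (1.0 + exp(-lam * (L(prev) + M(prev))))
```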

The Bayesian model provides a formalization of the representativeness heuristic (2, 3, 8, 14). It captures the idea that when generating random sequences, people are seeking a balance between heads and tails and between repetitions and alternations not only in the global sequence, but also in the local subsequences (10). Apparently, the extent to which this balance is adjusted is determined by the free scaling parameter λ in Eqs. 5 and 7. However, beyond parameter fitting, the Bayesian model does not have any independent basis for specifying this parameter.

In light of the neural model’s behavior (i.e., the alternation bias in Eq. 1), we predict that the scaling parameter λ originates naturally from people’s actual experiences of random sequences in the learning environment. Specifically, we can deduce from either Eq. 5 or Eq. 7 that λ actually serves as a bias-gain parameter that modulates the strength of the alternation bias:

\[
p_A' = P(R_2 = T \mid R_1 = H) = \frac{1}{1 + 2^{-\lambda}} . \tag{8}
\]

When λ = 0, Eq. 8 produces unbiased responses with the subjective probability of alternation pA′ = 1/2, corresponding to the process of independent coin tossing; higher values λ > 0 produce an increasing alternation bias with pA′ > 1/2. In other words, λ > 0 corresponds to the tendency to avoid repetitions, which applies to both the first- and second-order events (Eq. 7).

We can then show that λ can actually be derived from the behavior of the neural model. Substituting pA′ in Eq. 8 with Eq. 1, λ can be computed directly from the neural model’s R/A ratio (repetitions over alternations):

\[
\lambda = -\log_2 \frac{R}{A} . \tag{9}
\]

For independent fair coin tossing (i.e., pA = 1/2), the neural model showed R/A ≈ 0.7, resulting in λ ≈ 0.51—precisely the value that optimizes the fit to the human data for the augmented model (Fig. 3B).
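
The arithmetic linking Eqs. 1, 8, and 9 is easy to verify directly:

```python
import math

RA = 0.70                          # R/A detector ratio produced by the model at pA = 1/2
lam = -math.log2(RA)               # Eq. 9: bias gain derived from the neural model
pA_prime = 1 / (1 + 2 ** -lam)     # Eq. 8: subjective probability of alternation
print(round(lam, 3), round(pA_prime, 3))   # about 0.515 and 0.588, matching Eq. 1
```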

The implication of Eq. 9 is that the naturally emergent properties of the neural model can in effect provide an independent anchor to the previously free parameter in the Bayesian model. Specifically, it shows that the bias-gain parameter λ is anchored to the alternation bias, which has been learned by the neural model through mere exposure to random sequences of fair coin tossing. Moreover, Eq. 9 is in accord with both the subjective probability of alternation pA′ (Eq. 1) and the normative measure of pattern mean time and waiting time statistics (Eq. 2). Most significantly, the derivation of the λ value demonstrates a quantitative connection between the implicit learning without instruction and the generalization of the learned patterns under structured and expressive rules, supporting a unified perspective on these two different learning mechanisms (12). This represents a remarkable convergence across multiple levels of analysis and further bolsters the validity of our understanding of the nature and origin of the systematic preference for alternating sequences and against repeating ones.

Conclusion

We find that the latent structure in simple probabilistic sequences shapes the learning dynamics in a neural model, producing an alternative “rational” explanation for what has generally been considered a curious failure of human probabilistic understanding. Our findings demonstrate that the waiting time statistics can be captured implicitly by the learning mechanism of temporal integration, without instruction, through mere exposure to the input stimuli. This supports the claim that the human mind might have evolved an accurate sense of randomness from the learning environment but may fail to reveal it by the criterion of a particular measuring device (16). For example, the alternation bias, as a result of averaging the mean time and waiting time statistics, would be judged as “irrational” if measured against the mean time statistic alone.

In addition, our results highlight the connection between the temporally distributed predictive learning (6, 7, 11, 17) and abstract structured representations (8, 14). The remarkable fit of the parameters derived from this neural model with a Bayesian model derived from very different considerations reinforces the idea that the temporal integration mechanisms in our neural model provide a good account of human information integration over time. This ability to bridge between levels of analysis represents a rare and important development, with the potential to both ground the abstract models in underlying neural mechanisms and provide a simpler explanatory understanding of the emergent behavior of the neural models.

Acknowledgments

This work was supported by the Air Force Office of Scientific Research Grant FA9550-12-1-0457, the Office of Naval Research Grants N00014-08-1-0042 and N00014-13-1-0067, Intelligence Advanced Research Projects Activity via Department of the Interior Contract D10PC20021, and National Natural Science Foundation of China (31328013). The US Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

Footnotes

  • 1. To whom correspondence may be addressed. Email: ysun@tamhsc.edu or hwang@tamhsc.edu.
  • Author contributions: Y.S., R.C.O., and H.W. designed research; Y.S. and H.W. performed research; Y.S., R.C.O., J.W.S., and H.W. contributed new reagents/analytic tools; Y.S. and H.W. analyzed data; and Y.S., R.C.O., R.B., J.W.S., X.L., and H.W. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1422036112/-/DCSupplemental.

Freely available online through the PNAS open access option.

References

  1. Huff D (1959) How to Take a Chance (Norton, New York).
  2. Tversky A, Kahneman D (1971) Belief in the law of small numbers. Psychol Bull 76(2):105–110.
  3. Tversky A, Kahneman D (1974) Judgment under uncertainty: Heuristics and biases. Science 185(4157):1124–1131.
  4. Sun Y, Wang H (2010) Gambler’s fallacy, hot hand belief, and time of patterns. Judgm Decis Mak 5(2):124–132.
  5. Sun Y, Wang H (2010) Perception of randomness: On the time of streaks. Cognit Psychol 61(4):333–342.
  6. O’Reilly RC, Munakata Y, Frank MJ, Hazy TE (2012) Computational Cognitive Neuroscience (Wiki Book), 1st Ed. Available at grey.colorado.edu/CompCogNeuro/index.php?title=CCNBook/Main.
  7. O’Reilly RC, Wyatte D, Rohrlich J (2014) Learning through time in the thalamocortical loops. arxiv.org/abs/1407.3432.
  8. Griffiths TL, Tenenbaum JB (2001) Randomness and coincidences: Reconciling intuition and probability theory. Proceedings of the 23rd Annual Conference of the Cognitive Science Society, eds Moore JD, Stenning K (Lawrence Erlbaum, Mahwah, NJ), pp 398–403.
  9. Falk R, Konold C (1997) Making sense of randomness: Implicit encoding as a basis for judgment. Psychol Rev 104(2):301–318.
  10. Nickerson RS (2002) The production and perception of randomness. Psychol Rev 109(2):330–357.
  11. Elman JL (1990) Finding structure in time. Cogn Sci 14(2):179–211.
  12. Aslin RN, Newport EL (2012) Statistical learning: From acquiring specific items to forming general rules. Curr Dir Psychol Sci 21(3):170–176.
  13. Pouget A, Beck JM, Ma WJ, Latham PE (2013) Probabilistic brains: Knowns and unknowns. Nat Neurosci 16(9):1170–1178.
  14. Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND (2011) How to grow a mind: Statistics, structure, and abstraction. Science 331(6022):1279–1285.
  15. Goodfellow LD (1938) A psychological interpretation of the results of the Zenith radio experiments in telepathy. J Exp Psychol 23(6):601–632.
  16. Pinker S (1997) How the Mind Works (Norton, New York).
  17. Gerstner W, Sprekeler H, Deco G (2012) Theory and simulation in neuroscience. Science 338(6103):60–65.

Article Classifications

  • Biological Sciences
  • Neuroscience
  • Social Sciences
  • Psychological and Cognitive Sciences

This article has a Letter. See related content:

  • What is countable and how neurons learn - Jun 01, 2015