Research Article

Detecting affiliation in colaughter across 24 societies

Gregory A. Bryant, Daniel M. T. Fessler, Riccardo Fusaroli, Edward Clint, Lene Aarøe, Coren L. Apicella, Michael Bang Petersen, Shaneikiah T. Bickham, Alexander Bolyanatz, Brenda Chavez, Delphine De Smet, Cinthya Díaz, Jana Fančovičová, Michal Fux, Paulina Giraldo-Perez, Anning Hu, Shanmukh V. Kamble, Tatsuya Kameda, Norman P. Li, Francesca R. Luberti, Pavol Prokop, Katinka Quintelier, Brooke A. Scelza, Hyun Jung Shin, Montserrat Soler, Stefan Stieger, Wataru Toyokawa, Ellis A. van den Hende, Hugo Viciana-Asensio, Saliha Elif Yildizhan, Jose C. Yong, Tessa Yuditha, and Yi Zhou
  a. Department of Communication Studies, University of California, Los Angeles, CA 90095;
  b. UCLA Center for Behavior, Evolution, and Culture, University of California, Los Angeles, CA 90095;
  c. Department of Anthropology, University of California, Los Angeles, CA 90095;
  d. Interacting Minds Center, Aarhus University, 8000 Aarhus C, Denmark;
  e. Center for Semiotics, Aarhus University, 8000 Aarhus C, Denmark;
  f. Department of Political Science and Government, Aarhus University, 8000 Aarhus C, Denmark;
  g. Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104;
  h. Department of Political Science and Government, Aarhus University, 8000 Aarhus C, Denmark;
  i. Independent Scholar;
  j. Department of Anthropology, College of DuPage, Glen Ellyn, IL 60137;
  k. Department of Psychology, Pontificia Universidad Catolica del Peru, San Miguel Lima, Lima 32, Peru;
  l. Department of Interdisciplinary Study of Law, Private Law and Business Law, Ghent University, 9000 Ghent, Belgium;
  m. Department of Biology, University of Trnava, 918 43 Trnava, Slovakia;
  n. Department of Biblical and Ancient Studies, University of South Africa, Pretoria 0002, South Africa;
  o. Department of Biology, University of Auckland, Auckland 1142, New Zealand;
  p. Department of Sociology, Fudan University, Shanghai 200433, China;
  q. Department of Psychology, Karnatak University Dharwad, Karnataka 580003, India;
  r. Department of Social Psychology, University of Tokyo, 7 Chome-3-1 Hongo, Tokyo, Japan;
  s. Department of Psychology, Singapore Management University, 188065 Singapore;
  t. Institute of Zoology, Slovak Academy of Sciences, 845 06 Bratislava, Slovakia;
  u. International Strategy & Marketing Section, University of Amsterdam, 1012 Amsterdam, The Netherlands;
  v. Department of Psychology, Pusan National University, Pusan 609-735, Korea;
  w. Department of Anthropology, Montclair State University, Montclair, NJ 07043;
  x. Department of Psychology, University of Konstanz, 78464 Konstanz, Germany;
  y. Department of Psychology, University of Vienna, 1010 Vienna, Austria;
  z. Department of Behavioral Science, Hokkaido University, Sapporo, Hokkaido Prefecture, 5 Chome-8 Kita Jonshi, Japan;
  aa. Department of Product Innovation and Management, Delft University of Technology, 2628 Delft, The Netherlands;
  bb. Department of Philosophy, Université Paris 1 Panthéon-Sorbonne, 75005 Paris, France;
  cc. Department of Molecular Biology and Genetics, Uludag University, Bursa 16059, Turkey;
  dd. Jakarta Field Station, Max Planck Institute for Evolutionary Anthropology, Jakarta 12930, Indonesia


PNAS April 26, 2016 113 (17) 4682-4687; first published April 11, 2016; https://doi.org/10.1073/pnas.1524993113
Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved March 9, 2016 (received for review December 18, 2015)


Significance

Human cooperation requires reliable communication about social intentions and alliances. Although laughter is a phylogenetically conserved vocalization linked to affiliative behavior in nonhuman primates, its functions in modern humans are not well understood. We show that judges all around the world, hearing only brief instances of colaughter produced by pairs of American English speakers in real conversations, are able to reliably identify friends and strangers. Participants’ judgments of friendship status were linked to acoustic features of laughs known to be associated with spontaneous production and high arousal. These findings strongly suggest that colaughter is universally perceivable as a reliable indicator of relationship quality, and contribute to our understanding of how nonverbal communicative behavior might have facilitated the evolution of cooperation.

Abstract

Laughter is a nonverbal vocal expression that often communicates positive affect and cooperative intent in humans. Temporally coincident laughter occurring within groups is a potentially rich cue of affiliation to overhearers. We examined listeners’ judgments of affiliation based on brief, decontextualized instances of colaughter between either established friends or recently acquainted strangers. In a sample of 966 participants from 24 societies, people reliably distinguished friends from strangers with an accuracy of 53–67%. Acoustic analyses of the individual laughter segments revealed that, across cultures, listeners’ judgments were consistently predicted by voicing dynamics, suggesting perceptual sensitivity to emotionally triggered spontaneous production. Colaughter affords rapid and accurate appraisals of affiliation that transcend cultural and linguistic boundaries, and may constitute a universal means of signaling cooperative relationships.

  • laughter
  • cooperation
  • cross-cultural
  • signaling
  • vocalization

Humans exhibit extensive cooperation between unrelated individuals, managed behaviorally by a suite of elaborate communication systems. Social coordination relies heavily on language, but nonverbal behaviors also play a crucial role in forming and maintaining cooperative relationships (1). Laughter is a common nonverbal vocalization that universally manifests across a broad range of contexts, and is often associated with prosocial intent and positive emotions (2–5), although it can also be used in a threatening or aggressive manner (2). That laughter is inherently social is evident in the fact that people are up to 30 times more likely to laugh in social contexts than when alone (6). Despite the ubiquity and similarity of laughter across all cultures, its communicative functions remain largely unknown. Colaughter is simultaneous laughter between individuals in social interactions, and occurs with varying frequency as a function of the sex and relationship composition of the group: friends laugh together more than strangers, and groups of female friends tend to laugh more than groups of male friends or mixed-sex groups (7, 8). Colaughter can indicate interest in mating contexts (9), especially if it is synchronized (10), and is a potent stimulus for further laughter (i.e., it is contagious) (11). Researchers have focused on laughter within groups, but colaughter potentially provides rich social information to those outside of the group. Against this backdrop, we examined (i) whether listeners around the world can determine the degree of social closeness and familiarity between pairs of people solely on the basis of very brief decontextualized recordings of colaughter, and (ii) which acoustic features in the laughs might influence such judgments.

Laughter is characterized by neuromechanical oscillations involving rhythmic laryngeal and superlaryngeal activity (12, 13). It often features a series of bursts or calls, collectively referred to as bouts. Laugh acoustics vary dramatically both between and within speakers across bouts (14), but laughter appears to follow a variety of production rules (15). Comparative acoustic analyses examining play vocalizations across several primate species suggest that human laughter is derived from a homolog dating back at least 20 Mya (16, 17). Humans evolved species-specific sound features in laughs involving higher proportions of periodic components (i.e., increasingly voiced), and a predominantly egressive airflow. This pattern is different from laugh-like vocalizations of our closest nonhuman relative, Pan troglodytes, which incorporate alternating airflow and mostly noisy, aperiodic structure (2, 16). In humans, relatively greater voicing in laughs is judged to be more emotionally positive than unvoiced laughs (18), as is greater variability in pitch and loudness (19). People produce different perceivable laugh types [e.g., spontaneous (or Duchenne) versus volitional (or non-Duchenne)] that correspond to different communicative functions and underlying vocal production systems (3, 20–22), with spontaneous laughter produced by an emotional vocal system shared by many mammals (23, 24). Recent evidence suggests that spontaneous laughter is often associated with relatively greater arousal in production (e.g., increased pitch and loudness) than volitional laughter, and contains relatively more features in common with nonhuman animal vocalizations (20) (Audios S1–S6). These acoustic differences might be important for making social judgments if the presence of spontaneous (i.e., genuine) laughter predicts cooperative social affiliation, but the presence of volitional laughter does not.

All perceptual studies to date have examined individual laughs, but laughter typically occurs in social groups, often with multiple simultaneous laughers. Both because social dynamics can change rapidly and because newcomers will often need to quickly assess the membership and boundaries of coalitions, listeners frequently must make rapid judgments about the affiliative status obtaining within small groups of interacting individuals; laughter may provide an efficient and reliable cue of affiliation. If so, we should expect humans to exhibit perceptual adaptations sensitive to colaughter dynamics between speakers. However, to date no study has examined listeners’ judgments of the degree of affiliation between laughers engaged in spontaneous social interactions.

We conducted a cross-cultural study across 24 societies (Fig. 1) examining listeners’ judgments of colaughter produced by American English-speaking dyads composed either of friends or newly acquainted strangers, with listeners hearing only extremely brief decontextualized recordings of colaughter. This “thin slice” approach is useful because listeners receive no extraneous information that could inform their judgments, and success with such limited information indicates particular sensitivity to the stimulus (25). A broadly cross-cultural sample is important if we are to demonstrate the independence of this perceptual sensitivity from the influences of language and culture (26). Although cultural factors likely shape pragmatic considerations driving human laughter behavior, we expect that many aspects of this phylogenetically ancient behavior will transcend cultural differences between disparate societies.

Fig. 1. Map of the 24 study site locations.

Results

Judgment Task.

We used a model comparison approach in which variables were entered into generalized linear mixed models and effects on model fit were measured using the Akaike Information Criterion (for all model comparisons, see SI Appendix, Tables S1 and S2). This approach allows researchers to assess which combination of variables best fits the pattern of data without comparison against a null model. The data were modeled using the glmer procedure of the lme4 package (27) in the statistical platform R (v3.1.1) (28). Our dependent measures consisted of two questions: one forced-choice item and one rating scale. For question 1 (Do you think these people laughing were friends or strangers?), data were modeled using a binomial (logistic) link function, with judgment accuracy (hit rate) as a binary outcome (1 = correct; 0 = incorrect). For question 2 (How much do you think these people liked each other?), we used a Gaussian link function with rating response (1–7) as a continuous outcome.
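
As a rough guide to this setup, the sketch below specifies comparable models in lme4; the judgments data frame and its column names are hypothetical stand-ins for the deposited response data, not the authors' actual analysis scripts.

```r
library(lme4)

# Hypothetical data frame: one row per trial, with columns
#   hit (1 = correct, 0 = incorrect), rating (1-7), sex,
#   familiarity, dyad_type, participant, and site.
judgments <- read.csv("judgments.csv")

# Question 1: binary accuracy modeled with a binomial (logistic) link,
# random intercepts for participant and study site.
m_q1 <- glmer(hit ~ sex + familiarity * dyad_type +
                (1 | participant) + (1 | site),
              data = judgments, family = binomial)

# Question 2: the 1-7 liking rating treated as continuous (Gaussian).
m_q2 <- lmer(rating ~ sex + familiarity * dyad_type +
               (1 | participant) + (1 | site),
             data = judgments)

# Model comparison: drop the interaction and rank candidates by AIC.
m_q1_reduced <- update(m_q1, . ~ . - familiarity:dyad_type)
AIC(m_q1_reduced, m_q1)  # lower AIC indicates better fit
```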

Across all participants, the overall rate of correct judgments in the forced-choice measure (friends or strangers) was 61% (SD = 0.49), a performance significantly better than chance (z = 40.5, P < 0.0001) (Fig. 2 and SI Appendix, Table S3). The best-fitting model was a generalized linear mixed model fit by the Laplace approximation, with participant sex as a fixed effect, familiarity and dyad type as interacting fixed effects, participant and study site as random effects, and hit rate (percent correct) as the dependent measure (Table 1). Participants (VAR = 0.014; SD = 0.12) and study site (VAR = 0.028; SD = 0.17) accounted for very little variance in accuracy in the forced-choice measure. Familiarity interacted with dyad type, with female friends being recognized at higher rates than male friends (z = 42.96, P < 0.001), whereas male strangers were recognized at higher rates than female strangers (z = −22.57, P < 0.0001). A second significant interaction indicates that mixed-sex friends were recognized at higher rates than male friends, and mixed-sex strangers were recognized at lower rates than male strangers (z = 4.42, P < 0.001). For the second question (i.e., “How much do you think these people liked each other?”), the same model structure provided the best fit, with a similar pattern of results (SI Appendix, Fig. S1 and Table S4).
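
A simple proportion test against chance can be sketched as follows; it is illustrative only, because the reported z value comes from the mixed model, which accounts for the nesting of trials within participants and study sites.

```r
# Overall accuracy vs. chance (0.50), ignoring nesting; the paper's
# z = 40.5 is taken from the mixed model rather than from this test.
binom.test(sum(judgments$hit), nrow(judgments), p = 0.5)
```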

Fig. 2. Rates of correct judgments (hits) in each study site broken down by experimental condition (friends or strangers), and dyad type (male–male, male–female, female–female). Chance performance represented by 0.50. For example, the bottom right graph of the United States results shows that female–female friendship dyads were correctly identified 95% of the time, but female–female stranger dyads were identified less than 50% of the time. Male–male and mixed-sex friendship dyads were identified at higher rates than male–male and mixed-sex stranger dyads. This contrasts with Korea, for example, where male–male and mixed-sex friendship dyads were identified at lower rates than male–male and mixed-sex stranger dyads. In every society, female–female friendship dyads were identified at higher rates than all of the other categories.

Table 1. Best-fit model for question 1 (Do you think these people laughing were friends or strangers?)

Overall, female friends were identified at the highest rate in every society without exception, but there was also a universal tendency to judge female colaughers as friends (SI Appendix, Fig. S2). Forced-choice responses for each colaughter trial were collapsed across societies and compared across dyad types, revealing that a response bias to answer “friends” existed in judgments of female dyads (70%), F(2, 47) = 7.25, P = 0.002, but not male (46%) or mixed-sex dyads (49%), which did not differ from one another (LSD test, P = 0.73). Additionally, female participants (M = 0.62; SD = 0.49) had slightly greater accuracy than male participants (M = 0.60; SD = 0.49) overall (z = 2.31, P < 0.05).
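
In outline, this bias analysis is a one-way ANOVA over per-trial “friends” response proportions; the trials data frame below is a hypothetical per-stimulus summary, and the follow-up LSD comparison would require an add-on package, so only the omnibus test is shown.

```r
# Hypothetical per-stimulus summary: one row per colaugh trial, with
# friend_prop = proportion of "friends" responses collapsed across
# societies, and dyad_type in {female, male, mixed}.
bias_aov <- aov(friend_prop ~ dyad_type, data = trials)
summary(bias_aov)  # cf. the reported F(2, 47) = 7.25, P = 0.002
```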

Acoustic Analysis.

Acoustic features, including the frequency and temporal dynamics of voiced and unvoiced segments, were automatically extracted from the individual laugh segments and used to statistically reconstruct the rate at which participants judged each colaugh segment as a friendship dyad. We used an ElasticNet procedure (29) to select key features and a fivefold cross-validated multiple linear regression to estimate the coefficients of the selected features, repeating the process 100 times to ensure the stability of the results (Table 2). The resulting model reliably predicted participants’ judgments that colaughers were friends, adjusted R2 = 0.43 [confidence interval (CI) 0.42–0.43], P = 0.0001 (Fig. 3). Across cultures, laughs with shorter call duration and less regular pitch and intensity cycles, together with less variation in pitch cycle regularity, were more likely to be judged as occurring between friends (for complete details of the acoustic analysis, see SI Appendix).
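
The two-stage logic of this analysis might look as follows in R, using the glmnet package for the elastic-net stage; the feature matrix X, the friend_ratio outcome, and the mixing parameter alpha = 0.5 are all assumptions, not the authors' published scripts.

```r
library(glmnet)

# Stage 1: elastic-net feature selection, fivefold cross-validation.
# X: matrix of acoustic features (named columns), one row per laugh;
# friend_ratio: proportion of participants judging that laugh's
# colaugh segment to be between friends. alpha = 0.5 is an assumption.
cvfit <- cv.glmnet(X, friend_ratio, alpha = 0.5, nfolds = 5)
b <- coef(cvfit, s = "lambda.min")
selected <- setdiff(rownames(b)[as.vector(b) != 0], "(Intercept)")

# Stage 2: ordinary least squares on the selected features, yielding
# interpretable coefficients and an adjusted R^2 (cf. 0.43 in the text).
d <- data.frame(friend_ratio = friend_ratio, X[, selected, drop = FALSE])
ols <- lm(friend_ratio ~ ., data = d)
summary(ols)$adj.r.squared

# Predicted vs. actual friend ratio, as in Fig. 3.
plot(fitted(ols), d$friend_ratio,
     xlab = "Predicted friend ratio", ylab = "Actual friend ratio")
abline(0, 1, lty = 2)
```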

Table 2. Sample coefficients from one run of the fivefold cross-validated model for friend ratio across 24 societies

Fig. 3. Acoustic-based model predictions of friend ratio (x axis), defined as the overall likelihood of each single laugh being part of a colaugh segment produced between individuals identified by participants as being friends, plotted against the actual values (y axis), with 95% CI.

Discussion

Across all societies, listeners were able to distinguish pairs of colaughers who were friends from those who were strangers who had just met. Biases, presumably reflecting panhuman patterns in the occurrence of laughter in everyday life, existed in all societies sampled as well, such that participants were more likely to assume that female colaughers were friends than strangers. Male strangers were also recognized universally at significantly high rates, and participants worldwide rated the members of these dyads as liking each other the least among all pairs. Dynamic acoustic information in the laughter predicted the accuracy of judgments, strongly suggesting that participants attended closely to these sound features, likely outside of conscious awareness. The judgment pattern was remarkably similar across disparate societies, including those with essentially no familiarity with English, the language of the target individuals whose laughter was evaluated. These results constitute strong preliminary evidence that colaughter provides a reliable cue with which overhearers (and, presumably, colaughers themselves) can assess the degree of affiliation between interactants. Although embedded within discourse, laughter is nonverbal in nature and presents universally interpretable features, presumably reflecting phylogenetic antiquity predating the evolution of language.

Together with auxiliary experiments on the laugh stimuli (SI Appendix), the acoustic data strongly suggest that individual laugh characteristics provided much of the information allowing our participants to accurately differentiate between friends and strangers. Laugh features predicting listeners’ friend responses included shorter call duration, associated with judgments of friendliness (18) and spontaneity (20), as well as greater pitch and loudness irregularities, associated with speaker arousal (30). Acoustic analyses comparing laughs within a given copair did not indicate any contingent dynamic relationship that could plausibly correspond to percepts of entrainment or coordination one might expect from familiar interlocutors. Indeed, our colaugh audio clips may be too short to capture shared temporal dynamics that longer recordings might reveal. A second group of United States listeners evaluated artificial colaughter pairs constructed by shuffling the individual laugh clips within dyad categories (SI Appendix). Consonant with the conclusion that our main result was driven by features of the individual laughs rather than interactions between them, these artificial copairs were judged quite similarly to the original copairs in the main study. Finally, a third group of United States listeners rated the individual laughs on the affective dimensions of arousal and valence; these judgments were positively associated with the likelihood that, in its colaughter context, a given laugh was judged in the main study as having occurred in a friendship dyad.

Inclusion in cooperative groups of allied individuals is often a key determinant of social and material success; at the same time, social relationships are dynamic, and can change over short time spans. As a consequence, in our species’ ancestral past, individuals who could accurately assess the current degree of affiliation between others stood to gain substantial fitness benefits. Closely allied individuals often constitute formidable opponents; similarly, such groups may provide substantial benefits to newcomers who are able to gain entry. Many social primates exhibit these political dynamics, along with corresponding cognitive abilities (31); by virtue of the importance of cooperation in human social and economic activities, ours is arguably the political species par excellence. However, even as language and cultural evolution have provided avenues for evolutionarily unprecedented levels of cooperation and political complexity in humans, we continue to use vocal signals of affiliation that apparently predate these innovations. As noted earlier, human laughter likely evolved from labored breathing during play of the sort exhibited by our closest living relatives, a behavior that appears to provide a detectable cue of affiliation among extant nonhuman primates. The capability for speech affords vocal mimicry in humans, and as such, the ability to generate volitional emulations of cues that ordinarily require emotional triggers. In turn, because of the importance of distinguishing cues indicative of deeply motivated affiliation from vocalizations that are not contingent on such motives, producers’ vocal mimicry of laughter will have favored the evolution of listeners’ ability to discriminate between genuine and volitional tokens. However, the emergence of such discriminative ability will not have precluded the utility of the production of volitional tokens, as these could then become normative utterances prescribed in the service of lubricating minimally cooperative interactions; that is, “polite laughter” emerges. Laughter and speech have thus coevolved into a highly interactive and flexible vocal production ensemble involving strategic manipulation and mindreading among social agents.

This finding opens up a host of evolutionary questions concerning laughter. Can affiliative laughter be simulated effectively, or is it an unfakeable signal? Hangers-on might do well to deceptively signal to overhearers that they are allied with a powerful coalition, whereas others would benefit from detecting such deception. If the signal is indeed honest, what keeps it so? Does the signal derive from the relationship itself (i.e., can unfamiliar individuals allied because of expedience signal their affiliation through laughter) or, consonant with the importance of coordination in cooperation, is intimate knowledge of the other party a prerequisite? Paralleling such issues, at the proximate level, numerous questions remain. For example, given universal biases that apparently reflect prior beliefs, future studies should both explore listeners’ accuracy in judging the sex of colaughers and examine the sources of such biases. Our finding that colaughter constitutes a panhuman cue of affiliation status is thus but the tip of the iceberg when it comes to understanding this ubiquitous yet understudied phenomenon.

Methods

All study protocols were approved by the University of California, Los Angeles Institutional Review Board. At all study sites, informed consent was obtained verbally before participation in the experiment. An informed consent form was signed by all participants providing voice recordings for laughter stimuli.

Stimuli.

All colaughter segments were extracted from conversation recordings, originally collected for a project unrelated to the current study, made in 2003 at the Fox Tree laboratory at the University of California, Santa Cruz. The recorded conversations were between pairs of American English-speaking undergraduate students who volunteered to participate in exchange for course credit for an introductory class in psychology. Two rounds of recruitment were held. In one, participants were asked to sign up with a friend whom they had known for any amount of time. In the other, participants were asked to sign up as individuals, after which they were paired with a stranger. The participants were instructed to talk about any topic of their choosing; “bad roommate experiences” was given as an example of a possible topic. The average length of the conversations from which the stimuli used in this study were extracted was 13.5 min (mean length ± SD = 809.2 s ± 151.3 s). Interlocutors were recorded on separate audio channels using clip-on lapel microphones (Sony ECM-77B) placed ∼15 cm from the mouth, and recorded to DAT (16-bit amplitude resolution, 44.1-kHz sampling rate, uncompressed wav files, Sony DTC series recorder). For more description of the conversations, see ref. 32.

Colaughter Segments.

Forty-eight colaughter segments were extracted from 24 conversations (two from each), half from conversations between established friends (mean length of acquaintance = 20.5 mo; range = 4–54 mo; mean age ± SD = 18.6 ± 0.6) and half from conversations between strangers who had just met (mean age ± SD = 19.3 ± 1.8). Colaughter was defined as the simultaneous vocal production (intensity onsets within 1 s), in two speakers, of a nonverbal, egressive, burst series (or single burst), either voiced (periodic) or unvoiced (aperiodic). Laughter is acoustically variable, but often stereotyped in form, characterized typically by an initial alerting component, a stable vowel configuration, and a decay function (2, 13, 14).
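
The 1-s onset criterion amounts to a simple predicate on the two speakers' laugh onset times; the values below are hypothetical illustrations.

```r
# Colaughter test under the paper's definition: intensity onsets of
# the two laughs fall within 1 s of one another.
is_colaughter <- function(onset_a, onset_b) abs(onset_a - onset_b) <= 1
is_colaughter(12.40, 12.73)  # TRUE: onsets 330 ms apart
```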

In the colaughter segments selected for use, no individual laugh contained verbal content or other noises of any kind. To prevent a selection bias in stimulus creation, for all conversations, only two colaughter sequences were used: namely, the first to appear in the conversation and the last to appear in the conversation. If a colaughter sequence identified using this rule contained speech or other noises, the next qualifying occurrence was chosen. The length of colaughter segments (in milliseconds) between friends (mean length ± SD = 1,146 ± 455) and strangers (mean length ± SD = 1,067 ± 266) was similar, t(46) = 0.74, P = 0.47. Laughter onset asynchrony (in milliseconds) was also similar between friends (mean length ± SD = 337 ± 299) and strangers (mean length ± SD = 290 ± 209), t(46) = −0.64, P = 0.53. Previous studies have documented that the frequency of colaughter varies as a function of the gender composition of the dyad or group (6, 7). The same was true in the source conversations used here, with female friends producing colaughter at the highest frequency, followed by mixed-sex groups, and then all-male groups. Consequently, our stimulus set had uneven absolute numbers of different dyad types, reflecting the actual occurrence frequency in the sample population. Of the 24 sampled conversations, 10 pairs were female dyads, 8 pairs were mixed-sex dyads, and 6 pairs were male dyads. For each of these sex-pair combinations, half were friends and half were strangers. Sample audio files for each type of dyad and familiarity category are presented in Audios S7–S12; these recordings are depicted visually in spectrograms presented in Fig. 4.
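
These stimulus-matching checks are two-sample t tests; a sketch follows, assuming a hypothetical segments data frame with one row per colaugh stimulus.

```r
# Columns (hypothetical names): length_ms, asynchrony_ms, and
# condition coded as friends vs. strangers.
t.test(length_ms ~ condition, data = segments, var.equal = TRUE)
# cf. t(46) = 0.74, P = 0.47
t.test(asynchrony_ms ~ condition, data = segments, var.equal = TRUE)
# cf. t(46) = -0.64, P = 0.53
```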

Fig. 4. Six sample waveforms and narrowband FFT spectrograms (35-ms Gaussian analysis window, 44.1-kHz sampling rate, 0- to 5-kHz frequency range, 100- to 600-Hz F0 range) of colaughter from each experimental condition (friends and strangers), and dyad type (male–male, male–female, female–female). For each colaugh recording, the Top and Middle show the waveforms from each of the constituent laughs, and the spectrogram collapses across channels. Blue lines represent F0 contours. The recordings depicted here exemplify stimuli that were accurately identified by participants. Averaging across all 24 societies, the accuracy (hit rate) for the depicted recordings was: female–female friendship, 85%; mixed-sex friendship, 75%; male–male friendship, 78%; female–female strangers, 67%; mixed-sex strangers, 82%; male–male strangers, 73%.
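
Spectrograms along the lines of Fig. 4 can be approximated in R with the tuneR and seewave packages; the settings below follow the caption (35-ms window at 44.1 kHz, 0- to 5-kHz display), though a Hanning window stands in for the Gaussian window and the file name is a placeholder.

```r
library(tuneR)
library(seewave)

w <- readWave("colaugh_example.wav")  # placeholder file name

# 35 ms at 44.1 kHz is ~1,544 samples; force an even window length.
wl <- 2 * round(0.035 * w@samp.rate / 2)

# Narrowband spectrogram, 0- to 5-kHz display range (flim is in kHz).
spectro(w, f = w@samp.rate, wl = wl, ovlp = 75,
        flim = c(0, 5), wn = "hanning")
```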

Design and Procedure.

The selected 48 colaughter stimuli were amplitude-normalized and presented in random order using SuperLab 4.0 experiment software (www.superlab.com). We recruited 966 participants from 24 societies across six regions of the world (Movie S1); for full demographic information about participants, see SI Appendix, Tables S5 and S6. For those study sites in which a language other than English was used in conducting the experiment, the instructions were translated beforehand by the respective investigators or by native-speaker translators recruited by them for this purpose. Customized versions of the experiment were then created for each of these study sites using the translated instructions and a run-only version of the software. For those study sites in which literacy was limited or absent, the experimenter read the instructions aloud to each participant in turn.

Before each experiment, participants were instructed that they would be listening to recordings of pairs of people laughing together in a conversation, and they would be asked questions about each recording. Participants received one practice trial and then completed the full experiment consisting of 48 trials. After each trial, listeners answered two questions. The first question was a two-alternative forced-choice asking them to identify the pair as either friends or strangers; the second question asked listeners to judge how much the pair liked one another on a scale of 1–7, with 1 being not at all, and 7 being very much. The scale was presented visually and, in study sites where the investigator judged participants’ experience with numbers or scales to be low, participants used their finger to point to the appropriate part of the scale. All participants wore headphones. For complete text of instructions and questions used in the experiment, see SI Appendix.

Acknowledgments

We thank our participants around the globe, and Brian Kim and Andy Lin of the UCLA Statistical Consulting Group.

Footnotes

  • To whom correspondence should be addressed. Email: gabryant@ucla.edu.
  • Author contributions: G.A.B. designed research; G.A.B., R.F., E.C., L.A., C.L.A., M.B.P., S.T.B., A.B., B.C., D.D.S., C.D., J.F., M.F., P.G.-P., A.H., S.V.K., T.K., N.P.L., F.R.L., P.P., K.Q., B.A.S., H.J.S., M.S., S.S., W.T., E.A.v.d.H., H.V.-A., S.E.Y., J.C.Y., T.Y., and Y.Z. performed research; G.A.B. and R.F. analyzed data; G.A.B., D.M.T.F., and R.F. wrote the paper; D.M.T.F. conceived and organized the cross-cultural component; and E.C. coordinated cross-cultural researchers.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • Data deposition: Experimental response data from all study sites and acoustic data from all laugh stimuli are available at https://escholarship.org/uc/item/99j8r0gx.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1524993113/-/DCSupplemental.

References

  1. Dale R, Fusaroli R, Duran N, Richardson D (2013) The self-organization of human interaction. Psychol Learn Motiv 59:43–95.
  2. Provine RR (2000) Laughter: A Scientific Investigation (Penguin, New York).
  3. Gervais M, Wilson DS (2005) The evolution and functions of laughter and humor: A synthetic approach. Q Rev Biol 80(4):395–430.
  4. Scott SK, Lavan N, Chen S, McGettigan C (2014) The social life of laughter. Trends Cogn Sci 18(12):618–620.
  5. Sauter DA, Eisner F, Ekman P, Scott SK (2010) Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proc Natl Acad Sci USA 107(6):2408–2412.
  6. Provine RR, Fischer KR (1989) Laughing, smiling, and talking: Relation to sleeping and social context in humans. Ethology 83(4):295–305.
  7. Smoski MJ, Bachorowski JA (2003) Antiphonal laughter between friends and strangers. Cogn Emotion 17(2):327–340.
  8. Vettin J, Todt D (2004) Laughter in conversation: Features of occurrence and acoustic structure. J Nonverbal Behav 28(2):93–115.
  9. Grammer K, Eibl-Eibesfeldt I (1990) The ritualisation of laughter. Natürlichkeit der Sprache und der Kultur, ed Koch W (Brockmeyer, Bochum, Germany), pp 192–214.
  10. Grammer K (1990) Strangers meet: Laughter and nonverbal signs of interest in opposite-sex encounters. J Nonverbal Behav 14(4):209–236.
  11. Provine RR (1992) Contagious laughter: Laughter is a sufficient stimulus for laughs and smiles. Bull Psychon Soc 30(1):1–4.
  12. Luschei ES, Ramig LO, Finnegan EM, Baker KK, Smith ME (2006) Patterns of laryngeal electromyography and the activity of the respiratory system during spontaneous laughter. J Neurophysiol 96(1):442–450.
  13. Titze IR, Finnegan EM, Laukkanen AM, Fuja M, Hoffman H (2008) Laryngeal muscle activity in giggle: A damped oscillation model. J Voice 22(6):644–648.
  14. Bachorowski JA, Smoski MJ, Owren MJ (2001) The acoustic features of human laughter. J Acoust Soc Am 110(3 Pt 1):1581–1597.
  15. Provine RR (1993) Laughter punctuates speech: Linguistic, social and gender contexts of laughter. Ethology 95(4):291–298.
  16. Davila Ross M, Owren MJ, Zimmermann E (2009) Reconstructing the evolution of laughter in great apes and humans. Curr Biol 19(13):1106–1111.
  17. van Hooff JA (1972) A comparative approach to the phylogeny of laughter and smiling. Nonverbal Communication, ed Hinde RA (Cambridge Univ Press, Cambridge, England), pp 209–241.
  18. Bachorowski JA, Owren MJ (2001) Not all laughs are alike: Voiced but not unvoiced laughter readily elicits positive affect. Psychol Sci 12(3):252–257.
  19. Kipper S, Todt D (2001) Variation of sound parameters affects the evaluation of human laughter. Behaviour 138(9):1161–1178.
  20. Bryant GA, Aktipis CA (2014) The animal nature of spontaneous human laughter. Evol Hum Behav 35(4):327–335.
  21. McGettigan C, et al. (2015) Individual differences in laughter perception reveal roles for mentalizing and sensorimotor systems in the evaluation of emotional authenticity. Cereb Cortex 25(1):246–257.
  22. Szameitat DP, et al. (2010) It is not always tickling: Distinct cerebral responses during perception of different laughter types. Neuroimage 53(4):1264–1271.
  23. Jürgens U (2002) Neural pathways underlying vocal control. Neurosci Biobehav Rev 26(2):235–258.
  24. Ackermann H, Hage SR, Ziegler W (2014) Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective. Behav Brain Sci 37(6):529–546.
  25. Ambady N, Bernieri FJ, Richeson JA (2000) Toward a histology of social behavior: Judgmental accuracy from thin slices of the behavioral stream. Adv Exp Soc Psychol 32:201–271.
  26. Henrich J, Heine SJ, Norenzayan A (2010) The weirdest people in the world? Behav Brain Sci 33(2-3):61–83, discussion 83–135.
  27. Bates D, Maechler M, Bolker B, Walker S (2014) lme4: Linear mixed-effects models using Eigen and S4. R package version 1.1-7. Available at https://cran.r-project.org/web/packages/lme4/index.html. Accessed September 9, 2014.
  28. R Core Team (2014) R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria).
  29. Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc B 67(2):301–320.
  30. Williams CE, Stevens KN (1972) Emotions and speech: Some acoustical correlates. J Acoust Soc Am 52(4B):1238–1250.
  31. Silk JB (2007) Social components of fitness in primate groups. Science 317(5843):1347–1351.
  32. Bryant GA (2010) Prosodic contrasts in ironic speech. Discourse Process 47(7):545–566.