Linguistic inferences without words
Edited by Barbara H. Partee, University of Massachusetts, Amherst, MA, and approved March 18, 2019 (received for review December 10, 2018)

Significance
Linguistic meaning encompasses a rich typology of inferences, characterized by distinct patterns of interaction with logical expressions. For example, “Robin has continued to smoke” triggers the presuppositional inference that Robin smoked before, characterized by the preservation of the inference under negation in “Robin hasn’t continued to smoke.” We show experimentally that four main inference types can be robustly replicated with iconic gestures and visual animations. These nonlinguistic objects thus display the same type of logical behavior as spoken words. Because the gestures and animations were novel to the participants, the results suggest that people may productively divide new informational content among the components of the inferential typology using general algorithms that apply to linguistic and nonlinguistic objects alike.
Abstract
Contemporary semantics has uncovered a sophisticated typology of linguistic inferences, characterized by their conversational status and their behavior in complex sentences. This typology is usually thought to be specific to language and in part lexically encoded in the meanings of words. We argue that it is neither. Using a method involving “composite” utterances that include normal words alongside novel nonlinguistic iconic representations (gestures and animations), we observe successful “one-shot learning” of linguistic meanings, with four of the main inference types (implicatures, presuppositions, supplements, homogeneity) replicated with gestures and animations. The results suggest a deeper cognitive source for the inferential typology than usually thought: Domain-general cognitive algorithms productively divide both linguistic and nonlinguistic information along familiar parts of the linguistic typology.
The investigation of meaning gave rise to two major insights in the 20th century, first in the philosophy of language and then in linguistics. One was that English and other natural languages can be modeled as logical languages with an explicit semantics (1–3). The other was that unlike standard logics, natural languages do not just convey information by way of entailments. Rather, they have a rich array of inference types, the investigation of which has led to models of increasing formal sophistication within the last 50 y. In an initial breakthrough, the philosopher Bertrand Russell (1) provided a logical analysis of the definite determiner “the,” whereby “The dog barks” is analyzed by way of a logical formula akin to There is exactly one dog, and it barks. The philosopher Peter Strawson (4) famously replied that this entirely missed the point of definite descriptions: “The dog barks” presupposes (rather than entails) that there is exactly one dog, and it entails that it barks. For this reason, “The dog doesn’t bark” preserves the presupposition and denies the entailment. But the distinction between presuppositions and entailments, a cornerstone of contemporary linguistics, is only the tip of the iceberg: Linguistic inferences are known to be a diverse bunch, which also includes implicatures (5), supplements, and homogeneity inferences.
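In standard first-order notation (with $\iota$ the definite description operator), the contrast between the two analyses can be stated as follows:

    Russell:   $\exists x\,[\mathrm{dog}(x) \land \forall y\,(\mathrm{dog}(y) \to y = x) \land \mathrm{barks}(x)]$
    Strawson:  presupposes $\exists x\,[\mathrm{dog}(x) \land \forall y\,(\mathrm{dog}(y) \to y = x)]$; asserts $\mathrm{barks}(\iota x\,\mathrm{dog}(x))$

On the second analysis, negation leaves the presupposed conjunct untouched, which is why “The dog doesn’t bark” still conveys that there is exactly one dog.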
These inferences are usually thought to be specific to language. Moreover, several are taken to be lexical in nature, i.e., encoded in the meanings of words, which then need to be learned. We argue that both assumptions are incorrect, by replicating the inferential typology with unfamiliar gestures and animations in place of words. We conclude that a considerable part of what is normally classified as linguistic and lexical meaning is neither linguistic nor lexical, but has a much deeper source: productive, domain-general cognitive algorithms.
One-Shot Learning and “Composite” Utterances
We investigate the linguistic behavior of nonlinguistic expressions such as gestures and visual animations. Their informational content is iconic and can for this reason be understood upon a single exposure. This allows us to investigate how this content is productively divided within the inferential typology. To do so, we follow refs. 6 and 7 in embedding these iconic depictions within sentences to assess telltale properties of the various inference types (8–10). Clark (ref. 6, p. 325) highlights the importance of such “composite” utterances made of words and iconic depictions, which he analyzes as “physical scenes that people stage for others to use in imagining the scenes they are depicting.” But how are such depictions semantically and grammatically integrated within sentences? A typology has been developed that depends on whether gestural depictions co-occur with, follow, or replace words (7, 11–13). Focusing on the latter two cases, it has been argued that gestural content is divided among familiar slots of the inferential typology (7).
More concretely, we investigate sentences such as “John will turn-wheel,” where turn-wheel is a silent gesture representing the turning of a steering wheel. The gesture fully replaces a part of speech and is for this reason called a “pro-speech gesture” (7) (referred to by ref. 6 as an “embedded depiction”). We show that the resulting meaning is complex: “John will turn-wheel” presupposes that there is exactly one salient wheel and entails that John will turn it. For this reason, “John won’t turn-wheel”—just like the sentence containing the presuppositional word “the” (“John won’t turn the wheel”)—preserves the presupposition and denies the entailment.
One might wonder whether turn-wheel triggers this presupposition because it is mentally translated into the words “turn the wheel.” This is unlikely because the gesture conveys fine-grained iconic information that is absent from the corresponding words. For instance, if the gesture represents a small or a large wheel, one will get different and potentially gradient information about the size of the denoted object. Our experimental results reveal that some of these iconic implications are indeed understood by our participants.
The sophisticated linguistic behavior of pro-speech gestures is in itself interesting and might suggest that human language is even more multimodal than standardly thought (14, 15); this dovetails with recent studies of primate communication, as apes are now known to exchange information not just with calls, but also with a rich inventory of gestures, some of which can be silent (16). The conclusion, then, might be that iconic gestures should be treated as full-fledged (if nonstandard) words that speakers might even have quite a bit of experience with. In fact, in the spirit of ref. 6, it may be that entirely nonlinguistic objects with the same informational content can be treated in the very same way. We show that this is indeed the case: All our conclusions are replicated with novel pro-speech visual animations, embedded within written sentences.
The Experimental Method: Inferential Judgments
A total of 103 Amazon Mechanical Turk workers participated in the gesture experiment and another group of 99 workers participated in the animation experiment. Informed consent was obtained from all participants. [Ethical approval for this study was obtained from the CERES (“Comité d’évaluation éthique des projets de recherche en santé non soumis à CPP”) under approval number 2013/46.] Participants were asked to watch videos and to judge how strongly the videos led them to draw the inferences that appeared in text below the videos, by using a continuous slider scale that was mapped linearly to a dependent measure ranging from 0 to 100% (17, 18).
All participants saw all items in their respective modality (gesture/animation), allowing us to assess the presence of four main inference types: implicatures, presuppositions, supplements, and so-called “homogeneity inferences.” Every participant saw all trial types, including targets and controls; there were 72 trials in total in the gesture experiment and 48 trials in the animation experiment.
The results are summarized in Fig. 1; all of the inferential phenomena were replicated with both gestures and animations. In the following sections, we review each phenomenon and present the associated results in more detail. We report the results of comparisons of linear regression models (using R version 3.4.3, ref. 19) with and without the factor of interest, following recommendations in ref. 20. The experimental materials, instructions to participants, data, and R scripts for the analyses are available at https://osf.io/q9zyf (21).
Fig. 1. Mean endorsement across all phenomena and conditions. Error bars represent standard error of the mean across participants; dots represent individual participants.
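As a rough sketch of this analysis strategy (the actual data and scripts are in the OSF repository cited above; the file and column names below are hypothetical stand-ins), the comparison amounts to fitting nested linear models in R and testing whether the factor of interest improves fit:

    # Hypothetical illustration of the model-comparison logic; see the
    # OSF repository (21) for the actual data and analysis scripts.
    d <- read.csv("implicature_data.csv")             # hypothetical file name
    reduced <- lm(endorsement ~ premise + inference, data = d)
    full    <- lm(endorsement ~ premise * inference, data = d)
    anova(reduced, full)  # does the premise-by-inference interaction improve fit?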
Scalar Implicatures
Implicatures in Words (Traditional).
Scalar implicatures typically arise when an utterance enters into competition with a more informative alternative. For example, the target sentences in 1b and 2b compete with the more informative sentences in 1c and 2c, respectively. This is presumably because the contexts in 1a and 2a make these alternatives salient [although alternatives can also be generated without a context (22)]. As a result, the alternatives are understood to be false (5), leading to the inferences in 1d and 2d.
(1) a. Context: Yesterday at the party, Mary talked to a lot of people.
b. Target sentence: Bill talked to some people.
c. Alternative: Bill talked to a lot of people.
d. Inference: Bill did not talk to a lot of people.
(2) a. Context: Yesterday at the party, Mary did not talk to anyone.
b. Target sentence: Bill did not talk to a lot of people.
c. Alternative: Bill did not talk to anyone.
d. Inference: Bill talked to a few people.
The crucial feature of these examples is that the alternatives are logically more informative than the target sentences (23), which can be the case for both positive (example 1) and negative (example 2) sentences. The negative case in example 2 makes a further theoretical point. In the positive example, the resulting meaning could be obtained by postulating that “talk” is somehow enriched along the lines of talk but not to a lot of people. In the negative case, no enrichment of “talk to a lot of people” can explain the resulting meaning, which therefore has to come from a mechanism of implicatures akin to the one we are after.
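To make the competition mechanism concrete, here is a toy sketch (ours, purely illustrative; all names are invented) that represents the relevant propositions as truth values over three outcomes and derives the inferences in examples 1d and 2d by negating the stronger alternative:

    # Toy model: propositions as truth values over three outcomes,
    # depending on how many people Bill talked to.
    outcomes <- c("none", "a few", "a lot")
    some <- c(FALSE, TRUE, TRUE)    # "Bill talked to some people"
    lot  <- c(FALSE, FALSE, TRUE)   # "Bill talked to a lot of people"

    # Positive case (example 1): assert `some`; negate the stronger `lot`.
    outcomes[some & !lot]           # -> "a few"

    # Negative case (example 2): assert `!lot`; the stronger alternative is
    # `!some` ("talked to no one"), which is negated in turn.
    outcomes[!lot & !(!some)]       # -> "a few"

In both cases the same rule, assert the target and negate the stronger alternative, isolates the "a few" outcome; in the negative case no enrichment of the weak term could do this, which is the point made above.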
Implicatures in Gestures.
To investigate the presence of scalar implicatures with gestures, we tested participants’ interpretation of gestures of different informational strength, which could therefore compete with one another. A positive example and a negative example are described in examples 3–5 and 6–8. The initial context (examples 3/6) helped to raise the salience of the relevant alternatives. Participants saw two kinds of premises crossed with two kinds of inferences. In the positive cases, the target premises contained weak gestures (e.g., turn-wheel), while control premises contained maximally informative (“strong”) gestures that could generate no further enrichment (e.g., turn-wheel-completely) (example 4). Under negation, logical strength is reversed, so the target premises contained strong gestures and the control premises contained weak gestures (example 7). Participants were asked to judge how strongly the implicature (i.e., the target inference) followed and how strongly the negation of the implicature (i.e., the baseline inference) followed (example 8).
(3) Context: John is training to be a stunt driver. Yesterday, at the first mile marker, he was taught to turn-wheel-completely.
(4) Target premise: Today, at the next mile marker, he will turn-wheel.
Control premise: Today, at the next mile marker, he will turn-wheel-completely.
(5) Target inference: John will turn the wheel, but not completely.
Baseline inference: John will turn the wheel completely.
(6) Context: John is training to be a stunt boat driver. Out by the first buoy, he decided to turn-wheel-completely, but at the second one he did not turn-wheel.
(7) Target premise: At the next buoy, he will not turn-wheel-completely.
Control premise: At the next buoy, he will not turn-wheel.
(8) Target inference: John will turn the wheel, but not completely.
Baseline inference: John will not turn the wheel at all.
We observed strong endorsement of the target inferences in response to the target premises. Quantitatively, there was an interaction between the two factors (premise/inference), indicating that the inferences were not due to a default endorsement bias for one kind of inference over another: For the target premises, the target inferences were endorsed more strongly than their respective negations, while the reverse was true for the control premises, in both the positive and negative cases.
These findings are consistent with participants computing scalar implicatures. As mentioned, in the positive cases this could alternatively be due to a stronger than expected interpretation of the “weak” gesture, namely an exact(ly this much) interpretation (e.g., “John will turn the wheel exactly this much”). But the negative examples circumvent this worry: No exact(ly this much) interpretation for turn-wheel-completely could explain participants’ behavior. For the target premise in example 7 to mean that John will turn the wheel but not completely, the positive “John will turn-wheel-completely” would have to mean that John will not turn the wheel at all or he will turn it completely, which is implausible.
Implicatures in Animations.
We constructed implicature conditions that were analogous to those in the gesture experiment, except that the videos involved a combination of written text and animations (rather than speech and gestures). Parallel to examples 3–5 and 6–8, a positive example and a negative example are given in examples 9–11 and 12–14. The “//” marks indicate changes of screen. flash-one and flash-many stand for pictures containing one flash and many flashes, as shown in Fig. 2, meant to represent different amounts of punching. In the target premise, the image appeared to “pop” onto the screen, mimicking the effect of stress that is often involved in securing an implicature-based interpretation.
(9) Context: John the alien has been training on the punching bag at the gym. // At last week’s workout, John had a lot of energy. He was able to… // flash-many.
(10) Target premise: This week, John will… // flash-one_pop.
Control premise: This week, John will… // flash-many.
(11) Target inference: This week, John will punch, but not a lot.
Baseline inference: This week, John will punch a lot.
(12) Context: Jenny the alien has been training on the punching bag at the gym. // In her first week of training, Jenny had a lot of energy. She was able to… // flash-many // but in the second week, Jenny did not… // flash-one.
(13) Target premise: This week, Jenny will not… // flash-many_pop.
Control premise: This week, Jenny will not… // flash-one.
(14) Target inference: This week, Jenny will punch, but not a lot.
Baseline inference: This week, Jenny will not punch at all.
Fig. 2. Animation stimuli representing different amounts of punching.
Consistent with the presence of scalar implicatures, including under negation, we observed greater endorsement of target inferences compared with baseline inferences and a significant interaction between inference and premise type, again in both the positive and negative cases.
Conclusion About Implicatures: Competition Among Nonwords.
Implicatures are expected to arise whenever a representation competes with a more informative one. Given the generality of this mechanism, gestures are expected to trigger implicatures, and indeed they do. More remarkably, however, we observe that implicatures are also triggered by animated representations that cannot be physically produced by human speech or gesture.
Presuppositions
Presuppositions in Words (Traditional).
As mentioned at the outset in connection with the word “the,” presuppositions are characterized by two properties: They are normally taken for granted in the conversation, and they are inherited by sentences across a variety of logical operators including negation. In example 15a, three further constructions of the form x stopped smoking, x continued smoking, and x regretted smoking trigger the presupposition that x smoked before; this presupposition is preserved under negation, as in example 15b, and in questions, as in example 15c. Under the negative quantifier “none” (example 15d), a universal positive inference is typically observed (24).
(15) a. Mary stopped / continued / regretted smoking.
→ Mary smoked before.
b. Mary did not stop / continue / regret smoking.
→ Mary smoked before.
c. Did Mary stop / continue / regret smoking?
→ Mary smoked before.
d. None of my students stopped / continued / regretted smoking.
→ Each of my students smoked before.
Unlike scalar implicatures, which are uncontroversially productive, presuppositions are often treated as an arbitrary property of certain words, although there is a widespread (but hard to formalize) intuition that presuppositions owe their special behavior to the fact that they constitute a “precondition” for the rest of the meaning of the sentence (e.g., ref. 25). Strikingly, novel gestures and animations also appear to generate presuppositions, as we show next.
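As an informal illustration of this projection behavior (a trivalent sketch of ours, not a claim about the authors' formalism), R's handling of NA happens to mimic a third truth value, presupposition failure, that survives negation:

    # "x stopped smoking": presupposes x smoked before; asserts x does not
    # smoke now. NA stands in for presupposition failure.
    stopped_smoking <- function(smoked_before, smokes_now) {
      ifelse(smoked_before, !smokes_now, NA)
    }
    stopped_smoking(TRUE, FALSE)    # TRUE: presupposition met, assertion true
    !stopped_smoking(TRUE, FALSE)   # FALSE: negation flips only the assertion
    stopped_smoking(FALSE, FALSE)   # NA: presupposition failure
    !stopped_smoking(FALSE, FALSE)  # NA: the failure projects through negation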
Presuppositions in Gestures.
To trigger presuppositions in the gestural domain, we used gestures that convey two kinds of information, one of which can intuitively be taken to be a precondition of the other: a sentence of the form x will remove-glasses turns out to presuppose that x will be wearing glasses and to assert that x will remove them; similarly, x will turn-wheel presupposes that x is in the driver’s seat and asserts that x will turn the wheel.
The presupposition condition involved three kinds of gestures. Each appeared in a question (example 17) and under the negative quantifier “none” (example 18); such environments correspond to traditional tests for presupposition.
(16) Context: During an experimental session, Valerie watches her graduate students use microscopes and says to the laboratory assistant standing next to her:
(17) Question environment
Target premise: For the next phase of the experiment, will our visiting student remove-glasses?
Target inference: Valerie’s visiting student currently has glasses on.
Baseline inference: Valerie’s visiting student does not currently have glasses on.
(18) “None” environment
Target premise: For the next phase of the experiment, none of my students will remove-glasses.
Target inference: Each of Valerie’s students currently has glasses on.
Baseline inference: Not all of Valerie’s students currently have glasses on.
Consistent with participants deriving the target presuppositions, we observed an effect of inference type, with greater endorsement of the target presuppositional inferences (p and everybody p, respectively) than of the baseline inferences (not p and not everybody p, respectively), in both the question and “none” environments.
Presuppositions in Animations.
As in the gesture experiment, we triggered presuppositions by using animations that conveyed two types of information, one of which could intuitively be taken as a precondition of the other. Each kind of animation appeared in a question (example 19) and under the term “none” (example 21).
(19) A virus has struck the alien population. // The virus needs to be diagnosed as soon as possible. If treatment is not administered, the aliens’ antennae become spotted for a whole month. // Susan is observing her secretary and says // “Will the secretary’s antenna…” // Animation: green bar is unspotted at first and then slowly becomes entirely spotted.
(20) Target inference: The secretary’s antenna is not currently spotted.
Baseline inference: The secretary’s antenna is currently spotted.
(21) A virus has struck the alien population. // The virus needs to be diagnosed as soon as possible. If treatment is not administered, the aliens’ antennae become spotted for a whole month. // Meryl is observing her secretaries and says // “None of the secretaries’ antennae will…” // Animation: green bar is unspotted at first and then slowly becomes entirely spotted.
(22) Target inference: None of the secretaries’ antennae are currently spotted.
Baseline inference: Some of the secretaries’ antennae are currently spotted.
As expected if participants derived the target presupposition, we observed an effect of inference type, with greater endorsement of the target presuppositional inferences than of the baseline inferences, in both the question and “none” environments.
Conclusion About Presuppositions: Triggering by Nonwords.
Several researchers have argued that general algorithms can predict when an inference triggered by a given word is treated as a presupposition (e.g., ref. 26), in part because across languages constructions that convey the same global information seem to trigger the same presuppositions. But it is difficult to demonstrate the productivity of such algorithms, as one cannot exclude the possibility that the data that make it possible to learn the informational content of a word also make it possible to learn which of its inferences are presuppositions (through exposure to their behavior in various linguistic environments). With plausibly unfamiliar gestures, and even more so with entirely novel animations, things are different: Our results clearly display such algorithms in action. Future research might determine the precise form of this presupposition-triggering algorithm, which should be sufficiently general to apply to words, gestures, and animations alike.
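Purely to fix ideas (this is our hypothetical sketch, not a proposal from the paper), such an algorithm would take any representation whose content decomposes into a precondition plus a change, and route the precondition to the presupposition slot regardless of the carrier:

    # Hypothetical interface for a triggering algorithm: content decomposed
    # into a precondition plus a change gets its precondition presupposed,
    # whether the carrier is a word, a gesture, or an animation.
    trigger <- function(precondition, change)
      list(presupposed = precondition, asserted = change)

    trigger("x is wearing glasses", "x removes them")          # remove-glasses
    trigger("the antenna is unspotted", "it becomes spotted")  # spotting animation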
Supplements
Supplements in Words (Traditional).
Nonrestrictive relative clauses are believed to trigger a special type of inference, called a “supplement,” characterized by two main properties. First, unlike presuppositions, supplements are informative; i.e., they are not typically taken for granted in the conversation. Second, even when embedded under logical words, they trigger the same inferences as independent, unembedded sentences (as opposed to embedded conjunctions). Thus, the supplements in the a examples below behave like the b examples and not like the c examples.
(23) a. It is unlikely that Robin lifts weights, which is harmful.
b. It is unlikely that Robin lifts weights. This is harmful.
c. It is unlikely that Robin lifts weights and that this is harmful.
(24) a. If Ann lifts weights, which will adversely affect her health, we should talk to her.
b. If Ann lifts weights, we should talk to her. This will adversely affect her health.
c. If Ann lifts weights and this adversely affects her health, we should talk to her.
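A schematic way to picture this two-tiered behavior (our sketch of the descriptive generalization, not of any particular theory): represent a meaning as an at-issue component paired with a supplement, and let embedding operators, here negation as a stand-in, act on the at-issue component only:

    # Two-tiered meanings: operators see only the at-issue tier;
    # the supplement is interpreted as if asserted independently.
    meaning <- function(at_issue, supplement = NA)
      list(at_issue = at_issue, supplement = supplement)

    negate <- function(m) meaning(!m$at_issue, m$supplement)

    # "Robin lifts weights, which is harmful", in a context where both hold:
    m <- meaning(at_issue = TRUE, supplement = TRUE)
    negate(m)   # the at-issue tier flips to FALSE; the supplement survives intact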
Supplements in Gestures.
Gestures have been argued to trigger supplemental meanings in the same way that nonrestrictive relative clauses do (12, 27). Here we provide a partial inferential argument based on gestural versions of sentences like example 24c.
The supplement condition involved two kinds of premises, one containing the target gesture and the other a control in which the gesture co-occurred with the deictic “this” in “and does so like this” (example 26). Each premise was paired with two kinds of inferences, the target supplemental inference and a weaker baseline inference (example 27). (The baseline inference did not correspond exactly to the negation of the target inference, which could be paraphrased as, It’s not the case that if June bugs a classmate today, it will involve hitting her. Instead, we opted for the statement in example 27, which is semantically similar but less convoluted.)
(25) Context: June has been misbehaving a lot on the playground these days, and her teachers are not very happy with her.
(26) Target premise: If June bugs a classmate today—hit, she will get a detention.
Control premise: If June bugs a classmate and does so like this_hit today, she will get a detention.
(27) Target inference: If June bugs a classmate today, it will involve hitting her.
Baseline inference: If June bugs a classmate today, it will not necessarily involve hitting her.
If participants accessed the supplemental inference from the target premise, we expected greater endorsement of the target inference than of the baseline inference; in contrast, for the control premise, we expected little to no endorsement of both the target and baseline inferences. As expected, we observed a statistical interaction between inference type and premise type.
Supplements in Animations.
The supplement condition involved two kinds of premises, one containing a target animation and one containing the equivalent of a co-speech control, in which the animation co-occurred with the deictic “this” in “and does so like this” (see example 30) (this is an instance of what ref. 6 refers to as an “indexed depiction”). Each premise was paired with two kinds of inferences, the target supplemental inference and a weaker baseline inference, as in examples 29 and 31.
(28) Target premise: The alien children like to flash lasers to annoy their friends on the playground. They can use different colors, which vary in how annoying they are. // Cheryl is annoying. // If Cheryl annoys a friend today // Animation: pink spot appears in the center of the screen and disappears // she’s going to get a detention.
(29) Target inference: If Cheryl annoys a friend today, it will involve flashing a pink laser.
Baseline inference: If Cheryl annoys a friend today, it will not necessarily involve flashing a pink laser.
(30) Control premise (“co-speech”): The alien children like to flash lasers to annoy their friends on the playground. They can use different colors, which vary in how annoying they are. // Mitchell is annoying. // If Mitchell annoys a friend today, // and does so like this_[Animation: pink spot appears in the center of the screen and disappears] // he’s going to get a detention.
(31) Target inference: If Mitchell annoys a friend today, it will involve flashing a pink laser.
Baseline inference: If Mitchell annoys a friend today, it will not necessarily involve flashing a pink laser.
We observed a marginal interaction between inference type and premise type.
Homogeneity Inferences
Homogeneity Inferences in Words (Traditional).
It has been argued in recent literature that plural definite noun phrases such as “her presents” trigger a homogeneity inference. This special inferential type is characterized by the fact that in positive sentences, the plural definite behaves like a universal (i.e., “all her presents”), but in negative sentences it behaves like an existential (i.e., “at least one of her presents”) (28–30). (A further characteristic property, not investigated here, is that this inference involves some vagueness; in contrast to the universal “Mary will find all her presents,” the definite “Mary will find her presents” may, depending on the context, allow for certain exceptions.)
(32) a. Mary will find her presents.
→ Mary will find all of her presents.
b. Mary will not find her presents.
→ Mary will find none of her presents.
The resulting meaning thus oscillates between all of her presents and none of her presents. This characteristic inferential behavior is referred to as “homogeneous,” since all presents behave in the same way relative to the predicate.
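The all/none oscillation can likewise be sketched trivalently (again our illustration, not the authors' formalism): plural predication comes out true if the predicate holds of every individual, false if it holds of none, and undefined in mixed cases, so that negation lands on the "none" reading:

    # Homogeneity toy: "Mary will find her presents" over a vector recording,
    # for each present, whether she will find it.
    find_presents <- function(found) {
      if (all(found)) TRUE
      else if (!any(found)) FALSE
      else NA                               # mixed case: truth-value gap
    }
    find_presents(c(TRUE, TRUE, TRUE))      # TRUE: she finds all of them
    !find_presents(c(FALSE, FALSE, FALSE))  # TRUE: the negation holds only in the none case
    find_presents(c(TRUE, FALSE, TRUE))     # NA: neither clearly true nor false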
Homogeneity Inferences in Gestures.
While it may be difficult to produce gestures that are unambiguously interpreted as plural definite descriptions, gestural plurals can be realized by iterating a gesture (e.g., illustrating a “cross” or a “coin”) in different positions, as in example 33; this is in fact a common means of plural formation in sign language (31). By introducing a gestural verb, such as take-2-handed, which targets the position in which the repetition was effected, one can obtain a meaning akin to take them. This makes it possible to investigate homogeneity inferences in gestures, since the gesture for take them implicitly contains a plural definite description.
In our experiment, the homogeneity condition contained two kinds of premises crossed with two kinds of inferences. Participants saw positive and negative premises paired with target (homogeneous) inferences and baseline (nonhomogeneous) inferences, as in examples 34 and 35.
(33) Context: Sam is participating in a treasure hunt in the forest, and she is looking for crosses and coins. Very quickly, Sam will find [cross-rep3]_left and [coin-rep3]_right.
(34) Positive environment
Target premise: Sam will take-2-handed-right.
Target inference: Sam will take all of the coins.
Baseline inference: Sam will take some, but not all of the coins.
(35) Negative environment
Target premise: Sam will not take-2-handed-right.
Target inference: Sam will not take any coins.
Baseline inference: Sam will take some, but not all of the coins.
Participants rated target inferences higher than baseline inferences for both positive and negative premises.
Homogeneity Inferences in Animations.
To investigate homogeneity inferences in animations, we presented groups of geometric shapes on the screen and a visual representation of a “laser” that could appear to roughly target the cluster of shapes. As before, the homogeneity condition contained two kinds of premises (positive, negative) crossed with two kinds of inferences (homogeneous, nonhomogeneous), as in examples 36 and 37.
(36) In their favorite game, aliens flash lasers to destroy different kinds of objects. // At tonight’s game, there will be…// Animation: three rows of three gray stars each appear on left of screen and disappear; three rows of three gray triangles each appear on right of screen and disappear. //
Positive target premise: Lucas will…/
Negative target premise: Lucas will not…//
Animation: blue spot appears on left (centered on where the group of stars was) and disappears.
(37) Target inference: Lucas will laser all of the stars.
Baseline inference: Lucas will laser some, but not all, of the stars.
Participants rated the target homogeneous inferences higher than the baseline nonhomogeneous inferences for both positive and negative premises.
Conclusion
We collected semantic judgments about composite utterances containing regular words mixed with either gestures or animations. Due to the iconic nature of the gestures and animations, it was expected that participants would be able to understand their informational content upon a single exposure. The remarkable finding is that the participants furthermore divided the informational content of these nonconventionalized, nonlinguistic expressions among entirely standard types of linguistic inferences. While the gestural data might simply lead us to conclude that spoken language is more multimodal than usually thought and that iconic gestures behave like normal words, the animation data yield a far more radical conclusion: Participants are able to analyze iconic content they have not previously encountered in a linguistic context, in the same way that they analyze words and gestures—productively dividing it among well-established components of the inferential typology. This finding has implications for the nature of the inferential typology and its acquisition. In particular, it suggests that presupposition generation might not be acquired by lexical learning, i.e., on an item-by-item basis; rather, it might be that once speakers know the informational content of a word, or any other representation, they can generate its presupposition “on the fly.” More generally, our results suggest that inference types that are usually thought to be language-specific and in some cases lexically encoded in fact result from productive, domain-general cognitive algorithms.
Acknowledgments
The research leading to this work was supported by Western Sydney University through the University’s Research Theme Champion support funding, by the European Research Council (ERC) under the European Union’s Seventh Framework Program (FP/2007-2013)/ERC Grant 313610, by ANR-17-EURE-0017, and by the Australian Research Council Centre of Excellence in Cognition and its Disorders (Grant CE110001021).
Footnotes
1. To whom correspondence should be addressed. Email: lyn.tieu@gmail.com.
2. P.S. and E.C. contributed equally to this work.
Author contributions: L.T., P.S., and E.C. designed research; L.T. performed research; L.T. analyzed data; and L.T., P.S., and E.C. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
Data deposition: The experimental materials, instructions to participants, data, and R scripts for the analyses are available at https://osf.io/q9zyf.
Published under the PNAS license.
References
1. Russell B.
3. Montague R.
4. Strawson P.
5. Grice P.
6. Clark HH.
8. Goldin-Meadow S, So WC, Özyürek A, Mylander C.
9. Reber A, Scarborough D; Gleitman LR, Rozin P.
11. Ebert C, Ebert C.
12. Schlenker P.
13. Schlenker P.
14. McNeill D.
15. Goldin-Meadow S, Brentari D.
16. Byrne R, et al.
17. Tieu L, Pasternak R, Schlenker P, Chemla E.
18. Tieu L, Pasternak R, Schlenker P, Chemla E.
19. R Core Team.
21. Tieu L, Schlenker P, Chemla E.
23. Spector B.
24. Chemla E.
27. Schlenker P.
28. Križ M, Spector B.
29. Križ M.
30. Spector B.
31. Pfau R, Steinbach M.