Research Article

Linguistic inferences without words

Lyn Tieu, Philippe Schlenker, and Emmanuel Chemla
PNAS May 14, 2019 116 (20) 9796-9801; first published April 24, 2019; https://doi.org/10.1073/pnas.1821018116
Lyn Tieu
aOffice of the Pro Vice-Chancellor (Research and Innovation), Western Sydney University, Penrith NSW 2751, Australia;
bSchool of Education, Western Sydney University, Penrith NSW 2751, Australia;
cMARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith NSW 2751, Australia;
dAustralian Research Council (ARC) Centre of Excellence in Cognition and its Disorders, Australian Hearing Hub, Macquarie University, Sydney NSW 2109, Australia;
  • For correspondence: lyn.tieu@gmail.com
Philippe Schlenker
eDépartement d’Etudes Cognitives, Ecole Normale Supérieure (ENS), Université Paris Sciences et Lettres (PSL), Ecole des Hautes Etudes en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique (CNRS), 75005 Paris, France;
fInstitut Jean-Nicod, CNRS, 75005 Paris, France;
gDepartment of Linguistics, New York University, New York, NY 10003;
Emmanuel Chemla
eDépartement d’Etudes Cognitives, Ecole Normale Supérieure (ENS), Université Paris Sciences et Lettres (PSL), Ecole des Hautes Etudes en Sciences Sociales (EHESS), Centre National de la Recherche Scientifique (CNRS), 75005 Paris, France;
hLaboratoire de Sciences Cognitives et Psycholinguistique, CNRS, 75005 Paris, France
Edited by Barbara H. Partee, University of Massachusetts, Amherst, MA, and approved March 18, 2019 (received for review December 10, 2018)


Significance

Linguistic meaning encompasses a rich typology of inferences, characterized by distinct patterns of interaction with logical expressions. For example, “Robin has continued to smoke” triggers the presuppositional inference that Robin smoked before, characterized by the preservation of the inference under negation in “Robin hasn’t continued to smoke.” We show experimentally that four main inference types can be robustly replicated with iconic gestures and visual animations. These nonlinguistic objects thus display the same type of logical behavior as spoken words. Because the gestures and animations were novel to the participants, the results suggest that people may productively divide new informational content among the components of the inferential typology using general algorithms that apply to linguistic and nonlinguistic objects alike.

Abstract

Contemporary semantics has uncovered a sophisticated typology of linguistic inferences, characterized by their conversational status and their behavior in complex sentences. This typology is usually thought to be specific to language and in part lexically encoded in the meanings of words. We argue that it is neither. Using a method involving “composite” utterances that include normal words alongside novel nonlinguistic iconic representations (gestures and animations), we observe successful “one-shot learning” of linguistic meanings, with four of the main inference types (implicatures, presuppositions, supplements, homogeneity) replicated with gestures and animations. The results suggest a deeper cognitive source for the inferential typology than usually thought: Domain-general cognitive algorithms productively divide both linguistic and nonlinguistic information along familiar parts of the linguistic typology.

  • gesture
  • inference
  • implicature
  • presupposition
  • iconicity

The investigation of meaning gave rise to two major insights in the 20th century, first in the philosophy of language and then in linguistics. One was that English and other natural languages can be modeled as logical languages with an explicit semantics (1–3). The other was that unlike standard logics, natural languages do not just convey information by way of entailments. Rather, they have a rich array of inference types, the investigation of which has led to models of increasing formal sophistication within the last 50 y. In an initial breakthrough, the philosopher Bertrand Russell (1) provided a logical analysis of the definite determiner “the,” whereby “The dog barks” is analyzed by way of a logical formula akin to There is exactly one dog, and it barks. The philosopher Peter Strawson (4) famously replied that this entirely missed the point of definite descriptions: “The dog barks” presupposes (rather than entails) that there is exactly one dog, and it entails that it barks. For this reason, “The dog doesn’t bark” preserves the presupposition and denies the entailment. But the distinction between presuppositions and entailments, a cornerstone of contemporary linguistics, is only the tip of the iceberg: Linguistic inferences are known to be a diverse bunch, which also includes implicatures (5), supplements, and homogeneity inferences.

These inferences are usually thought to be specific to language. Moreover, several are taken to be lexical in nature, i.e., encoded in the meanings of words, which then need to be learned. We argue that both assumptions are incorrect, by replicating the inferential typology with unfamiliar gestures and animations in place of words. We conclude that a considerable part of what is normally classified as linguistic and lexical meaning is neither linguistic nor lexical, but has a much deeper source: productive, domain-general cognitive algorithms.

One-Shot Learning and “Composite” Utterances

We investigate the linguistic behavior of nonlinguistic expressions such as gestures and visual animations. Their informational content is iconic and can for this reason be understood upon a single exposure. This allows us to investigate how this content is productively divided within the inferential typology. To do so, we follow refs. 6 and 7 in embedding these iconic depictions within sentences to assess telltale properties of the various inference types (8–10). Clark (ref. 6, p. 325) highlights the importance of such “composite” utterances made of words and iconic depictions, which he analyzes as “physical scenes that people stage for others to use in imagining the scenes they are depicting.” But how are such depictions semantically and grammatically integrated within sentences? A typology has been developed that depends on whether gestural depictions cooccur with, follow, or replace words (7, 11–13). Focusing on the latter two cases, it has been argued that gestural content is divided among familiar slots of the inferential typology (7).

More concretely, we investigate sentences such as “John will turn-wheel,” where turn-wheel is a silent gesture representing the turning of a steering wheel. The gesture fully replaces a part of speech and is for this reason called a “pro-speech gesture” (7) (referred to by ref. 6 as an “embedded depiction”). We show that the resulting meaning is complex: “John will turn-wheel” presupposes that there is exactly one salient wheel and entails that John will turn it. For this reason, “John won’t turn-wheel”—just like the sentence containing the presuppositional word “the” (“John won’t turn the wheel”)—preserves the presupposition and denies the entailment.

One might wonder whether turn-wheel triggers this presupposition because it is mentally translated into the words “turn the wheel.” This is unlikely because the gesture conveys fine-grained iconic information that is absent from the corresponding words. For instance, if the gesture represents a small or a large wheel, one will get different and potentially gradient information about the size of the denoted object. Our experimental results reveal that some of these iconic implications are indeed understood by our participants.

The sophisticated linguistic behavior of pro-speech gestures is in itself interesting and might suggest that human language is even more multimodal than standardly thought (14, 15); this dovetails with recent studies of primate communication, as apes are now known to exchange information not just with calls, but also with a rich inventory of gestures, some of which can be silent (16). The conclusion, then, might be that iconic gestures should be treated as full-fledged (if nonstandard) words that speakers might even have quite a bit of experience with. In fact, in the spirit of ref. 6, it may be that entirely nonlinguistic objects with the same informational content can be treated in the very same way. We show that this is indeed the case: All our conclusions are replicated with novel pro-speech visual animations, embedded within written sentences.

The Experimental Method: Inferential Judgments

A total of 103 Amazon Mechanical Turk workers participated in the gesture experiment and another group of 99 workers participated in the animation experiment. Informed consent was obtained from all participants. [Ethical approval for this study was obtained from the CERES (“Comité d’évaluation éthique des projets de recherche en santé non soumis à CPP”) under approval number 2013/46.] Participants were asked to watch videos and to judge how strongly the videos led them to draw the inferences that appeared in text below the videos, by using a continuous slider scale that was mapped linearly to a dependent measure ranging from 0 to 100% (17, 18).

All participants saw all items in their respective modality (gesture/animation), allowing us to assess the presence of four main inference types: implicatures, presuppositions, supplements, and so-called “homogeneity inferences.” Every participant saw all trial types, including targets and controls; there were 72 trials in total in the gesture experiment and 48 trials in the animation experiment.

The results are summarized in Fig. 1; all of the inferential phenomena were replicated with both gestures and animations. In the following sections, we review each phenomenon and present the associated results in more detail. We report the results of comparisons of linear regression models (using R version 3.4.3, ref. 19) with and without the factor of interest, following recommendations in ref. 20. The experimental materials, instructions to participants, data, and R scripts for the analyses are available at https://osf.io/q9zyf (21).
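The authors' analyses were run in R; the same model-comparison logic (a model with the factor of interest versus a nested model without it, compared via a χ² likelihood-ratio statistic) can be sketched in Python. All data, factor codings, and effect sizes below are invented for illustration and are not the study's data.

```python
# Illustrative sketch of a nested-model likelihood-ratio comparison,
# analogous to comparing linear regressions with and without the factor
# of interest. Data are simulated, not the experiment's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
premise = rng.integers(0, 2, n)      # 0 = control, 1 = target (hypothetical coding)
inference = rng.integers(0, 2, n)    # 0 = baseline, 1 = target
# Simulated endorsement (0-100): a large premise-by-inference interaction
endorsement = (50 + 10 * premise + 5 * inference
               + 25 * premise * inference + rng.normal(0, 10, n))

def rss(X, y):
    """Least-squares fit; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_full = np.column_stack([ones, premise, inference, premise * inference])
X_reduced = np.column_stack([ones, premise, inference])  # drop the interaction

# LR statistic for nested Gaussian linear models: n * log(RSS_r / RSS_f),
# asymptotically chi-squared with df = number of dropped parameters (here 1).
lr = n * np.log(rss(X_reduced, endorsement) / rss(X_full, endorsement))
p = stats.chi2.sf(lr, df=1)
print(f"chi2(1) = {lr:.1f}, p = {p:.3g}")
```

With a strong simulated interaction, the comparison detects it easily; in the experiments, the same kind of test is what underlies the reported χ²(1) values.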

Fig. 1.

Mean endorsement across all phenomena and conditions. Error bars represent standard error of the mean across participants; dots represent individual participants.

Scalar Implicatures

Implicatures in Words (Traditional).

Scalar implicatures typically arise when an utterance enters into competition with a more informative alternative. For example, the target sentences in 1b and 2b compete with the more informative sentences in 1c and 2c, respectively. This is presumably because the contexts in 1a and 2a make these alternatives salient [although alternatives can also be generated without a context (22)]. As a result, the alternatives are understood to be false (5), leading to the inferences in 1d and 2d.

  • (1) a. Context: Yesterday at the party, Mary talked to a lot of people.

  • b. Target sentence: Bill talked to some people.

  • c. Alternative: Bill talked to a lot of people.

  • d. Inference: Bill did not talk to a lot of people.

  • (2) a. Context: Yesterday at the party, Mary did not talk to anyone.

  • b. Target sentence: Bill did not talk to a lot of people.

  • c. Alternative: Bill did not talk to anyone.

  • d. Inference: Bill talked to a few people.

The crucial feature of these examples is that the alternatives are logically more informative than the target sentences (23), which can be the case for both positive (example 1) and negative (example 2) sentences. The negative case in example 2 makes a further theoretical point. In the positive example, the resulting meaning could be obtained by postulating that “talk” is somehow enriched along the lines of talk but not to a lot of people. In the negative case, no enrichment of “talk to a lot of people” can explain the resulting meaning, which therefore has to come from a mechanism of implicatures akin to the one we are after.
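The enrichment mechanism described above can be made concrete with a toy model (our illustration, not the authors' formalism): meanings are sets of worlds, here numbers of people Bill talked to, and the thresholds for "some" and "a lot" are arbitrary assumptions.

```python
# Toy model of scalar-implicature enrichment: the enriched meaning conjoins
# the target sentence with the negation of its stronger alternative.
worlds = range(0, 11)                      # 0..10 people (hypothetical scale)
some = {w for w in worlds if w >= 1}
a_lot = {w for w in worlds if w >= 5}      # "a lot" = at least 5, by assumption

def complement(m):
    return set(worlds) - m

# Positive case (example 1): "some" competes with the stronger "a lot".
enriched_pos = some & complement(a_lot)    # talked to some, but not a lot

# Negative case (example 2): under negation, logical strength reverses,
# so "not a lot" competes with the stronger "not anyone".
not_a_lot = complement(a_lot)
not_any = complement(some)
enriched_neg = not_a_lot & complement(not_any)  # talked to a few, not a lot

print(sorted(enriched_pos))
print(sorted(enriched_neg))
```

Both cases converge on the same "a few" worlds, mirroring inferences 1d and 2d; crucially, the negative case cannot be obtained by enriching any single word, only by negating the alternative as a whole.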

Implicatures in Gestures.

To investigate the presence of scalar implicatures with gestures, we tested participants’ interpretation of gestures of different informational strength, which could therefore compete with one another. A positive example and a negative example are described in examples 3–5 and 6–8. The initial context (examples 3/6) helped to raise the salience of the relevant alternatives. Participants saw two kinds of premises crossed with two kinds of inferences. In the positive cases, the target premises contained weak gestures (e.g., turn-wheel), while control premises contained maximally informative (“strong”) gestures that could generate no further enrichment (e.g., turn-wheel-completely) (example 4). Under negation, logical strength is reversed, so the target premises contained strong gestures and the control premises contained weak gestures (example 7). Participants were asked to judge how strongly the implicature (i.e., the target inference) followed and how strongly the negation of the implicature (i.e., the baseline inference) followed (example 8).

  • (3) Context: John is training to be a stunt driver. Yesterday, at the first mile marker, he was taught to turn-wheel-completely.

  • (4) Target premise: Today, at the next mile marker, he will turn-wheel.

    Control premise: Today, at the next mile marker, he will turn-wheel-completely.

  • (5) Target inference: John will turn the wheel, but not completely.

    Baseline inference: John will turn the wheel completely.

  • (6) Context: John is training to be a stunt boat driver. Out by the first buoy, he decided to turn-wheel-completely, but at the second one he did not turn-wheel.

  • (7) Target premise: At the next buoy, he will not turn-wheel-completely.

    Control premise: At the next buoy, he will not turn-wheel.

  • (8) Target inference: John will turn the wheel, but not completely.

    Baseline inference: John will not turn the wheel at all.

We observed strong endorsement of the target inferences in response to the target premises. Quantitatively, there was an interaction between the two factors (premise/inference), indicating that the inferences were not due to a default endorsement bias for one kind of inference over another: For the target premises, the target inferences were endorsed more strongly than their respective negations, while the reverse was true for the control premises (positive cases, χ²(1) = 3,089, P < 0.001; negative cases, χ²(1) = 55, P < 0.001).

These findings are consistent with participants computing scalar implicatures. As mentioned, in the positive cases this could alternatively be due to a stronger than expected interpretation of the “weak” gesture, namely an exact(ly this much) interpretation (e.g., “John will turn the wheel exactly this much”). But the negative examples circumvent this worry: No exact(ly this much) interpretation for turn-wheel-completely could explain participants’ behavior. For the target premise in example 7 to mean that John will turn the wheel but not completely, the positive “John will turn-wheel-completely” would have to mean that John will not turn the wheel at all or he will turn it completely, which is implausible.

Implicatures in Animations.

We constructed implicature conditions that were analogous to those in the gesture experiment, except that the videos involved a combination of written text and animations (rather than speech and gestures). Parallel to examples 3–5 and 6–8, a positive example and a negative example are given in examples 9–11 and 12–14. The “//” marks indicate changes of screen. flash-one and flash-many stand for pictures containing one flash and many flashes, as shown in Fig. 2, meant to represent different amounts of punching. In the target premise, the image appeared to “pop” onto the screen, mimicking the effect of stress that is often involved in securing an implicature-based interpretation.

  • (9) Context: John the alien has been training on the punching bag at the gym. // At last week’s workout, John had a lot of energy. He was able to… // flash-many.

  • (10) Target premise: This week, John will… // flash-one_pop.

    Control premise: This week, John will… // flash-many.

  • (11) Target inference: This week, John will punch, but not a lot.

    Baseline inference: This week, John will punch a lot.

  • (12) Context: Jenny the alien has been training on the punching bag at the gym. // In her first week of training, Jenny had a lot of energy. She was able to… // flash-many // but in the second week, Jenny did not… // flash-one.

  • (13) Target premise: This week, Jenny will not… // flash-many_pop.

    Control premise: This week, Jenny will not… // flash-one.

  • (14) Target inference: This week, Jenny will punch, but not a lot.

    Baseline inference: This week, Jenny will not punch at all.

Fig. 2.

Animation stimuli representing different amounts of punching.

Consistent with the presence of scalar implicatures, including under negation, we observed greater endorsement of target inferences compared with baseline inferences and a significant interaction between inference and premise type (positive, χ²(1) = 1,361, P < 0.001; negative, χ²(1) = 60, P < 0.001).

Conclusion About Implicatures: Competition Among Nonwords.

Implicatures are expected to arise whenever a representation competes with a more informative one. Given the generality of this mechanism, gestures are expected to trigger implicatures, and indeed they do. More remarkably, however, we observe that implicatures are also triggered by animated representations that cannot be physically produced by human speech or gesture.

Presuppositions

Presuppositions in Words (Traditional).

As mentioned at the outset in connection with the word “the,” presuppositions are characterized by two properties: They are normally taken for granted in the conversation, and they are inherited by sentences across a variety of logical operators including negation. In example 15a, three further constructions of the form x stopped smoking, x continued smoking, and x regretted smoking trigger the presupposition that x smoked before; this presupposition is preserved under negation, as in example 15b, and in questions, as in example 15c. Under the negative quantifier “none” (example 15d), a universal positive inference is typically observed (24).

  • (15) a. Mary stopped / continued / regretted smoking.

    → Mary smoked before.

  • b. Mary did not stop / continue / regret smoking.

    → Mary smoked before.

  • c. Did Mary stop / continue / regret smoking?

    → Mary smoked before.

  • d. None of my students stopped / continued / regretted smoking.

    → Each of my students smoked before.

Unlike scalar implicatures, which are uncontroversially productive, presuppositions are often treated as an arbitrary property of certain words, although there is a widespread (but hard to formalize) intuition that presuppositions owe their special behavior to the fact that they constitute a “precondition” for the rest of the meaning of the sentence (e.g., ref. 25). Strikingly, novel gestures and animations also appear to generate presuppositions, as we show next.
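The "precondition" intuition can be cashed out in a trivalent semantics, sketched below (our illustration, not the paper's formalism): a sentence denotes True or False only where its presupposition holds and is undefined elsewhere; negation flips truth values but passes undefinedness through, and a quantifier like "none" requires definedness for every individual, which yields the universal projection seen in example 15d.

```python
# Trivalent sketch of presupposition projection: None marks undefinedness.
def stopped_smoking(smoked_before, smokes_now):
    if not smoked_before:
        return None               # presupposition failure: no prior smoking
    return not smokes_now         # assertion: does not smoke now

def neg(v):
    """Negation preserves undefinedness, so presuppositions project."""
    return None if v is None else not v

def none_of(individuals, pred):
    """'None' is defined only if pred is defined for every individual."""
    vals = [pred(*i) for i in individuals]
    if any(v is None for v in vals):
        return None               # one failure makes the whole claim undefined
    return not any(vals)

# "Mary did not stop smoking" still presupposes that she smoked before:
print(neg(stopped_smoking(False, True)))   # undefined, with or without negation
```

On this sketch, "None of my students stopped smoking" is defined only if each student smoked before, matching the universal inference in 15d.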

Presuppositions in Gestures.

To trigger presuppositions in the gestural domain, we used gestures that convey two kinds of information, one of which can intuitively be taken to be a precondition of the other: a sentence of the form x will remove-glasses turns out to presuppose that x will be wearing glasses and to assert that x will remove them; similarly, x will turn-wheel presupposes that x is in the driver’s seat and asserts that x will turn the wheel.

The presupposition condition involved three kinds of gestures. Each appeared in a question (example 17) and under the negative quantifier “none” (example 18); such environments correspond to traditional tests for presupposition.

  • (16) Context: During an experimental session, Valerie watches her graduate students use microscopes and says to the laboratory assistant standing next to her:

  • (17) Question environment

    Target premise: For the next phase of the experiment, will our visiting student remove-glasses?

    Target inference: Valerie’s visiting student currently has glasses on.

    Baseline inference: Valerie’s visiting student does not currently have glasses on.

  • (18) “None” environment

    Target premise: For the next phase of the experiment, none of my students will remove-glasses.

    Target inference: Each of Valerie’s students currently has glasses on.

    Baseline inference: Not all of Valerie’s students currently have glasses on.

Consistent with participants deriving the target presuppositions, we observed an effect of inference type, with greater endorsement of the target presuppositional inferences (p and everybody p, respectively) than of the baseline inferences (not p and not everybody p, respectively) (questions, χ²(1) = 230, P < 0.001; “none,” χ²(1) = 14, P < 0.001).

Presuppositions in Animations.

As in the gesture experiment, we triggered presuppositions by using animations that conveyed two types of information, one of which could intuitively be taken as a precondition of the other. Each kind of animation appeared in a question (example 19) and under the term “none” (example 21).

  • (19) A virus has struck the alien population. // The virus needs to be diagnosed as soon as possible. If treatment is not administered, the aliens’ antennae become spotted for a whole month. // Susan is observing her secretary and says // “Will the secretary’s antenna…” // Animation: green bar is unspotted at first and then slowly becomes entirely spotted.

  • (20) Target inference: The secretary’s antenna is not currently spotted.

    Baseline inference: The secretary’s antenna is currently spotted.

  • (21) A virus has struck the alien population. // The virus needs to be diagnosed as soon as possible. If treatment is not administered, the aliens’ antennae become spotted for a whole month. // Meryl is observing her secretaries and says // “None of the secretaries’ antennae will…” // Animation: green bar is unspotted at first and then slowly becomes entirely spotted.

  • (22) Target inference: None of the secretaries’ antennae are currently spotted.

    Baseline inference: Some of the secretaries’ antennae are currently spotted.

As expected if participants derived the target presupposition, we observed an effect of inference type, with greater endorsement of the target presuppositional inferences than of the baseline inferences (questions, χ²(1) = 49, P < 0.001; “none,” χ²(1) = 79, P < 0.001).

Conclusion About Presuppositions: Triggering by Nonwords.

Several researchers have argued that general algorithms can predict when an inference triggered by a given word is treated as a presupposition (e.g., ref. 26), in part because across languages constructions that convey the same global information seem to trigger the same presuppositions. But it is difficult to demonstrate the productivity of such algorithms, as one cannot exclude the possibility that the data that make it possible to learn the informational content of a word also make it possible to learn which of its inferences are presuppositions (through exposure to their behavior in various linguistic environments). With plausibly unfamiliar gestures, and even more so with entirely novel animations, things are different: Our results clearly display such algorithms in action. Future research might determine the precise form of this presupposition-triggering algorithm, which should be sufficiently general to apply to words, gestures, and animations alike.

Supplements

Supplements in Words (Traditional).

Nonrestrictive relative clauses are believed to trigger a special type of inference, called a “supplement,” characterized by two main properties. First, unlike presuppositions, supplements are informative; i.e., they are not typically taken for granted in the conversation. Second, even when embedded under logical words, they trigger the same inferences as independent, unembedded sentences (as opposed to embedded conjunctions). Thus, the supplements in the a examples below behave like the b examples and not like the c examples.

  • (23) a. It is unlikely that Robin lifts weights, which is harmful.

  • b. It is unlikely that Robin lifts weights. This is harmful.

  • c. It is unlikely that Robin lifts weights and that this is harmful.

  • (24) a. If Ann lifts weights, which will adversely affect her health, we should talk to her.

  • b. If Ann lifts weights, we should talk to her. This will adversely affect her health.

  • c. If Ann lifts weights and this adversely affects her health, we should talk to her.
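One common way to model the a/b pattern is with two-dimensional meanings; the sketch below is our illustration, not the paper's analysis. A meaning pairs an at-issue part with a tuple of not-at-issue supplements, and embedding operators such as "it is unlikely that" transform only the at-issue part, so supplements surface as independent root-level claims (the b pattern) rather than embedded conjuncts (the c pattern).

```python
# Two-dimensional sketch of supplement projection: meanings are
# (at-issue, supplements) pairs, with meanings represented as strings.
def plain(p):
    return (p, ())

def with_supplement(meaning, comment):
    at_issue, supps = meaning
    return (at_issue, supps + (comment,))

def unlikely(meaning):
    at_issue, supps = meaning
    # The operator applies only to the at-issue dimension;
    # the supplements pass through untouched.
    return (f"unlikely({at_issue})", supps)

m = unlikely(with_supplement(plain("Robin lifts weights"), "this is harmful"))
print(m)  # ('unlikely(Robin lifts weights)', ('this is harmful',))
```

The supplement "this is harmful" escapes the scope of "unlikely", exactly as in 23b, whereas a genuine conjunct (23c) would have been embedded along with the rest.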

Supplements in Gestures.

Gestures have been argued to trigger supplemental meanings in the same way that nonrestrictive relative clauses do (12, 27). Here we provide a partial inferential argument based on gestural versions of sentences like example 24c.

The supplement condition involved two kinds of premises, one containing the target gesture and the other a control in which the gesture co-occurred with the deictic “this” in “and does so like this” (example 26). Each premise was paired with two kinds of inferences, the target supplemental inference and a weaker baseline inference (example 27). (The baseline inference did not correspond exactly to the negation of the target inference, which could be paraphrased as, It’s not the case that if June bugs a classmate today, it will involve hitting her. Instead, we opted for the statement in example 27, which is semantically similar but less convoluted.)

  • (25) Context: June has been misbehaving a lot on the playground these days, and her teachers are not very happy with her.

  • (26) Target premise: If June bugs a classmate today—hit, she will get a detention.

    Control premise: If June bugs a classmate and does so like this_hit today, she will get a detention.

  • (27) Target inference: If June bugs a classmate today, it will involve hitting her.

    Baseline inference: If June bugs a classmate today, it will not necessarily involve hitting her.

If participants accessed the supplemental inference from the target premise, we expected greater endorsement of the target inference than of the baseline inference; in contrast, for the control premise, we expected no to low endorsement of both the target and baseline inferences. As expected, we observed a statistical interaction between inference type and premise type (χ²(1) = 27, P < 0.001), with a greater difference between target and baseline inferences for the target premise than for the control premise.

Supplements in Animations.

The supplement condition involved two kinds of premises, one containing a target animation and one containing the equivalent of a co-speech control, in which the animation co-occurred with the deictic “this” in “and does so like this” (see example 30) (this is an instance of what ref. 6 refers to as an “indexed depiction”). Each premise was paired with two kinds of inferences, the target supplemental inference and a weaker baseline inference, as in examples 29 and 31.

  • (28) Target premise: The alien children like to flash lasers to annoy their friends on the playground. They can use different colors, which vary in how annoying they are. // Cheryl is annoying. // If Cheryl annoys a friend today // Animation: pink spot appears in the center of the screen and disappears // she’s going to get a detention.

  • (29) Target inference: If Cheryl annoys a friend today, it will involve flashing a pink laser.

    Baseline inference: If Cheryl annoys a friend today, it will not necessarily involve flashing a pink laser.

  • (30) Control premise (“co-speech”): The alien children like to flash lasers to annoy their friends on the playground. They can use different colors, which vary in how annoying they are. // Mitchell is annoying. // If Mitchell annoys a friend today, // and does so like this_[Animation: pink spot appears in the center of the screen and disappears] // he’s going to get a detention.

  • (31) Target inference: If Mitchell annoys a friend today, it will involve flashing a pink laser.

    Baseline inference: If Mitchell annoys a friend today, it will not necessarily involve flashing a pink laser.

We observed a marginal interaction between inference type and premise type (χ²(1) = 3.5, P = 0.06), with a greater difference between target and baseline inferences for the target premise than for the control premise. This provides some preliminary evidence that animations might trigger non–at-issue inferences, consistent with the behavior of nonrestrictive relative clauses. (As an anonymous reviewer points out, the supplement target was the one case in the experiment where participants could easily have ignored the gesture and still be left with a complete sentence; it is rather striking then that participants clearly did not ignore the gesture.)

Homogeneity Inferences

Homogeneity Inferences in Words (Traditional).

It has been argued in the recent literature that plural definite noun phrases such as “her presents” trigger a homogeneity inference. This inference type is characterized by the fact that in positive sentences, the plural definite behaves like a universal (i.e., “all her presents”), whereas in negative sentences it behaves like an existential (i.e., “at least one of her presents”) (28–30). (A further characteristic property, not investigated here, is that this inference involves some vagueness; in contrast to the universal “Mary will find all her presents,” the definite “Mary will find her presents” may, depending on the context, allow for certain exceptions.)

  • (32) a. Mary will find her presents.

    → Mary will find all of her presents.

    b. Mary will not find her presents.

    → Mary will find none of her presents.

The resulting meaning thus oscillates between all of her presents and none of her presents. This characteristic inferential behavior is referred to as “homogeneous,” since all presents behave in the same way relative to the predicate.
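The pattern in example 32 can be rendered as a trivalent semantics in which partial satisfaction yields a truth-value gap. The Python toy model below is our own simplified sketch of the homogeneity literature cited above (not a formalization from the paper): negation preserves the gap, which produces the characteristic oscillation between “all” and “none.”

```python
def find_presents(found, presents):
    """Trivalent value of 'Mary will find her presents':
    True if all presents are found, False if none are,
    None (truth-value gap) if only some are."""
    hits = [p in found for p in presents]
    if all(hits):
        return True
    if not any(hits):
        return False
    return None

def neg(value):
    """Negation flips True/False but preserves the gap."""
    return None if value is None else not value

presents = {"bike", "book", "ball"}
assert find_presents(presents, presents) is True        # all found: positive true
assert neg(find_presents(set(), presents)) is True      # none found: negative true
assert find_presents({"bike"}, presents) is None        # partial: gap
assert neg(find_presents({"bike"}, presents)) is None   # gap survives negation
```

On this sketch the positive sentence is true only in the “all” case and the negative sentence only in the “none” case, matching the inferences in examples 32a and 32b.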

Homogeneity Inferences in Gestures.

While it may be difficult to produce gestures that are unambiguously interpreted as plural definite descriptions, gestural plurals can be realized by iterating a gesture (e.g., illustrating a “cross” or a “coin”) in different positions, as in example 33; this is in fact a common means of plural formation in sign language (31). By introducing a gestural verb, such as take-2-handed, which targets the position in which the repetition was effected, one can obtain a meaning akin to take them. This makes it possible to investigate homogeneity inferences in gestures, since the gesture for take them implicitly contains a plural definite description.

In our experiment, the homogeneity condition contained two kinds of premises crossed with two kinds of inferences. Participants saw positive and negative premises paired with target (homogeneous) inferences and baseline (nonhomogeneous) inferences, as in examples 34 and 35.
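The 2 × 2 design just described (premise polarity crossed with inference type) can be sketched as follows; the condition labels are our own shorthand for the cells instantiated in examples 34 and 35:

```python
from itertools import product

premises = ["positive", "negative"]
inferences = ["target (homogeneous)", "baseline (nonhomogeneous)"]

# Cross the two factors to obtain the four homogeneity conditions.
conditions = list(product(premises, inferences))
for condition in conditions:
    print(condition)
```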

  • (33) Context: Sam is participating in a treasure hunt in the forest, and she is looking for crosses and coins. Very quickly, Sam will find [cross-rep3]_left and [coin-rep3]_right.

  • (34) Positive environment

    Target premise: Sam will take-2-handed-right.

    Target inference: Sam will take all of the coins.

    Baseline inference: Sam will take some, but not all of the coins.

  • (35) Negative environment

    Target premise: Sam will not take-2-handed-right.

    Target inference: Sam will not take any coins.

    Baseline inference: Sam will take some, but not all of the coins.

Participants rated target inferences higher than baseline inferences for both positive (χ²(1) = 84, P < 0.001) and negative premises (χ²(1) = 132, P < 0.001), suggesting that homogeneity inferences can be triggered by purely gestural means.

Homogeneity Inferences in Animations.

To investigate homogeneity inferences in animations, we presented groups of geometric shapes on the screen and a visual representation of a “laser” that could appear to roughly target the cluster of shapes. As before, the homogeneity condition contained two kinds of premises (positive, negative) crossed with two kinds of inferences (homogeneous, nonhomogeneous), as in examples 36 and 37.

  • (36) In their favorite game, aliens flash lasers to destroy different kinds of objects. // At tonight’s game, there will be…// Animation: three rows of three gray stars each appear on left of screen and disappear; three rows of three gray triangles each appear on right of screen and disappear. //

    Positive target premise: Lucas will…/

    Negative target premise: Lucas will not…//

    Animation: blue spot appears on left (centered on where the group of stars was) and disappears.

  • (37) Target inference: Lucas will laser all of the stars.

  • Baseline inference: Lucas will laser some, but not all, of the stars.

Participants rated the target homogeneous inferences higher than the baseline nonhomogeneous inferences for both positive (χ²(1) = 42, P < 0.001) and negative premises (χ²(1) = 87, P < 0.001), supporting the existence of homogeneity inferences even in this nonspeech, nongesture domain of animations.

Conclusion

We collected semantic judgments about composite utterances containing regular words mixed with either gestures or animations. Due to the iconic nature of the gestures and animations, it was expected that participants would be able to understand their informational content upon a single exposure. The remarkable finding is that the participants furthermore divided the informational content of these nonconventionalized, nonlinguistic expressions among entirely standard types of linguistic inferences. While the gestural data might simply lead us to conclude that spoken language is more multimodal than usually thought and that iconic gestures behave like normal words, the animation data yield a far more radical conclusion: Participants are able to analyze iconic content they have not previously encountered in a linguistic context, in the same way that they analyze words and gestures—productively dividing it among well-established components of the inferential typology. This finding has implications for the nature of the inferential typology and its acquisition. In particular, it suggests that presupposition generation might not be acquired by lexical learning, i.e., on an item-by-item basis; rather, it might be that once speakers know the informational content of a word, or any other representation, they can generate its presupposition “on the fly.” More generally, our results suggest that inference types that are usually thought to be language-specific and in some cases lexically encoded in fact result from productive, domain-general cognitive algorithms.

Acknowledgments

The research leading to this work was supported by Western Sydney University through the University’s Research Theme Champion support funding, by the European Research Council (ERC) under the European Union’s Seventh Framework Program (FP/2007-2013)/ERC Grant 313610, by ANR-17-EURE-0017, and by the Australian Research Council Centre of Excellence in Cognition and its Disorders (Grant CE110001021).

Footnotes

  • 1To whom correspondence should be addressed. Email: lyn.tieu@gmail.com.
  • 2P.S. and E.C. contributed equally to this work.

  • Author contributions: L.T., P.S., and E.C. designed research; L.T. performed research; L.T. analyzed data; and L.T., P.S., and E.C. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • Data deposition: The experimental materials, instructions to participants, data, and R scripts for the analyses are available at https://osf.io/q9zyf.

Published under the PNAS license.


References

  1. Russell B (1905) On denoting. Mind 14:479–493.
  2. Tarski A (1943) The semantic conception of truth and the foundation of semantics. Philos Phenomenol Res 4:341–376.
  3. Montague R (1970) English as a formal language. Linguaggi nella Società e nella Tecnica, ed Visentini B (Edizioni di Communita, Milan), pp 189–224; reprinted in Thomason RH, ed (1974) Formal Philosophy. Selected Papers of Richard Montague (Yale Univ Press, New Haven), pp 188–221.
  4. Strawson P (1950) On referring. Mind 59:320–344.
  5. Grice P (1975) Logic and conversation. The Logic of Grammar, eds Davidson D, Harman GH (Dickenson Publishing Company, Encino, CA), pp 64–75; reprinted in Grice HP (1989) Studies in the Way of Words (Harvard Univ Press, Cambridge, MA), pp 22–40.
  6. Clark HH (2016) Depicting as a method of communication. Psychol Rev 123:324–347.
  7. Schlenker P (November 28, 2018) Gestural semantics: Replicating the typology of linguistic inferences with pro- and post-speech gestures. Nat Lang Ling Theory, doi:10.1007/s11049-018-9414-3.
  8. Goldin-Meadow S, So WC, Özyürek A, Mylander C (2008) The natural order of events: How speakers of different languages represent events nonverbally. Proc Natl Acad Sci USA 105:9163–9168.
  9. Gleitman LR, Rozin P (1977) The structure and acquisition of reading I: Relations between orthographies and the structure of language. Toward a Psychology of Reading, eds Reber A, Scarborough D (Erlbaum, Hillsdale, NJ).
  10. Potter MC, Staub A, O’Connor DH (2004) Pictorial and conceptual representation of glimpsed pictures. J Exp Psychol Hum Percept Perform 30:478–489.
  11. Ebert C, Ebert C (2014) Gestures, demonstratives, and the attributive/referential distinction. Available at https://semanticsarchive.net/Archive/GJjYzkwN/EbertEbert-SPE-2014-slides.pdf. Accessed August 28, 2018.
  12. Schlenker P (2018) Iconic pragmatics. Nat Lang Ling Theory 36:877–936.
  13. Schlenker P, Gestural grammar. Nat Lang Ling Theory, in press.
  14. McNeill D (2005) Gesture and Thought (Univ Chicago Press, Chicago).
  15. Goldin-Meadow S, Brentari D (2017) Gesture, sign, and language: The coming of age of sign language and gesture studies. Behav Brain Sci 40:e46.
  16. Byrne R, et al. (2017) Great ape gestures: Intentional communication with a rich set of innate signals. Anim Cogn 20:755–769.
  17. Tieu L, Pasternak R, Schlenker P, Chemla E (2017) Co-speech gesture projection: Evidence from truth-value judgment and picture selection tasks. Glossa J Gen Linguist 2:102.
  18. Tieu L, Pasternak R, Schlenker P, Chemla E (2018) Inferences of co-speech gestures: Evidence from inferential judgments. Glossa J Gen Linguist 3:109.
  19. R Core Team (2016) R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna), Version 3.4.3.
  20. Barr DJ, Levy R, Scheepers C, Tily HJ (2013) Random effects structure for confirmatory hypothesis testing: Keep it maximal. J Mem Lang 68:255–278.
  21. Tieu L, Schlenker P, Chemla E (2018) Data from “Linguistic inferences without words.” Open Science Framework. Available at https://osf.io/q9zyf. Deposited October 10, 2018.
  22. Katzir R (2007) Structurally-defined alternatives. Linguist Philos 30:669–690.
  23. Spector B (2007) Aspects of the pragmatics of plural morphology: On higher-order implicatures. Presupposition and Implicature in Compositional Semantics, Palgrave Studies in Pragmatics, Language and Cognition, eds Sauerland U, Stateva P (Palgrave Macmillan, London), pp 243–281.
  24. Chemla E (2009) Presuppositions of quantified sentences: Experimental data. Nat Lang Semant 17:299–340.
  25. Abusch D (2010) Presupposition triggering from alternatives. J Semant 27:37–80.
  26. Abrusán M (2011) Predicting the presuppositions of soft triggers. Linguist Philos 34:491–535.
  27. Schlenker P (2018) Gesture projection and cosuppositions. Linguist Philos 41:295–365.
  28. Križ M, Spector B (2017) Interpreting plural predication: Homogeneity and non-maximality. lingbuzz:003458. Preprint, posted May 18, 2017.
  29. Križ M (2015) Aspects of homogeneity in the semantics of natural language. PhD thesis (University of Vienna, Vienna).
  30. Spector B (2013) Homogeneity and plurals: From the strongest meaning hypothesis to supervaluations. Available at https://ehutb.ehu.eus/uploads/material/Video/3289/Sinn18_01.pdf. Accessed August 30, 2018.
  31. Pfau R, Steinbach M (2006) Pluralization in sign and in speech: A cross-modal typological study. Linguist Typol 10:135–182.
Linguistic inferences without words
Lyn Tieu, Philippe Schlenker, Emmanuel Chemla
Proceedings of the National Academy of Sciences May 2019, 116 (20) 9796-9801; DOI: 10.1073/pnas.1821018116
