Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors
Research Article
May 21, 2019
How do you learn what things look like if you cannot see? Kim et al. (1) tackle this intriguing question by assessing knowledge of animal appearance in blind and sighted individuals. The authors evaluated 2 plausible hypotheses: the learn-from-description hypothesis, under which blind individuals learn directly from sighted people’s descriptions (e.g., “elephants are gray”), and the learn-from-kind hypothesis, under which blind people infer visual animal properties from knowledge about the animal’s taxonomic class (e.g., a crow is a bird and birds have feathers).
While group differences were observed for all visual properties, blindness had the largest effect on color knowledge: only sighted participants consistently grouped animals with the same canonical color together. This striking difference between blind and sighted participants arose despite the finding that color was the easiest to verbalize of all properties tested. From this, the authors concluded that blind people do not use verbal descriptions (e.g., “elephants are gray”) as a primary source of information. This conclusion rests on the assumption that information about highly verbalizable properties (such as color) is actually conveyed in speech: if blind individuals learn from verbal descriptions produced by sighted people, they should have no problem acquiring the canonical colors of animals.
We performed an analysis of cooccurrence statistics in a large corpus of spoken language (2), which revealed that this assumption is not met (Fig. 1): for 23 of the 30 animals used in Kim et al. (1), the color mentioned most often was noncanonical (“white elephant”) rather than canonical (“gray elephant”), and in only 25% of all instances where animals were described as having a color was that color canonical. Thus, contrary to the authors’ claim, their results for color are compatible with the learn-from-description hypothesis: inconsistent descriptions are associated with inconsistent responses in blind individuals. The authors overlooked the fact that language use is geared toward efficiency (3), such that it avoids redundant information (people rarely talk about a “round ball”). Since indices of verbalizability as used by Kim et al. (1) do not appear to be a good proxy for the language input that people actually receive, it is imperative that future research systematically evaluate to what extent language input predicts what blind individuals know.
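The kind of cooccurrence count underlying this analysis can be sketched as follows (a minimal illustration only, not the authors’ actual pipeline; the toy corpus, the color inventory, and the canonical-color table are assumptions made for the example):

```python
from collections import Counter
import re

# Assumed canonical colors for illustration (Kim et al. used 30 animals).
CANONICAL = {"elephant": "gray", "crow": "black"}
# Assumed color-word inventory.
COLORS = {"gray", "grey", "black", "white", "pink", "brown"}

def color_animal_mentions(text, animal):
    """Count 'COLOR animal' bigrams for one animal; return the number of
    canonical-color mentions and the number of noncanonical ones."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for prev, word in zip(tokens, tokens[1:]):
        if word == animal and prev in COLORS:
            counts[prev] += 1
    canonical = counts.get(CANONICAL[animal], 0)
    return canonical, sum(counts.values()) - canonical

# Toy corpus: the idiomatic "white elephant" outnumbers the canonical color.
corpus = ("a white elephant gift ... another white elephant ... "
          "the gray elephant at the zoo")
print(color_animal_mentions(corpus, "elephant"))  # → (1, 2)
```

Applied to a full corpus such as OpenSubtitles, counts of this kind yield, per animal, the proportion of color mentions that are canonical.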
Fig. 1. [Figure showing the corpus cooccurrence analysis described in the text; not reproduced here.]
On a theoretical level, the alternative account proposed by Kim et al. (1), whereby blind people primarily learn via inferences from ontological kind, runs into a circularity problem: knowledge about ontological kind itself has to be learned, and this is likely achieved via verbal input. Moreover, inferences from kind are limited in the specificity of the knowledge they can deliver. Hence, the most plausible scenario is that learning from verbal descriptions and learning via inferences are deeply intertwined, such that neither can operate without the other and neither can be said to take precedence in the acquisition of knowledge in the blind.
Data Availability
Data deposition: The code for the corpus analyses and additional information have been deposited in Zenodo (https://doi.org/10.5281/zenodo.3406143).
Acknowledgments
We thank the OpenSubtitles.org team for making their data available for research purposes. Funding for this work was provided by the Swedish Research Council Grant 2018-00245 (G.M.-M.).
References
1. J. S. Kim, G. V. Elli, M. Bedny, Knowledge of animal appearance among sighted and blind adults. Proc. Natl. Acad. Sci. U.S.A. 116, 11213–11222 (2019).
2. P. Lison, J. Tiedemann, “OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles” in Proceedings of the 10th Annual Conference on Language Resources and Evaluation, N. Calzolari et al., Eds. (European Language Resources Association, Paris, 2016), pp. 923–929.
3. H. P. Grice, “Logic and conversation” in Syntax and Semantics, Volume 3: Speech Acts, P. Cole, J. L. Morgan, Eds. (Academic Press, 1975), pp. 41–58.
Copyright
© 2019. Published under the PNAS license.
Submission history
Published online: October 15, 2019
Published in issue: October 29, 2019
Competing Interests
The authors declare no competing interest.