Whether the hearing brain hears it or the deaf brain sees it, it’s just the same

Marcin Szwed, Łukasz Bola, and Maria Zimmermann
PNAS August 1, 2017 114 (31) 8135-8137; published ahead of print July 21, 2017 https://doi.org/10.1073/pnas.1710492114
aDepartment of Psychology, Jagiellonian University, 30-060 Krakow, Poland (M.S., Ł.B., M.Z.)
bLaboratory of Brain Imaging, Neurobiology Center, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 02-093 Warsaw, Poland (Ł.B., M.Z.)

“Now that he’d remembered what he meant to tell her, he seemed to lose interest. She didn’t have to see his face to know this. It was in the air. It was in the pause that trailed from his remark of eight, ten, twelve seconds ago” (1). Lauren Hartke, Don DeLillo’s protagonist in The Body Artist (1), does not have to look at her husband to feel him drifting away. Words, and the pauses between them, carry a wealth of information about a speaker’s intentions and emotions. Words, however, can not only be heard but also seen, when they are signed in sign language. How does the brain go about extracting identity and emotional cues from sentences when they arrive via different senses? Are they processed in their corresponding sensory cortices: the speaker’s identity and emotional tone in the auditory cortex, and the signer’s identity and emotional tone in the visual cortex? Alternatively, do they converge rapidly onto one common brain area, irrespective of whether the conversation is heard or seen? If the latter is the case, where would that area be? Deaf signers can communicate through the visual channel with a dexterity that matches spoken communication between two speakers. This capacity offers a rare opportunity to explore fundamental questions about how different parts of the brain divide their labor. Now, in PNAS, Benetti et al. (2) bring important insights into this matter.

For many decades, a fundamental principle of brain organization was thought to be the division of labor between separate sensory systems (e.g., visual, auditory, tactile). Consequently, experience-dependent changes were thought to occur almost exclusively within the bounds of this division: Visual training would cause changes in the visual cortex, and so on (e.g., ref. 3). Against this view, the past 30 years of research have shown that atypical sensory experience can trigger changes that overcome this division. Such “cross-modal plasticity” is particularly well documented in the visual cortex of the blind, where it follows the rule of task-specific reorganization (4): The sensory input of a given region is switched, but its typical functional specialization is preserved. For example, separate ventral visual regions in the blind respond to tactile and auditory object recognition (5, 6), tactile and auditory reading (7, 8), and auditory perception of body shapes (9), and this division of labor corresponds to the typical organization of the visual cortex in the sighted (10). Until recently, however, it remained an open question whether such task-specific reorganization is unique to the visual cortex or whether it is a general principle applying to other cortical areas. True, task-specific reorganization of the auditory cortex has been demonstrated in deaf cats: Impressive experiments, some involving reversible inactivation of auditory cortex with cooling loops, have shown that a distinct auditory area supports peripheral visual localization and visual motion detection in deaf cats, and that the same region supports these functions in the auditory modality in hearing cats (11, 12). Human evidence, in contrast, has been relatively scarce (13).

In their article, Benetti et al. (2) study cross-modal plasticity in the system that supports recognition of a person’s identity: a skill that is critical in everyday social interactions. Typically, this goal is achieved by combining face and voice recognition, because both channels convey important information about a person’s individual characteristics. Specialized brain areas extract these social cues from faces and voices with ease and accuracy. Face processing is primarily supported by the fusiform face area, a ventral visual region described by Kanwisher et al. (14) 20 y ago, whereas voice processing is supported by the temporal voice area, identified in the auditory cortex by Belin et al. (15) 3 y later.

Benetti et al. (2) ask how the brain refashions itself when spoken words cannot be heard, as is the case in the deaf. Using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), they show that in these circumstances, the temporal voice area reorganizes and becomes selective for faces. Their results show a near-perfect overlap between the area activated by voice recognition in the hearing and the novel, additional face activation emerging in the auditory cortex of the deaf (Fig. 1A). Furthermore, they use a sophisticated fMRI-adaptation paradigm to show that the temporal voice area of the deaf responds more strongly when deaf individuals perceive different faces than when only one face is repeated several times. This finding constitutes powerful evidence that, indeed, part of the auditory cortex in the deaf supports face identity processing.

Fig. 1.

Task-specific auditory cortex reorganization in deaf humans. Cross-modal plasticity in the auditory cortex of congenitally deaf people overcomes the division between visual and auditory processing streams, as regions of high-level auditory cortex become recruited for visual face processing and visual rhythm perception. (A) fMRI results show peaks of activation for auditory voice recognition in the hearing (blue) and for visual face recognition in the deaf (yellow). Reproduced from ref. 2. (B) fMRI results also show peaks of activation for auditory rhythm perception in the hearing (blue) and visual rhythm perception in the deaf (yellow). Adapted from ref. 13.

To make sure that these activations to faces in the temporal voice area were not a result of acquiring sign language itself, the researchers studied a special control group: hearing subjects who are fluent in sign language. Just like the deaf subjects, this control group was proficient in Italian Sign Language. However, they did not exhibit the additional auditory cortex activations to faces seen only in the deaf, most likely because their temporal voice area was also exposed to regular spoken language. Benetti et al. (2) also took advantage of the superior temporal resolution offered by MEG and demonstrated that the face selectivity in the temporal voice area of deaf individuals emerges very fast, within the first 200 ms following stimulus onset, only milliseconds later than the activation in the main face recognition area, the fusiform face area (2). This finding is another indirect, yet potent, sign of the importance of the processing that occurs in the temporal voice area of the deaf.

Benetti et al. (2) propose that the temporal voice area of deaf individuals becomes incorporated into the face recognition system because face and voice processing share a common functional goal: recognition of a person’s identity. This proposition implies that following hearing loss, auditory areas switch their sensory modality but maintain a relation to their typical function. Following the same thread, another study recently published in PNAS, by Bola et al. (13), showed an analogous outcome. Using similar groups of deaf and hearing subjects, we explored how the auditory cortex of the deaf processes rhythmic stimuli. In an fMRI experiment, we asked deaf and hearing adults to discriminate between temporally complex sequences of flashes and beeps. Our results demonstrated that the posterior part of the high-level auditory cortex in the deaf was activated by rhythmic visual sequences but not by regular visual stimulation. Moreover, this was the same auditory region that was activated when hearing subjects perceived rhythmic sequences in the auditory modality (Fig. 1B). Our results thus demonstrated that in deaf humans, the auditory cortex preserves its typical specialization for rhythm processing despite switching to a different sensory modality.

Overall, these two studies demonstrate that the high-level auditory cortex in the deaf switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality (2, 13). These findings mean that task-specific reorganization is not limited to the visual cortex but might be a general principle that guides cortical plasticity in the brain. This possibility naturally opens new vistas for future research. For example, it seems natural to ask whether the mechanism of task-specific brain reorganization is limited to the very particular circumstances of prolonged sensory deprivation. Previous studies on the visual cortex have already shown that some forms of task-specific recruitment of the visual cortex are possible in nondeprived adults, either after several days of blindfolding (16) or after extensive tactile or auditory training (6, 17, 18). In their experiment, Benetti et al. (2) report a marginally significant trend in the fMRI adaptation effect for faces in hearing participants, suggesting that the temporal voice area may have a similar cross-modal potential for face discrimination in both hearing and deaf subjects. It remains to be explained to what extent this potential for multimodal change can be exploited in nondeprived adults engaging in complex human activities, and whether task-specific reorganization could reach beyond high-order cortices to primary cortices, which are classically considered more tightly bound to their specific sensory modality. Nevertheless, one can already suppose that some chapters in neuroscience textbooks will soon need to be amended.

Acknowledgments

The authors thank an anonymous advisor for help in finding the right opening sentences. M.S. is supported by National Science Centre Poland Grants 2015/19/B/HS6/01256 and 2016/21/B/HS6/03703; Marie Curie Career Integration Grant 618347; and funds from the Polish Ministry of Science and Higher Education for cofinancing of international projects, years 2013–2017. Ł.B. is supported by National Science Centre Poland Grant 2014/15/N/HS6/04184.

Footnotes

  • 1To whom correspondence should be addressed. Email: mfszwed@gmail.com.
  • Author contributions: M.S., Ł.B., and M.Z. wrote the paper.

  • The authors declare no conflict of interest.

  • See companion article on page E6437.

References

  1. DeLillo D (2011) The Body Artist (Picador, London), pp 9–10.
  2. Benetti S, et al. (2017) Functional selectivity for face processing in the temporal voice area of early deaf individuals. Proc Natl Acad Sci USA 114:E6437–E6446.
  3. Draganski B, et al. (2004) Neuroplasticity: Changes in grey matter induced by training. Nature 427:311–312.
  4. Amedi A, Hofstetter S, Maidenbaum S, Heimler B (2017) Task selectivity as a comprehensive principle for brain organization. Trends Cogn Sci 21:307–310.
  5. Amedi A, Malach R, Hendler T, Peled S, Zohary E (2001) Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci 4:324–330.
  6. Amedi A, et al. (2007) Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nat Neurosci 10:687–689.
  7. Reich L, Szwed M, Cohen L, Amedi A (2011) A ventral visual stream reading center independent of visual experience. Curr Biol 21:363–368.
  8. Striem-Amit E, Cohen L, Dehaene S, Amedi A (2012) Reading with sounds: Sensory substitution selectively activates the visual word form area in the blind. Neuron 76:640–652.
  9. Striem-Amit E, Amedi A (2014) Visual cortex extrastriate body-selective area activation in congenitally blind people “seeing” by using sounds. Curr Biol 24:687–692.
  10. Hirsch GV, Corinna MB, Merabet LB (2015) Using structural and functional brain imaging to uncover how the brain adapts to blindness. Ann Neurosci Psychol 5:2.
  11. Meredith MA, et al. (2011) Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proc Natl Acad Sci USA 108:8856–8861.
  12. Lomber SG, Meredith MA, Kral A (2010) Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci 13:1421–1427.
  13. Bola Ł, et al. (2017) Task-specific reorganization of the auditory cortex in deaf humans. Proc Natl Acad Sci USA 114:E600–E609.
  14. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: A module in human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311.
  15. Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B (2000) Voice-selective areas in human auditory cortex. Nature 403:309–312.
  16. Merabet LB, et al. (2008) Rapid and reversible recruitment of early visual cortex for touch. PLoS One 3:e3046.
  17. Siuda-Krzywicka K, et al. (2016) Massive cortical reorganization in sighted Braille readers. eLife 5:e10762.
  18. Kim JK, Zatorre RJ (2011) Tactile-auditory shape learning engages the lateral occipital complex. J Neurosci 31:7848–7856.