Brain Lang. 2017 Jan;164:77-105. doi: 10.1016/j.bandl.2016.10.004. Epub 2016 Nov 5.

The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception.

Author information

  • 1Experimental Psychology, University College London, United Kingdom. Electronic address:
  • 2Experimental Psychology, University College London, United Kingdom.
  • 3Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom.


Abstract

Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text-mining method to describe the relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech-production-related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires.


Keywords: Articulation; Complex network; Neuroimaging meta-analysis; Phoneme; Speech production
