Neuropsychologia. 2019 Jul 2;132:107132. doi: 10.1016/j.neuropsychologia.2019.107132. [Epub ahead of print]

Speech-accompanying gestures are not processed by the language-processing mechanisms.

Author information

1. Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Carleton University, Ottawa, ON K1S 5B6, Canada. Electronic address: olessiaj@mit.edu.
2. Princeton University, Princeton, NJ 08544, USA.
3. Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
4. University of Chicago, Chicago, IL 60637, USA.
5. University of Chicago, Chicago, IL 60637, USA. Electronic address: sgm@uchicago.edu.
6. Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, Cambridge, MA 02139, USA; Massachusetts General Hospital, Boston, MA 02114, USA. Electronic address: evelina9@mit.edu.

Abstract

Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, the studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañón and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly, and to a similar degree regardless of whether the video contained gestures or grooming movements. In contrast, and critically, responses in the language regions were low (at or slightly above the fixation baseline) when silent videos were processed, again regardless of whether they contained gestures or grooming movements. Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions.
In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.

KEYWORDS:

Co-speech gestures; Communication; Functional specificity; Language network; Multiple demand (MD) network
