Hum Brain Mapp. 2011 Oct;32(10):1660-76. doi: 10.1002/hbm.21139. Epub 2010 Sep 17.

Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays.

Author information

Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California, USA. lbernste@gwu.edu

Abstract

The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0 T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech than for nonspeech and control stimuli. Group analyses showed distinct activation to speech, independent of display medium, in the posterior superior temporal sulcus and the adjacent middle temporal gyrus. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but distinct activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA) and that it can be localized with the combination of stimuli used in this study.

PMID: 20853377
PMCID: PMC3120928
DOI: 10.1002/hbm.21139
[Indexed for MEDLINE]
