Neuroimage. 2018 May 15;172:597-607. doi: 10.1016/j.neuroimage.2018.02.006. Epub 2018 Feb 7.

Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

Author information

1. Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany. Electronic address: andreas.schindler@tuebingen.mpg.de.
2. Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany. Electronic address: andreas.bartels@tuebingen.mpg.de.

Abstract

Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to the conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and allowed participants to move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire the neural signal related to head motion after the observer's head had been stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked by an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing the congruent with the incongruent condition, we found evidence consistent with multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in the precuneus (Pc) showed differential responses to the same contrast. These results demonstrate, for the first time, neural multimodal interactions between precisely matched congruent and incongruent visual and non-visual cues during physical head movement in the human brain. This methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.
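The core of the paradigm described above is a real-time display update that either counter-rotates the scene with the tracked head yaw (congruent, simulating a stable world) or adds an extra lateral component (incongruent). A minimal sketch of that logic is below; the function name, the gain parameter, and the simple counter-rotation scheme are illustrative assumptions, not the authors' actual stimulus code.

```python
def display_shift(head_yaw_deg: float, congruent: bool,
                  incongruent_gain: float = 0.5) -> float:
    """Horizontal shift (degrees) applied to the head-fixed display.

    Congruent: counter-rotate the scene by the full tracked head yaw,
    so the simulated world appears stable during physical head rotation.
    Incongruent: add an arbitrary lateral component (here a fraction of
    the head yaw), so the visual motion no longer matches the head motion.
    All names and the gain value are hypothetical, for illustration only.
    """
    stable_world = -head_yaw_deg  # full counter-rotation keeps the world stable
    if congruent:
        return stable_world
    return stable_world + incongruent_gain * head_yaw_deg
```

Because both conditions apply the same counter-rotation term, they remain closely matched in overall visual motion; only the added lateral component distinguishes them, mirroring the matched-stimulus design of the study.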

