J Neurophysiol. 2004 Jul;92(1):10-9.

Spatial transformations for eye-hand coordination.

Author information: Canadian Institutes of Health Research Group for Action and Perception, York Centre for Vision Research, Department of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada. jdc@yorku.ca

Abstract

Eye-hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye-hand visuomotor system incorporates closed-loop visual feedback, but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye-head-shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
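The abstract models the reach transformation as a vector displacement command (target minus hand, in gaze-centered coordinates) rotated by eye and head orientation into a shoulder-centered frame. A minimal 2-D sketch of that idea follows; the function names, the restriction to planar rotations, and the example angles are illustrative assumptions, not taken from the paper:

```python
import math

def rotate(v, angle_rad):
    """Rotate a 2-D vector counterclockwise by angle_rad (standard rotation matrix)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def gaze_to_shoulder(motor_error_gaze, eye_angle_deg, head_angle_deg):
    """Transform a gaze-centered motor error (target - hand) into a
    shoulder-centered command by rotating it through eye-in-head and
    head-on-shoulder orientation, in that order (illustrative sketch)."""
    v = rotate(motor_error_gaze, math.radians(eye_angle_deg))   # gaze -> head frame
    return rotate(v, math.radians(head_angle_deg))              # head -> shoulder frame

# With eyes and head pointing straight ahead, the frames coincide
# and the command passes through unchanged:
straight = gaze_to_shoulder((10.0, 0.0), 0.0, 0.0)
# With the eyes rotated 90 degrees in the head, the same gaze-centered
# displacement maps to an orthogonal direction at the shoulder:
turned = gaze_to_shoulder((10.0, 0.0), 90.0, 0.0)
print(straight, turned)
```

The point of the sketch is the one made in the abstract: the same retinal motor error demands different shoulder-centered commands depending on eye and head orientation, so the displacement vector cannot be used independently of its frame of origin.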

PMID: 15212434
DOI: 10.1152/jn.00117.2004
[Indexed for MEDLINE]