Vision Res. 2003 Feb;43(3):333-46.

Oculomotor strategies for the direction of gaze tested with a real-world activity.

Author information: Wilmer Eye Institute, The Johns Hopkins University School of Medicine, Baltimore, MD 21205-2020, USA. kathy@lions.med.jhu.edu

Abstract

Laboratory-based models of oculomotor strategy that differ in the amount and type of top-down information were evaluated against a baseline case of random scanning for predicting the gaze patterns of subjects performing a real-world activity--walking to a target. Images of four subjects' eyes and field of view were simultaneously recorded as they performed the mobility task. Offline analyses generated movies of the eye on scene and a categorization scheme was used to classify the locations of the fixations. Frames from each subject's eye-on-scene movie served as input to the models, and the location of each model's predicted fixations was classified using the same categorization scheme. The results showed that models with no top-down information (visual salience model) or with only coarse feature information performed no better than a random scanner; the models' ordered fixation locations (gaze pattern) matched less than a quarter of the subjects' gaze patterns. A model that used only geographic information outperformed the random scanner and matched approximately a third of the gaze patterns. The best performance was obtained from an oculomotor strategy that used both coarse feature and geographic information, matching nearly half the gaze patterns (48%). Thus, a model that uses top-down information about a target's coarse features and general vicinity does a fairly good job predicting fixation behavior, but it does not fully specify the gaze pattern of a subject walking to a target. Additional information is required, perhaps in the form of finer feature information or knowledge of a task's procedure.

PMID: 12535991
DOI: 10.1016/s0042-6989(02)00498-4
[Indexed for MEDLINE]
