Exp Brain Res. 2003 Mar;149(1):48-61. Epub 2002 Dec 19.

Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision.

Author information

Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA.


Participants standing at an origin learned the distance and azimuth of target objects that were specified by 3-D sound, spatial language, or vision. We tested whether the ensuing target representations functioned equivalently across modalities for purposes of spatial updating. In experiment 1, participants localized targets by pointing to each and verbalizing its distance, both directly from the origin and at an indirect waypoint. In experiment 2, participants localized targets by walking to each directly from the origin and via an indirect waypoint. Spatial updating bias was estimated by the spatial-coordinate difference between indirect and direct localization; noise from updating was estimated by the difference in variability of localization. Learning rate and noise favored vision over the two auditory modalities. For all modalities, bias during updating tended to move targets forward, comparably so for three and five targets and for forward and rightward indirect-walking directions. Spatial language produced additional updating bias and noise from updating. Although spatial representations formed from language afford updating, they do not function entirely equivalently to those from intrinsically spatial modalities.

[Indexed for MEDLINE]
