Cognition. 2019 Jan 23;185:91-120. doi: 10.1016/j.cognition.2018.09.006. [Epub ahead of print]

Does direction matter? Linguistic asymmetries reflected in visual attention.

Author information

1. CITEC (Cognitive Interaction Technology Excellence Cluster), Bielefeld University, Inspiration 1, 33619 Bielefeld, Germany. Electronic address: tkluth@cit-ec.uni-bielefeld.de.
2. CITEC (Cognitive Interaction Technology Excellence Cluster), Bielefeld University, Inspiration 1, 33619 Bielefeld, Germany. Electronic address: mburigo@cit-ec.uni-bielefeld.de.
3. Bremen Spatial Cognition Center, University of Bremen, Enrique-Schmidt-Str. 5, 28359 Bremen, Germany. Electronic address: schulth@uni-bremen.de.
4. Berlin School of Mind and Brain, Einstein Center for Neuroscience Berlin, and Department of German Studies and Linguistics, Humboldt University, Unter den Linden 6, 10099 Berlin, Germany. Electronic address: pia.knoeferle@hu-berlin.de.

Abstract

Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical in that they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that this linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language. More specifically, it was assumed that people shift their attention from the reference object to the located object. However, recent theoretical and empirical findings challenge the directionality of this attentional shift. In this article, we present the results of an empirical study based on predictions generated by computational cognitive models implementing different directionalities of attention, and we thoroughly analyze the computational models. While our results do not favor any of the implemented directionalities of attention, we found that two previously unidentified sources of geometric information affect spatial language understanding. We provide modifications to the computational models that substantially improve their performance on empirical data.

KEYWORDS:

Cognitive modeling; Language and vision; Spatial language; Spatial relations; Visual attention

PMID: 30682714
DOI: 10.1016/j.cognition.2018.09.006