eLife. 2017 Feb 22;6:e22341. doi: 10.7554/eLife.22341.

Bottom-up and top-down computations in word- and face-selective cortex.

Author information

1. Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, United States.
2. Institute for Learning and Brain Sciences, University of Washington, Seattle, United States.
3. Department of Speech and Hearing Sciences, University of Washington, Seattle, United States.

Abstract

The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
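The model described in the abstract has two parts: a bottom-up term (how well a stimulus's low-level properties match a category template) and a top-down multiplicative scaling driven by IPS engagement. A minimal sketch of that idea, assuming a cosine-similarity template match and a scalar gain term (both illustrative choices, not the authors' actual implementation):

```python
import numpy as np

def vtc_response(stimulus, template, ips_gain=1.0):
    """Bottom-up template match (cosine similarity), scaled by a
    top-down gain term standing in for task-dependent IPS engagement."""
    match = np.dot(stimulus, template) / (
        np.linalg.norm(stimulus) * np.linalg.norm(template)
    )
    return ips_gain * match

rng = np.random.default_rng(0)
template = rng.random(100)                    # hypothetical category template
word_like = template + 0.1 * rng.random(100)  # stimulus resembling the template
unrelated = rng.random(100)                   # stimulus unrelated to the template

# Bottom-up: a template-matching stimulus drives a larger response
r_word = vtc_response(word_like, template)
r_other = vtc_response(unrelated, template)

# Top-down: greater task demands scale the same bottom-up signal
r_task = vtc_response(word_like, template, ips_gain=2.0)
```

The key property the abstract describes is that the top-down factor scales, rather than replaces, the stimulus-driven representation, so the category selectivity of the bottom-up match is preserved under task modulation.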

KEYWORDS:

computational biology; computational model; fMRI; human; neuroscience; systems biology; visual cortex

PMID: 28226243
PMCID: PMC5358981
DOI: 10.7554/eLife.22341
[Indexed for MEDLINE] Free PMC article.
