Cereb Cortex. 2017 Mar 1;27(3):2276-2288. doi: 10.1093/cercor/bhw077.

Human-Object Interactions Are More than the Sum of Their Parts.

Author information

Department of Computer Science, Stanford University, Stanford, CA 94305, USA.
Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.


Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2), are well predicted by a simple linear combination of the responses to object and pose information. Other regions, however, especially pSTS, exhibit representations of human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.


KEYWORDS: MVPA; action perception; cross-decoding; fMRI; scene perception

[Indexed for MEDLINE]
Free PMC Article
