Neuroimage. 2014 Feb 15;87:96-110. doi: 10.1016/j.neuroimage.2013.10.067. Epub 2013 Nov 15.

On the interpretation of weight vectors of linear models in multivariate neuroimaging.

Author information

1. Fachgebiet Maschinelles Lernen, Technische Universität Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany. Electronic address: stefan.haufe@tu-berlin.de.
2. Zalando GmbH, Berlin, Germany; Fachgebiet Maschinelles Lernen, Technische Universität Berlin, Germany.
3. Bernstein Center for Computational Neuroscience, Charité - Universitätsmedizin, Berlin, Germany; Berlin Center for Advanced Neuroimaging, Charité - Universitätsmedizin, Berlin, Germany; Fachgebiet Neurotechnologie, Technische Universität Berlin, Germany.
4. Fachgebiet Maschinelles Lernen, Technische Universität Berlin, Germany.
5. Bernstein Center for Computational Neuroscience, Charité - Universitätsmedizin, Berlin, Germany; Berlin Center for Advanced Neuroimaging, Charité - Universitätsmedizin, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany.
6. Fachgebiet Neurotechnologie, Technische Universität Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany.
7. Korea University, Seoul, Republic of Korea; Fachgebiet Maschinelles Lernen, Technische Universität Berlin, Germany. Electronic address: felix.biessmann@tu-berlin.de.

Abstract

The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. 
As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness of an often encountered problem and provides a theoretical basis for conducting more interpretable multivariate neuroimaging analyses.
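The transformation proposed for the linear case can be sketched in NumPy as follows. Given the extraction filters W of a linear backward model (so that the estimated sources are ŝ = XW), the corresponding forward-model activation patterns are A = Σ_X W Σ_ŝ⁻¹, where Σ_X and Σ_ŝ are the covariances of the data and of the extracted sources. This is a minimal illustration with synthetic data; the variable names and toy dimensions are assumptions, not taken from the paper.

```python
import numpy as np

# Toy setup (illustrative only): 500 samples, 8 channels, 2 extracted components.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))   # measured data (samples x channels)
W = rng.standard_normal((8, 2))     # backward-model extraction filters

S_hat = X @ W                        # extracted source estimates, s_hat = X W

Sigma_X = np.cov(X, rowvar=False)    # data covariance (channels x channels)
Sigma_S = np.cov(S_hat, rowvar=False)  # source covariance (components x components)

# Backward-to-forward transformation: activation patterns
# A = Sigma_X @ W @ inv(Sigma_S); pinv guards against a singular source covariance.
A = Sigma_X @ W @ np.linalg.pinv(Sigma_S)
```

The columns of A are the activation patterns that may be interpreted neurophysiologically, whereas the columns of W (the filters) may carry significant weights on channels unrelated to the brain process under study.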

KEYWORDS:

Activation patterns; Decoding; EEG; Encoding; Extraction filters; Forward/backward models; Generative/discriminative models; Interpretability; Multivariate; Neuroimaging; Regularization; Sparsity; Univariate; fMRI

[Indexed for MEDLINE]