On the interpretation of weight vectors of linear models in multivariate neuroimaging

Cited by: 799
Authors
Haufe, Stefan [1 ,2 ]
Meinecke, Frank [1 ,3 ]
Goergen, Kai [4 ,5 ,6 ]
Daehne, Sven [1 ]
Haynes, John-Dylan [2 ,4 ,5 ]
Blankertz, Benjamin [2 ,6 ]
Biessmann, Felix [1 ,7 ]
Affiliations
[1] Tech Univ Berlin, Fachgebiet Maschinelles Lernen, Berlin, Germany
[2] Bernstein Focus Neurotechnol, Berlin, Germany
[3] Zalando GmbH, Berlin, Germany
[4] Charite, Bernstein Ctr Computat Neurosci, D-13353 Berlin, Germany
[5] Charite, Berlin Ctr Adv Neuroimaging, D-13353 Berlin, Germany
[6] Tech Univ Berlin, Fachgebiet Neurotechnol, Berlin, Germany
[7] Korea Univ, Seoul, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Neuroimaging; Multivariate; Univariate; fMRI; EEG; Forward/backward models; Generative/discriminative models; Encoding; Decoding; Activation patterns; Extraction filters; Interpretability; Regularization; Sparsity; HEMODYNAMIC SIGNALS; NEURAL ACTIVITY; EEG; CONNECTIVITY; LOCALIZATION; OSCILLATIONS; SELECTION;
DOI
10.1016/j.neuroimage.2013.10.067
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue, it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data-generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels whose activity is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels whose activity is statistically independent of the brain process under study.
As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses. (C) 2013 The Authors. Published by Elsevier Inc. All rights reserved.
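The kind of transformation the abstract describes (turning a linear backward model's extraction filters W into forward-model activation patterns A via the data and source covariances, A = Σ_x W Σ_ŝ⁻¹) can be sketched in a few lines of NumPy. The two-channel simulation and the filter W = [1, −1]ᵀ below are invented for illustration only; they reproduce the abstract's key point that a backward model may place a large weight on a channel that is statistically independent of the signal of interest, whereas the derived pattern highlights only the signal-carrying channel.

```python
import numpy as np

# Sketch of a filter-to-pattern transformation: given data X (samples x
# channels) and a linear backward model s_hat = X @ W (extraction filters W),
# form the forward-model activation pattern A = cov(X) @ W @ inv(cov(s_hat)).

rng = np.random.default_rng(0)

# Simulate: a signal source plus an independent distractor.
n = 10_000
s = rng.standard_normal(n)          # signal of interest
d = rng.standard_normal(n)          # distractor, independent of s
X = np.column_stack([s + d, d])     # channel 1 carries signal + distractor,
                                    # channel 2 carries the distractor only

# A backward model that recovers s exactly: note the nonzero weight
# on channel 2, even though that channel contains no signal at all.
W = np.array([[1.0], [-1.0]])
s_hat = X @ W

# Transform the extraction filter into an activation pattern.
Sigma_X = np.cov(X, rowvar=False)                       # (2, 2)
Sigma_s = np.cov(s_hat, rowvar=False).reshape(1, 1)     # scalar -> (1, 1)
A = Sigma_X @ W @ np.linalg.inv(Sigma_s)

print(A.ravel())  # pattern is close to [1, 0]: only channel 1 reflects s
```

In expectation, cov(X) = [[2, 1], [1, 1]] here, so Σ_x W = [1, 0]ᵀ and var(ŝ) = 1: the filter's weight on the signal-free channel vanishes from the pattern, which is exactly why patterns, not filters, support statements about where the signal originates.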
Pages: 96-110
Page count: 15