Toward a unified framework for interpreting machine-learning models in neuroimaging

Cited by: 0
Authors
Lada Kohoutová
Juyeon Heo
Sungmin Cha
Sungwoo Lee
Taesup Moon
Tor D. Wager
Choong-Wan Woo
Affiliations
[1] Institute for Basic Science, Center for Neuroscience Imaging Research
[2] Sungkyunkwan University, Department of Biomedical Engineering
[3] Sungkyunkwan University, Department of Electrical and Computer Engineering
[4] Dartmouth College, Department of Psychological and Brain Sciences
[5] University of Colorado Boulder, Department of Psychology and Neuroscience
[6] University of Colorado Boulder, Institute of Cognitive Science
Source
Nature Protocols | 2020, Vol. 15
Abstract
Machine learning is a powerful tool for creating computational models relating brain function to behavior, and its use is becoming widespread in neuroscience. However, these models are complex and often hard to interpret, making it difficult to evaluate their neuroscientific validity and contribution to understanding the brain. For neuroimaging-based machine-learning models to be interpretable, they should (i) be comprehensible to humans, (ii) provide useful information about what mental or behavioral constructs are represented in particular brain pathways or regions, and (iii) demonstrate that they are based on relevant neurobiological signal, not artifacts or confounds. In this protocol, we introduce a unified framework that consists of model-, feature- and biology-level assessments to provide complementary results that support the understanding of how and why a model works. Although the framework can be applied to different types of models and data, this protocol provides practical tools and examples of selected analysis methods for a functional MRI dataset and multivariate pattern-based predictive models. A user of the protocol should be familiar with basic programming in MATLAB or Python. This protocol will help build more interpretable neuroimaging-based machine-learning models, contributing to the cumulative understanding of brain mechanisms and brain health. Although the analyses provided here constitute a limited set of tests and take a few hours to days to complete, depending on the size of data and available computational resources, we envision the process of annotating and interpreting models as an open-ended process, involving collaborative efforts across multiple studies and laboratories.
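The abstract notes that the protocol provides practical analysis tools for multivariate pattern-based predictive models and assumes basic MATLAB or Python programming. As an illustration only (not the authors' released code), the Python sketch below uses NumPy and scikit-learn on synthetic data to show what two of the described assessment levels could look like: a model-level assessment via cross-validated prediction performance, and a feature-level assessment via a bootstrap test on model weights. The ridge estimator, the synthetic data, the number of bootstrap samples and the |z| > 3 threshold are all assumptions made for this example.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for an fMRI dataset (hypothetical, for illustration only)
# X: sample-by-voxel activation matrix; y: behavioral/outcome variable.
n_samples, n_voxels = 200, 500
true_w = np.zeros(n_voxels)
true_w[:20] = rng.normal(0, 1, 20)            # only 20 "informative" voxels
X = rng.normal(0, 1, (n_samples, n_voxels))
y = X @ true_w + rng.normal(0, 1, n_samples)

model = Ridge(alpha=10.0)

# Model-level assessment: cross-validated prediction performance
cv = KFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)
print("Cross-validated prediction-outcome r:", np.corrcoef(y, y_pred)[0, 1])

# Feature-level assessment: bootstrap distribution of voxel weights
n_boot = 200                                   # increase in practice
boot_w = np.empty((n_boot, n_voxels))
for b in range(n_boot):
    idx = rng.integers(0, n_samples, n_samples)  # resample with replacement
    boot_w[b] = Ridge(alpha=10.0).fit(X[idx], y[idx]).coef_

# Voxels whose weights differ reliably from zero (illustrative z threshold)
z = boot_w.mean(axis=0) / boot_w.std(axis=0, ddof=1)
print("Voxels with |z| > 3:", np.sum(np.abs(z) > 3))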
Pages: 1399–1435 (36 pages)
Related papers (50 in total)
  • [1] Toward a unified framework for interpreting machine-learning models in neuroimaging
    Kohoutova, Lada
    Heo, Juyeon
    Cha, Sungmin
    Lee, Sungwoo
    Moon, Taesup
    Wager, Tor D.
    Woo, Choong-Wan
    NATURE PROTOCOLS, 2020, 15 (04) : 1399 - 1435
  • [2] Machine-OIF-Action: a unified framework for developing and interpreting machine-learning models for chemosensory research
    Gupta, Anku
    Choudhary, Mohit
    Mohanty, Sanjay Kumar
    Mittal, Aayushi
    Gupta, Krishan
    Arya, Aditya
    Kumar, Suvendu
    Katyayan, Nikhil
    Dixit, Nilesh Kumar
    Kalra, Siddhant
    Goel, Manshi
    Sahni, Megha
    Singhal, Vrinda
    Mishra, Tripti
    Sengupta, Debarka
    Ahuja, Gaurav
    BIOINFORMATICS, 2021, 37 (12) : 1769 - 1771
  • [3] Toward a Unified Framework for Interpreting the Phase Rule
    Ravi, R.
    INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH, 2012, 51 (42) : 13853 - 13861
  • [4] A machine-learning framework for peridynamic material models with physical constraints
    Xu, Xiao
    D'Elia, Marta
    Foster, John T.
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2021, 386
  • [5] Deep Forest as a framework for a new class of machine-learning models
    Utkin, Lev V.
    Meldo, Anna A.
    Konstantinov, Andrei V.
    NATIONAL SCIENCE REVIEW, 2019, 6 (02) : 186 - 187
  • [6] Interpreting and Stabilizing Machine-Learning Parametrizations of Convection
    Brenowitz, Noah D.
    Beucler, Tom
    Pritchard, Michael
    Bretherton, Christopher S.
    JOURNAL OF THE ATMOSPHERIC SCIENCES, 2020, 77 (12) : 4357 - 4375
  • [7] Interpreting Deep Learning Models for Multimodal Neuroimaging
    Mueller, K. R.
    Hofmann, S. M.
    2023 11TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI, 2023,
  • [8] Certified Machine-Learning Models
    Damiani, Ernesto
    Ardagna, Claudio A.
    SOFSEM 2020: THEORY AND PRACTICE OF COMPUTER SCIENCE, 2020, 12011 : 3 - 15
  • [9] Defining "Better Prediction" by Machine-Learning Models Toward Clinical Application
    Hamaya, Rikuta
    Sahashi, Yuki
    Kagiyama, Nobuyuki
    JACC-CARDIOVASCULAR IMAGING, 2022, 15 (03) : 550 - 550