Structured Neural Decoding With Multitask Transfer Learning of Deep Neural Network Representations

Cited by: 20
Authors
Du, Changde [1 ,2 ,3 ]
Du, Changying [4 ]
Huang, Lijie [1 ]
Wang, Haibao [1 ,2 ]
He, Huiguang [1 ,2 ,5 ]
Affiliations
[1] Chinese Acad Sci, Res Ctr Brain Inspired Intelligence, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Huawei Cloud BU EI Innovat Lab, Beijing 100085, Peoples R China
[4] Huawei Noahs Ark Lab, Beijing 100085, Peoples R China
[5] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Decoding; Image reconstruction; Functional magnetic resonance imaging; Visualization; Task analysis; Brain; Correlation; Deep neural network (DNN); functional magnetic resonance imaging (fMRI); image reconstruction; multioutput regression; neural decoding; NATURAL IMAGES; BRAIN; RECONSTRUCTION; FACES;
DOI
10.1109/TNNLS.2020.3028167
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The reconstruction of visual information from human brain activity is an important research topic in brain decoding. Existing methods ignore the structural information underlying brain activity and visual features, which severely limits their performance and interpretability. Here, we propose a hierarchically structured neural decoding framework that uses multitask transfer learning of deep neural network (DNN) representations together with a matrix-variate Gaussian prior. Our framework consists of two stages, Voxel2Unit and Unit2Pixel. In Voxel2Unit, we decode the functional magnetic resonance imaging (fMRI) data into the intermediate features of a pretrained convolutional neural network (CNN). In Unit2Pixel, we invert the predicted CNN features back into visual images. The matrix-variate Gaussian prior allows us to take into account the structure between feature dimensions and between regression tasks, which improves decoding effectiveness and interpretability; this contrasts with existing single-output regression models, which usually ignore such structure. We conduct extensive experiments on two real-world fMRI data sets, and the results show that our method predicts CNN features more accurately and reconstructs the perceived natural images and faces with higher quality.
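The abstract does not spell out how the prior is parameterized. As a hedged point of reference only, in standard matrix-normal notation that is assumed here rather than taken from the paper (symbols W, M, U, V, d, q are illustrative), a matrix-variate Gaussian prior on a weight matrix W of size d-by-q that maps d voxel/feature dimensions to q CNN feature units, with row covariance U encoding structure between feature dimensions and column covariance V encoding structure between regression tasks, has the density

% LaTeX sketch of a standard matrix-normal prior (assumed notation, not the paper's exact formulation)
p(\mathbf{W} \mid \mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\!\left(-\tfrac{1}{2}\,\operatorname{tr}\!\left[\mathbf{V}^{-1}(\mathbf{W}-\mathbf{M})^{\top}\mathbf{U}^{-1}(\mathbf{W}-\mathbf{M})\right]\right)}{(2\pi)^{dq/2}\,\lvert\mathbf{V}\rvert^{d/2}\,\lvert\mathbf{U}\rvert^{q/2}}.

With M = 0, the negative log-prior adds a penalty proportional to tr[V^{-1} W^T U^{-1} W] to the MAP regression objective; when U and V are both identity matrices, this collapses to the ordinary Frobenius-norm (ridge) penalty, i.e., independent single-output regressions with no shared structure, which is the kind of baseline the abstract contrasts against.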
Pages: 600-614
Number of pages: 15
Related Papers (50 in total; first 10 shown)
  • [1] Svanera, Michele; Savardi, Mattia; Benini, Sergio; Signoroni, Alberto; Raz, Gal; Hendler, Talma; Muckli, Lars; Goebel, Rainer; Valente, Giancarlo. Transfer learning of deep neural network representations for fMRI decoding. Journal of Neuroscience Methods, 2019, 328.
  • [2] Zhang, Ruilong; Zong, Qun; Dou, Liqian; Zhao, Xinyi; Tang, Yifan; Li, Zhiyu. Hybrid deep neural network using transfer learning for EEG motor imagery decoding. Biomedical Signal Processing and Control, 2021, 63.
  • [3] Su, Fang; Shang, Hai-Yang; Wang, Jing-Yan. Low-Rank Deep Convolutional Neural Network for Multitask Learning. Computational Intelligence and Neuroscience, 2019, 2019.
  • [4] Talreja, Veeru; Soleymani, Sobhan; Valenti, Matthew C.; Nasrabadi, Nasser M. Learning to Authenticate with Deep Multibiometric Hashing and Neural Network Decoding. ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
  • [5] Yang, Jihai; Xiong, Wei; Li, Shijun; Xu, Chang. Learning structured and non-redundant representations with deep neural networks. Pattern Recognition, 2019, 86: 224-235.
  • [6] Chang, Shuo; Huang, Sai; Zhang, Ruiyun; Feng, Zhiyong; Liu, Liang. Multitask-Learning-Based Deep Neural Network for Automatic Modulation Classification. IEEE Internet of Things Journal, 2022, 9(3): 2192-2206.
  • [7] Bianco, Simone; Mazzini, Davide; Napoletano, Paolo; Schettini, Raimondo. Multitask painting categorization by deep multibranch neural network. Expert Systems with Applications, 2019, 135: 90-101.
  • [8] Liu, Jiaming; Wang, Yali; Qiao, Yu. Sparse Deep Transfer Learning for Convolutional Neural Network. Thirty-First AAAI Conference on Artificial Intelligence, 2017: 2245-2251.
  • [9] Kim, Dowan; Lim, Woohyun; Hong, Minye; Kim, Hyeoncheol. The Structure of Deep Neural Network for Interpretable Transfer Learning. 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), 2019: 181-184.
  • [10] Lu, Changsheng; Gu, Chaochen; Wu, Kaijie; Xia, Siyu; Wang, Haotian; Guan, Xinping. Deep transfer neural network using hybrid representations of domain discrepancy. Neurocomputing, 2020, 409: 60-73.