Multi-head Attention-Based Masked Sequence Model for Mapping Functional Brain Networks

Cited by: 3
Authors
He, Mengshen [1 ]
Hou, Xiangyu [1 ]
Wang, Zhenwei [1 ]
Kang, Zili [1 ]
Zhang, Xin [2 ]
Qiang, Ning [1 ]
Ge, Bao [1 ]
Affiliations
[1] Shaanxi Normal Univ, Sch Phys & Informat Technol, Xian, Peoples R China
[2] Northwestern Polytech Univ, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Masked sequence modeling; Multi-head attention; Functional networks; ARCHITECTURE; FMRI;
DOI
10.1007/978-3-031-16431-6_28
CLC number
TP39 [Computer Applications];
Subject classification codes
081203; 0835;
Abstract
It has been of great interest in the neuroimaging community to discover functional brain networks (FBNs) based on task functional magnetic resonance imaging (tfMRI). A variety of methods, such as recurrent neural networks (RNNs) and autoencoders, have been used to model tfMRI sequences, but these models are not designed around the characteristics of tfMRI sequences: the same signal value at different time points in an fMRI time series may represent different states and meanings. Inspired by cloze learning methods and the human ability to judge polysemous words from context, we propose a self-supervised Multi-head Attention-based Masked Sequence Model (MAMSM), analogous to the way the BERT model uses Masked Language Modeling (MLM) and multi-head attention to learn the different meanings of the same word in different sentences. MAMSM masks and encodes tfMRI time series, uses multi-head attention to compute the different meanings corresponding to the same signal value in an fMRI sequence, and obtains context information through MSM pre-training. Furthermore, this work defines a new loss function to extract FBNs according to the task design information of the tfMRI data. The model has been applied to the Human Connectome Project (HCP) task fMRI dataset and achieves state-of-the-art performance on brain temporal dynamics: the Pearson correlation coefficient between the learned features and the task design curves exceeded 0.95, and the model extracts additional meaningful networks beyond the known task-related brain networks.
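The BERT-style scheme the abstract describes (mask part of a time series, then use multi-head self-attention so that identical signal values at different time points receive different, context-dependent representations) can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' implementation: the sequence length, embedding dimension, head count, 15% mask ratio, and random projection matrices are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): BERT-style masked sequence
# modeling of an fMRI-like time series with multi-head self-attention.
import numpy as np

rng = np.random.default_rng(0)

def multi_head_self_attention(x, n_heads, w_q, w_k, w_v):
    """x: (T, D) sequence; w_*: (D, D) projections; returns (T, D)."""
    T, D = x.shape
    d_head = D // n_heads
    # Project and split into heads: (H, T, d_head).
    q = (x @ w_q).reshape(T, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(T, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(T, n_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (H, T, T).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ v                                # (H, T, d_head)
    return out.transpose(1, 0, 2).reshape(T, D)   # merge heads -> (T, D)

T, D, n_heads = 100, 16, 4            # time points, embed dim, heads (assumed)
signal = rng.standard_normal((T, D))  # stand-in for an embedded tfMRI series

# Cloze-style masking: zero out ~15% of time points, as in BERT's MLM.
mask_ratio = 0.15
masked_idx = rng.choice(T, size=int(T * mask_ratio), replace=False)
masked = signal.copy()
masked[masked_idx] = 0.0

w_q, w_k, w_v = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
recon = multi_head_self_attention(masked, n_heads, w_q, w_k, w_v)

# A pre-training objective would minimize reconstruction error at the
# masked positions, forcing each position to be inferred from context.
loss = np.mean((recon[masked_idx] - signal[masked_idx]) ** 2)
print(recon.shape)
```

Because attention weights depend on the whole sequence, two time points carrying the same raw value generally receive different output vectors, which is the "polysemy" property the paper exploits; the paper's actual loss additionally incorporates the HCP task design curves, which this sketch omits.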
Pages: 295-304 (10 pages)
Related Papers
50 records
  • [1] Multi-head attention-based masked sequence model for mapping functional brain networks
    He, Mengshen
    Hou, Xiangyu
    Ge, Enjie
    Wang, Zhenwei
    Kang, Zili
    Qiang, Ning
    Zhang, Xin
    Ge, Bao
    [J]. FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [2] Modeling Functional Brain Networks with Multi-Head Attention-based Region-Enhancement for ADHD Classification
    Cao, Chunhong
    Fu, Huawei
    Li, Gai
    Wang, Mengyang
    Gao, Xieping
    [J]. PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 362 - 369
  • [3] Multi-Head Attention-Based Spectrum Sensing for Radio
    Devarakonda, B. V. Ravisankar
    Nandanavam, Venkateswararao
    [J]. INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2023, 14 (02) : 135 - 143
  • [4] Self Multi-Head Attention-based Convolutional Neural Networks for fake news detection
    Fang, Yong
    Gao, Jian
    Huang, Cheng
    Peng, Hua
    Wu, Runpu
    [J]. PLOS ONE, 2019, 14 (09):
  • [5] Combining Multi-Head Attention and Sparse Multi-Head Attention Networks for Session-Based Recommendation
    Zhao, Zhiwei
    Wang, Xiaoye
    Xiao, Yingyuan
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [6] A multi-head attention-based transformer model for traffic flow forecasting with a comparative analysis to recurrent neural networks
    Reza, Selim
    Ferreira, Marta Campos
    Machado, J. J. M.
    Tavares, Joao Manuel R. S.
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2022, 202
  • [7] Multi-head attention-based model for reconstructing continuous missing time series data
    Wu, Huafeng
    Zhang, Yuxuan
    Liang, Linian
    Mei, Xiaojun
    Han, Dezhi
    Han, Bing
    Weng, Tien-Hsiung
    Li, Kuan-Ching
    [J]. JOURNAL OF SUPERCOMPUTING, 2023, 79 (18): : 20684 - 20711
  • [8] Attention-based fusion of multiple GraphHeat networks for structural to functional brain mapping
    Oota, Subba Reddy
    Yadav, Archi
    Dash, Arpita
    Bapi, Raju S.
    Sharma, Avinash
    [J]. SCIENTIFIC REPORTS, 2024, 14 (01)
  • [9] Modeling spatio-temporal patterns of holistic functional brain networks via multi-head guided attention graph neural networks (Multi-Head GAGNNs)
    Yan, Jiadong
    Chen, Yuzhong
    Xiao, Zhenxiang
    Zhang, Shu
    Jiang, Mingxin
    Wang, Tianqi
    Zhang, Tuo
    Lv, Jinglei
    Becker, Benjamin
    Zhang, Rong
    Zhu, Dajiang
    Han, Junwei
    Yao, Dezhong
    Kendrick, Keith M.
    Liu, Tianming
    Jiang, Xi
    [J]. MEDICAL IMAGE ANALYSIS, 2022, 80