BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning

Cited by: 1
Authors
Yang Y. [1 ,2 ]
Ye C. [2 ]
Su G. [3 ]
Zhang Z. [4 ]
Chang Z. [2 ]
Chen H. [1 ,2 ]
Chan P. [5 ]
Yu Y. [6 ]
Ma T. [1 ,2 ]
Affiliations
[1] School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen
[2] Harbin Institute of Technology at Shenzhen, Shenzhen
[3] Tencent Data Platform, Shenzhen
[4] Shenzhen Institutes of Advanced Technology, Paul C. Lauterbur Research Center for Biomedical Imaging, Chinese Academy of Sciences, Shenzhen, Guangdong
[5] Xuanwu Hospital, Capital Medical University, Beijing
[6] Peng Cheng Laboratory, Shenzhen, Guangdong
Funding
National Natural Science Foundation of China
Keywords
Adaptation models; Biological system modeling; Brain modeling; brain network; Data models; large-scale; pretrain; Self-supervised learning; Task analysis; Transformers
DOI
10.1109/TMI.2024.3414476
Abstract
Foundation models pretrained on large-scale datasets via self-supervised learning demonstrate exceptional versatility across a wide range of tasks. Because medical data are heterogeneous and difficult to collect, this approach is especially beneficial for medical image analysis and neuroscience research, as it streamlines broad downstream tasks without requiring numerous costly annotations. However, brain network foundation models have received limited investigation, restricting their adaptability and generalizability for broad neuroscience studies. In this study, we aim to bridge this gap. In particular, (1) we curated a comprehensive dataset by collating images from 30 datasets, comprising 70,781 samples from 46,686 participants. Moreover, we introduce pseudo-functional connectivity (pFC) to generate millions of augmented brain networks by randomly dropping certain timepoints of the BOLD signal. (2) We propose the BrainMass framework for brain network self-supervised learning via mask modeling and feature alignment. BrainMass employs Mask-ROI Modeling (MRM) to bolster intra-network dependencies and regional specificity. Furthermore, a Latent Representation Alignment (LRA) module regularizes augmented brain networks of the same participant, which share similar topological properties, to yield similar latent representations by aligning their latent embeddings. Extensive experiments on eight internal tasks and seven external brain disorder diagnosis tasks demonstrate BrainMass's superior performance, highlighting its strong generalizability and adaptability. Moreover, BrainMass exhibits powerful few-/zero-shot learning abilities and provides meaningful interpretations for various diseases, showcasing its potential for clinical applications.
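The pFC augmentation described in the abstract — randomly dropping timepoints from the BOLD signal before computing functional connectivity — can be sketched as follows. This is a minimal illustration assuming a (timepoints × ROIs) BOLD array and Pearson-correlation connectivity; the function and parameter names are hypothetical, not the authors' implementation.

```python
import numpy as np

def pseudo_functional_connectivity(bold, drop_ratio=0.1, rng=None):
    """Compute a pseudo-functional-connectivity (pFC) matrix.

    Randomly drops a fraction of timepoints from the BOLD signal,
    then computes ROI-to-ROI Pearson correlations on the remainder.

    bold: array of shape (n_timepoints, n_rois)
    drop_ratio: fraction of timepoints to discard
    Returns an (n_rois, n_rois) correlation matrix.
    """
    rng = np.random.default_rng(rng)
    n_t = bold.shape[0]
    # Keep a random subset of timepoints, preserving temporal order.
    keep = rng.choice(n_t, size=int(n_t * (1 - drop_ratio)), replace=False)
    keep.sort()
    # np.corrcoef treats rows as variables, so transpose to (n_rois, n_t_kept).
    return np.corrcoef(bold[keep].T)
```

Repeated calls with different random subsets yield distinct augmented networks for the same participant, which is the property the LRA module exploits.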
Pages: 1 - 1
Related papers (50 total)
  • [1] Self-supervised Learning for Large-scale Item Recommendations
    Yao, Tiansheng
    Yi, Xinyang
    Cheng, Derek Zhiyuan
    Yu, Felix
    Chen, Ting
    Menon, Aditya
    Hong, Lichan
    Chi, Ed H.
    Tjoa, Steve
    Kang, Jieqi
    Ettinger, Evan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4321 - 4330
  • [2] Self-supervised contrastive representation learning for large-scale trajectories
    Li, Shuzhe
    Chen, Wei
    Yan, Bingqi
    Li, Zhen
    Zhu, Shunzhi
    Yu, Yanwei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 148 : 357 - 366
  • [3] Large-Scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification
    Chen, Zhengyang
    Chen, Sanyuan
    Wu, Yu
    Qian, Yao
    Wang, Chengyi
    Liu, Shujie
    Qian, Yanmin
    Zeng, Michael
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6147 - 6151
  • [4] Large-Scale Self-Supervised Human Activity Recognition
    Zadeh, Mohammad Zaki
    Jaiswal, Ashish
    Pavel, Hamza Reza
    Hebri, Aref
    Kapoor, Rithik
    Makedon, Fillia
    PROCEEDINGS OF THE 15TH INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2022, 2022, : 298 - 299
  • [5] Self-supervised cognitive learning for multifaced interest in large-scale industrial recommender systems
    Wang, Yingshuai
    Zhang, Dezheng
    Wulamu, Aziguli
    INFORMATION SCIENCES, 2025, 686
  • [6] ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds
    Jia, Xiangze
    Zhou, Hui
    Zhu, Xinge
    Guo, Yandong
    Zhang, Ji
    Ma, Yuexin
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 929 - 937
  • [7] Self-Supervised Graph Transformer on Large-Scale Molecular Data
    Rong, Yu
    Bian, Yatao
    Xu, Tingyang
    Xie, Weiyang
    Wei, Ying
    Huang, Wenbing
    Huang, Junzhou
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [8] Self-supervised Natural Image Reconstruction and Large-scale Semantic Classification from Brain Activity
    Gaziv, Guy
    Beliy, Roman
    Granot, Niv
    Hoogi, Assaf
    Strappini, Francesca
    Golan, Tal
    Irani, Michal
    NEUROIMAGE, 2022, 254
  • [9] Graph Self-Supervised Learning With Application to Brain Networks Analysis
    Wen, Guangqi
    Cao, Peng
    Liu, Lingwen
    Yang, Jinzhu
    Zhang, Xizhe
    Wang, Fei
    Zaiane, Osmar R.
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (08) : 4154 - 4165
  • [10] DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization
    Chen, Chao
    Liu, Xinhao
    Li, Yiming
    Ding, Li
    Feng, Chen
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9306 - 9316