BrainMass: Advancing Brain Network Analysis for Diagnosis with Large-scale Self-Supervised Learning

Cited by: 1
Authors
Yang Y. [1,2]
Ye C. [2]
Su G. [3]
Zhang Z. [4]
Chang Z. [2]
Chen H. [1,2]
Chan P. [5]
Yu Y. [6]
Ma T. [1,2]
Affiliations
[1] School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen
[2] Harbin Institute of Technology at Shenzhen, Shenzhen
[3] Tencent Data Platform, Shenzhen
[4] Shenzhen Institutes of Advanced Technology, Paul C. Lauterbur Research Center for Biomedical Imaging, Chinese Academy of Sciences, Shenzhen, Guangdong
[5] Xuanwu Hospital, Capital Medical University, Beijing
[6] Peng Cheng Laboratory, Shenzhen, Guangdong
Funding
National Natural Science Foundation of China
Keywords
Adaptation models; Biological system modeling; Brain modeling; Brain network; Data models; Large-scale; Pretraining; Self-supervised learning; Task analysis; Transformers
DOI: 10.1109/TMI.2024.3414476
Abstract
Foundation models pretrained on large-scale datasets via self-supervised learning demonstrate exceptional versatility across various tasks. Because medical data are heterogeneous and hard to collect, this approach is especially beneficial for medical image analysis and neuroscience research, as it streamlines broad downstream tasks without requiring numerous costly annotations. However, brain network foundation models have received limited investigation, which restricts their adaptability and generalizability for broad neuroscience studies. In this study, we aim to bridge this gap. Specifically, (1) we curated a comprehensive dataset by collating images from 30 datasets, comprising 70,781 samples from 46,686 participants. Moreover, we introduce pseudo-functional connectivity (pFC) to generate millions of augmented brain networks by randomly dropping certain timepoints of the BOLD signal. (2) We propose the BrainMass framework for brain network self-supervised learning via mask modeling and feature alignment. BrainMass employs Mask-ROI Modeling (MRM) to bolster intra-network dependencies and regional specificity. Furthermore, a Latent Representation Alignment (LRA) module is used to regularize augmented brain networks of the same participant, which share similar topological properties, to yield similar latent representations by aligning their latent embeddings. Extensive experiments on eight internal tasks and seven external brain disorder diagnosis tasks show BrainMass's superior performance, highlighting its significant generalizability and adaptability. Moreover, BrainMass demonstrates powerful few/zero-shot learning abilities and provides meaningful interpretations for various diseases, showcasing its potential for clinical applications.
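The pFC augmentation described above (randomly dropping BOLD timepoints before recomputing the connectivity matrix) can be sketched as follows. The drop ratio, the uniform timepoint-sampling scheme, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pseudo_functional_connectivity(bold, drop_ratio=0.2, rng=None):
    """Sketch of a pFC-style augmentation: randomly drop a fraction of
    BOLD timepoints, then recompute the ROI-by-ROI Pearson correlation
    (functional connectivity) matrix on the retained timepoints.

    bold: array of shape (n_rois, n_timepoints).
    """
    rng = np.random.default_rng(rng)
    n_rois, n_tp = bold.shape
    n_keep = int(n_tp * (1.0 - drop_ratio))
    keep = rng.choice(n_tp, size=n_keep, replace=False)
    keep.sort()  # preserve temporal order of the retained timepoints
    return np.corrcoef(bold[:, keep])  # (n_rois, n_rois) pFC matrix

# Example: two different augmented networks from the same scan
bold = np.random.default_rng(0).standard_normal((90, 200))  # 90 ROIs, 200 TRs
pfc_a = pseudo_functional_connectivity(bold, rng=1)
pfc_b = pseudo_functional_connectivity(bold, rng=2)
```

Each call with a different random state yields a distinct connectivity matrix for the same participant, which is what the LRA module aligns in the latent space.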
Pages: 1 - 1