Self-supervised Learning Based on a Pre-trained Method for the Subtype Classification of Spinal Tumors

Cited by: 2
Authors
Jiao, Menglei [1 ,6 ]
Liu, Hong [1 ]
Yang, Zekang [1 ,6 ]
Tian, Shuai [2 ]
Ouyang, Hanqiang [3 ,4 ,5 ]
Li, Yuan [2 ]
Yuan, Yuan [2 ]
Liu, Jianfang [2 ]
Wang, Chunjie [2 ]
Lang, Ning [2 ]
Jiang, Liang [3 ,4 ,5 ]
Yuan, Huishu [2 ]
Qian, Yueliang [1 ]
Wang, Xiangdong [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[2] Peking Univ Third Hosp, Dept Radiol, Beijing 100191, Peoples R China
[3] Peking Univ Third Hosp, Dept Orthopaed, Beijing 100191, Peoples R China
[4] Engn Res Ctr Bone & Joint Precis Med, Beijing 100191, Peoples R China
[5] Beijing Key Lab Spinal Dis Res, Beijing 100191, Peoples R China
[6] Univ Chinese Acad Sci, Beijing 100086, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Self-supervised learning; Pre-training model; Subtype classification;
DOI
10.1007/978-3-031-17266-3_6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Spinal tumors comprise multiple pathological subtypes, and different subtypes may require different treatments and carry different prognoses, so diagnosing spinal tumor subtypes from medical images at an early stage is of great clinical significance. Due to the complex morphology and high heterogeneity of spinal tumors, accurately identifying subtypes from medical images is challenging. In recent years, many researchers have applied deep learning to medical image analysis, but such methods usually require a large number of labeled training samples, which are difficult to obtain. Using unlabeled medical images to improve model performance is therefore an active research topic. This study proposes Res-MAE, a self-supervised pre-training method that combines a convolutional neural network with a masked autoencoder. First, the method trains an efficient feature encoder on a large amount of unlabeled spinal imaging data via an image reconstruction task. The encoder is then transferred to the downstream subtype classification task and fine-tuned within a multi-modal fusion model, which uses a bipartite graph and multiple branches for spinal tumor subtype classification. Experimental results show that, compared with the baseline method, the proposed approach improves accuracy by up to 10.3% and the F1 score by up to 13.8%.
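To make the two-stage workflow described in the abstract concrete, below is a minimal PyTorch sketch of the general technique (masked-image reconstruction pre-training followed by encoder transfer), not the authors' implementation. All names and shapes here are illustrative assumptions (ConvEncoder, a 0.75 mask ratio over 16x16 patches, five subtype classes), and the paper's bipartite-graph multi-branch fusion model is simplified to a plain linear classification head.

    # Sketch of the two stages the abstract describes; hypothetical names/shapes.
    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        """Small CNN feature encoder standing in for the paper's encoder."""
        def __init__(self, in_ch=1, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            )
        def forward(self, x):
            return self.net(x)

    class MaskedAutoencoder(nn.Module):
        """Stage 1: reconstruct images from randomly masked inputs."""
        def __init__(self, encoder, dim=128, mask_ratio=0.75):
            super().__init__()
            self.encoder = encoder
            self.mask_ratio = mask_ratio
            self.decoder = nn.Sequential(  # lightweight upsampling decoder
                nn.ConvTranspose2d(dim, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )
        def forward(self, x):
            # Zero out a random fraction of 16x16 patches, then reconstruct.
            mask = (torch.rand(x.shape[0], 1,
                               x.shape[2] // 16, x.shape[3] // 16,
                               device=x.device) > self.mask_ratio).float()
            mask = nn.functional.interpolate(mask, size=x.shape[2:])
            recon = self.decoder(self.encoder(x * mask))
            return nn.functional.mse_loss(recon, x)  # reconstruction loss

    # Stage 1: one pre-training step on unlabeled images (no labels needed).
    encoder = ConvEncoder()
    mae = MaskedAutoencoder(encoder)
    loss = mae(torch.randn(4, 1, 64, 64))
    loss.backward()

    # Stage 2: reuse the pre-trained encoder for supervised subtype
    # classification (five classes assumed; the paper's multi-modal
    # bipartite-graph fusion head is simplified to a linear layer).
    classifier = nn.Sequential(encoder, nn.AdaptiveAvgPool2d(1),
                               nn.Flatten(), nn.Linear(128, 5))
    logits = classifier(torch.randn(4, 1, 64, 64))

The key design point the sketch illustrates is that the encoder's weights, learned from reconstruction alone, are shared with the downstream classifier, so the labeled fine-tuning stage starts from features already adapted to spinal imaging data.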
Pages: 58-67 (10 pages)