The Effectiveness of Self-supervised Pre-training for Multi-modal Endometriosis Classification

Cited by: 1
Authors
Butler, David [1 ]
Wang, Hu [1 ]
Zhang, Yuan [1 ]
To, Minh-Son [2 ]
Condous, George [3 ]
Leonardi, Mathew [4 ]
Knox, Steven [5 ]
Avery, Jodie [6 ]
Hull, M. Louise [6 ]
Carneiro, Gustavo [7 ]
Affiliations
[1] Univ Adelaide, Australian Inst Machine Learning, Adelaide, Australia
[2] Flinders Univ S Australia, Flinders Hlth & Med Res Inst, Adelaide, Australia
[3] Omnigynaecare, Sydney, Australia
[4] McMaster Univ, Hamilton, ON, Canada
[5] Benson Radiol, Adelaide, SA, Australia
[6] Univ Adelaide, Robinson Res Inst, Adelaide, Australia
[7] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford, England
Keywords
Self-supervision; Multi-modal learning; MRI; Endometriosis
DOI
10.1109/EMBC40787.2023.10340504
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Endometriosis is a debilitating condition affecting 5% to 10% of women worldwide, and early detection and treatment are the best tools to manage it. Early detection can be done via surgery, but multi-modal medical imaging is preferable given its simpler and faster process. However, imaging-based endometriosis diagnosis is challenging because 1) there are few clinicians capable of making the diagnosis; and 2) the condition is characterised by small lesions unconfined to a specific location. These two issues complicate the development of endometriosis classifiers, as the training datasets tend to be small and contain difficult samples, which leads to overfitting. Hence, it is important to consider generalisation techniques to mitigate this problem, particularly self-supervised pre-training methods, which have shown outstanding results in computer vision and natural language processing applications. The main goal of this paper is to study the effectiveness of modern self-supervised pre-training techniques in overcoming the two issues above for the classification of endometriosis from multi-modal imaging data. We also introduce a new masked image modelling self-supervised pre-training method that works with 3D multi-modal medical imaging. Furthermore, to the best of our knowledge, this paper presents the first endometriosis classifier, fine-tuned from the pre-trained model above, that works with multi-modal (i.e., T1 and T2) magnetic resonance imaging (MRI) data. Our results show that self-supervised pre-training improves endometriosis classification by as much as 31% compared with classifiers trained from scratch.
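The masked image modelling idea the abstract describes — hiding parts of a 3D multi-modal volume and pre-training a network to reconstruct them — can be sketched as follows. This is an illustrative NumPy sketch of the masking step only, not the authors' implementation; the function name, patch size, and mask ratio are all assumptions chosen for the example, and the two channels stand in for co-registered T1 and T2 volumes.

```python
import numpy as np

def mask_3d_volume(volume, patch=4, mask_ratio=0.75, seed=None):
    """Zero out a random subset of (patch x patch x patch) blocks.

    volume: array of shape (channels, D, H, W), with D, H, W divisible
    by `patch`. Returns the masked volume and a boolean block mask
    (True = block was masked); a reconstruction network would be trained
    to predict the voxels inside the masked blocks.
    """
    rng = np.random.default_rng(seed)
    c, d, h, w = volume.shape
    gd, gh, gw = d // patch, h // patch, w // patch
    n_blocks = gd * gh * gw
    n_masked = int(round(mask_ratio * n_blocks))

    # Choose which blocks to hide, uniformly at random without replacement.
    flat = np.zeros(n_blocks, dtype=bool)
    flat[rng.choice(n_blocks, size=n_masked, replace=False)] = True
    block_mask = flat.reshape(gd, gh, gw)

    # Upsample the block mask to voxel resolution and apply it to every
    # modality channel, so T1 and T2 are masked at the same locations.
    voxel_mask = np.repeat(np.repeat(np.repeat(block_mask, patch, 0),
                                     patch, 1), patch, 2)
    masked = volume.copy()
    masked[:, voxel_mask] = 0.0
    return masked, block_mask

# Toy example: a two-modality 16^3 volume, 75% of blocks hidden.
vol = np.random.default_rng(0).normal(size=(2, 16, 16, 16))
masked_vol, block_mask = mask_3d_volume(vol, patch=4, mask_ratio=0.75, seed=0)
```

Masking both modalities at the same spatial locations forces a reconstruction model to exploit cross-modal context, which is one plausible reading of why multi-modal pre-training helps here.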
Pages: 5