Self-supervised contrastive learning with random walks for medical image segmentation with limited annotations

Cited: 11
Authors
Fischer, Marc [1 ]
Hepp, Tobias [2 ]
Gatidis, Sergios [2 ]
Yang, Bin [1 ]
Affiliations
[1] Univ Stuttgart, Inst Signal Proc & Syst Theory, D-70550 Stuttgart, Germany
[2] Max Planck Inst Intelligent Syst, D-72076 Tubingen, Germany
Keywords
Contrastive learning; Cyclical random walk; Self-supervision; Semantic segmentation; Semi-supervision
DOI
10.1016/j.compmedimag.2022.102174
Chinese Library Classification: R318 [Biomedical Engineering]
Discipline Code: 0831
Abstract
Medical image segmentation has seen significant progress through supervised deep learning, where large annotated datasets are employed to reliably segment anatomical structures. To reduce the need for annotated training data, self-supervised pre-training strategies on non-annotated data have been designed; in particular, contrastive learning schemes operating on dense pixel-wise representations have proven effective. In this work, we expand on this strategy and leverage inherent anatomical similarities in medical imaging data. We apply our approach to semantic segmentation in a semi-supervised setting with limited amounts of annotated volumes. Trained alongside a segmentation loss in a single training stage, a contrastive loss helps differentiate between salient anatomical regions that conform to the available annotations. Our approach builds on the work of Jabri et al. (2020), who proposed cyclical contrastive random walks (CCRW) for self-supervision on palindromes of video frames. We adapt this scheme to operate on entries of paired embedded image slices. Using paths of cyclical random walks bypasses the need for the negative samples commonly used in contrastive approaches, enabling the algorithm to discriminate among relevant salient (anatomical) regions implicitly. Furthermore, a multi-level supervision strategy ensures adequate representations of local and global characteristics of anatomical structures. The effectiveness in reducing the amount of required annotations is demonstrated on three MRI datasets: with one and two annotated examples per dataset, a median increase of 8.01 and 5.90 percentage points, respectively, in the Dice Similarity Coefficient (DSC) over our baseline could be achieved across all three datasets.
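The core idea behind cyclical contrastive random walks, as described in the abstract, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows how a round-trip walk between two paired slice embeddings yields a contrastive objective without explicit negative samples: transition probabilities are computed from pairwise similarities, and each embedded entry is supervised to walk to the paired slice and back to itself (the identity matrix is the target). All function and variable names here are illustrative assumptions.

```python
import numpy as np

def cyclical_random_walk_loss(emb_a, emb_b, temperature=0.07):
    """Cycle-consistency loss for a random walk A -> B -> A.

    emb_a, emb_b: (N, D) arrays of N embedded entries (e.g. patch or
    pixel embeddings of two paired image slices), assumed L2-normalized.
    Returns the mean cross-entropy between the round-trip transition
    matrix and the identity target (each entry should return to itself).
    """
    def row_softmax(x):
        x = x - x.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(x)
        return e / e.sum(axis=1, keepdims=True)

    # Temperature-scaled cosine similarities in both directions.
    p_ab = row_softmax(emb_a @ emb_b.T / temperature)  # walk A -> B
    p_ba = row_softmax(emb_b @ emb_a.T / temperature)  # walk B -> A

    # Round-trip transition probabilities A -> B -> A.
    p_cycle = p_ab @ p_ba

    # Cross-entropy against the identity: the diagonal holds the
    # probability of each entry cycling back to its own position.
    n = emb_a.shape[0]
    diag = p_cycle[np.arange(n), np.arange(n)]
    return float(-np.mean(np.log(diag + 1e-12)))

# Distinct, well-separated embeddings cycle back confidently: low loss.
emb = np.eye(4)
low = cyclical_random_walk_loss(emb, emb)

# Degenerate (all-identical) embeddings make the walk uniform: the
# loss approaches log(N), the entropy of a uniform return distribution.
flat = np.ones((4, 4)) / 2.0
high = cyclical_random_walk_loss(flat, flat)
```

Note that no negative pairs are drawn anywhere: entries of the same slice compete with each other only through the softmax normalization of the transition matrix, which is what lets the walk implicitly separate salient regions.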
Pages: 11
Related Papers (50 total)
  • [31] Group Contrastive Self-Supervised Learning on Graphs
    Xu, Xinyi
    Deng, Cheng
    Xie, Yaochen
    Ji, Shuiwang
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3169 - 3180
  • [32] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    [J]. FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04)
  • [33] Self-supervised contrastive learning on agricultural images
    Guldenring, Ronja
    Nalpantidis, Lazaros
    [J]. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 191
  • [34] Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
    Denize, Julien
    Rabarisoa, Jaonary
    Orcesi, Astrid
    Herault, Romain
    Canu, Stephane
    [J]. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2705 - 2715
  • [35] Self-Supervised Learning for Annotation Efficient Biomedical Image Segmentation
    Rettenberger, Luca
    Schilling, Marcel
    Elser, Stefan
    Bohland, Moritz
    Reischl, Markus
    [J]. IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2023, 70 (09) : 2519 - 2528
  • [36] Contrastive Transformation for Self-supervised Correspondence Learning
    Wang, Ning
    Zhou, Wengang
    Li, Houqiang
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10174 - 10182
  • [37] Self-Supervised Contrastive Learning for Singing Voices
    Yakura, Hiromu
    Watanabe, Kento
    Goto, Masataka
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1614 - 1623
  • [38] A self-supervised image aesthetic assessment combining masked image modeling and contrastive learning
    Yang, Shuai
    Wang, Zibei
    Wang, Guangao
    Ke, Yongzhen
    Qin, Fan
    Guo, Jing
    Chen, Liming
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 101
  • [39] Self-supervised Correction Learning for Semi-supervised Biomedical Image Segmentation
    Zhang, Ruifei
    Liu, Sishuo
    Yu, Yizhou
    Li, Guanbin
    [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT II, 2021, 12902 : 134 - 144
  • [40] JGCL: Joint Self-Supervised and Supervised Graph Contrastive Learning
    Akkas, Selahattin
    Azad, Ariful
    [J]. COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1099 - 1105