Hierarchical Self-supervised Learning for Medical Image Segmentation Based on Multi-domain Data Aggregation

Cited by: 11
Authors
Zheng, Hao [1]
Han, Jun [1]
Wang, Hongxiao [1]
Yang, Lin [1]
Zhao, Zhuo [1]
Wang, Chaoli [1]
Chen, Danny Z. [1]
Affiliations
[1] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Funding
National Science Foundation (USA)
Keywords
Self-supervised learning; Image segmentation; Multi-domain
DOI
10.1007/978-3-030-87193-2_59
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A large labeled dataset is key to the success of supervised deep learning, but for medical image segmentation it is highly challenging to obtain sufficient annotated images for model training. In many scenarios, unannotated images are abundant and easy to acquire. Self-supervised learning (SSL) has shown great potential for exploiting raw data and learning representations. In this paper, we propose Hierarchical Self-Supervised Learning (HSSL), a new self-supervised framework that boosts medical image segmentation by making good use of unannotated data. Unlike the current literature on task-specific self-supervised pre-training followed by supervised fine-tuning, we utilize SSL to learn task-agnostic knowledge from heterogeneous data for various medical image segmentation tasks. Specifically, we first aggregate a dataset from several medical challenges, then pre-train the network in a self-supervised manner, and finally fine-tune on labeled data. We develop a new loss function by combining contrastive loss and classification loss, and pre-train an encoder-decoder architecture for segmentation tasks. Our extensive experiments show that multi-domain joint pre-training benefits downstream segmentation tasks and significantly outperforms single-domain pre-training. Compared to learning from scratch, our method yields better performance on various tasks (e.g., +0.69% to +18.60% in Dice with 5% of annotated data). With limited amounts of training data, our method can substantially bridge the performance gap with respect to denser annotations (e.g., 10% vs. 100% annotations).
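The pre-training objective described in the abstract combines a contrastive loss with a classification loss. The sketch below is an illustration of that idea, not the authors' implementation: the InfoNCE-style contrastive term, the cross-entropy head over source-domain labels, and the weighting factor `lam` are all assumptions inferred from the abstract.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss between two batches of embeddings
    z1, z2 of shape (N, D), where z1[i] and z2[i] are two augmented views
    of the same image. Positives lie on the diagonal of the similarity matrix."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # -log p(positive)

def classification_loss(logits, labels):
    """Cross-entropy over class logits (N, C) and integer labels (N,).
    In a multi-domain setup the labels could identify the source dataset."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(labels)), labels])

def combined_pretrain_loss(z1, z2, domain_logits, domain_labels, lam=1.0):
    """Hypothetical combined objective: contrastive + lam * classification."""
    return info_nce_loss(z1, z2) + lam * classification_loss(domain_logits, domain_labels)
```

In a real pipeline these losses would be computed on encoder outputs and back-propagated through an encoder-decoder network; here they operate on plain arrays only to make the two terms of the objective concrete.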
Pages: 622-632
Page count: 11
Related Papers (50 total)
  • [1] Feng, Zeyu; Xu, Chang; Tao, Dacheng. Self-Supervised Representation Learning From Multi-Domain Data. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 3244-3254.
  • [3] Ouyang, Cheng; Biffi, Carlo; Chen, Chen; Kart, Turkay; Qiu, Huaqi; Rueckert, Daniel. Self-Supervised Learning for Few-Shot Medical Image Segmentation. IEEE Transactions on Medical Imaging, 2022, 41(7): 1837-1848.
  • [4] Tang, Qian; Du, Bo; Xu, Yongchao. Self-supervised Learning Based on Max-tree Representation for Medical Image Segmentation. 2022 International Joint Conference on Neural Networks (IJCNN), 2022.
  • [5] Chavhan, Ruchika; Banerjee, Biplab; Das, Nibaran. Semi-supervised Multi-domain Learning for Medical Image Classification. Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2022), 2023, 1704: 22-33.
  • [6] Zhao, Liang; Jia, Chaoran; Ma, Jiajun; Shao, Yu; Liu, Zhuo; Yuan, Hong. Medical image segmentation based on self-supervised hybrid fusion network. Frontiers in Oncology, 2023, 13.
  • [7] Yan, Xiangyi; Naushad, Junayed; You, Chenyu; Tang, Hao; Sun, Shanlin; Han, Kun; Ma, Haoyu; Duncan, James S.; Xie, Xiaohui. Localized Region Contrast for Enhancing Self-supervised Learning in Medical Image Segmentation. Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), Pt. II, 2023, 14221: 468-478.
  • [8] Yu, Huihui; Dai, Qun. Self-supervised multi-task learning for medical image analysis. Pattern Recognition, 2024, 150.
  • [9] Wang, Yueyue; Song, Danjun; Wang, Wentao; Rao, Shengxiang; Wang, Xiaoying; Wang, Manning. Self-supervised learning and semi-supervised learning for multi-sequence medical image classification. Neurocomputing, 2022, 513: 383-394.
  • [10] Yang, Zhangsihao; Ren, Mengwei; Ding, Kaize; Gerig, Guido; Wang, Yalin. Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.