Transferability of Non-contrastive Self-supervised Learning to Chronic Wound Image Recognition

Cited by: 0
Authors
Akay, Julien Marteen [1]
Schenck, Wolfram [1]
Affiliations
[1] Bielefeld Univ Appl Sci & Arts, D-33619 Bielefeld, Germany
Keywords
Non-contrastive self-supervised learning; Convolutional neural networks; Deep learning; Transfer learning; Fine-tuning; Wound image recognition; Segmentation
DOI
10.1007/978-3-031-72353-7_31
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Chronic wounds pose significant challenges in medical practice, necessitating effective treatment approaches and a reduced burden on healthcare staff. Computer-aided diagnosis (CAD) systems offer promising solutions to enhance treatment outcomes. However, the effective processing of wound images remains a challenge. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated proficiency in this task, typically relying on extensive labeled data for optimal generalization. Given the limited availability of medical images, a common approach is to pretrain models on data-rich tasks and transfer that knowledge as a prior to the main task, compensating for the lack of labeled wound images. In this study, we investigate the transferability of CNNs pretrained with non-contrastive self-supervised learning (SSL) to enhance generalization in chronic wound image recognition. Our findings indicate that non-contrastive SSL methods combined with ConvNeXt models yield superior performance compared to multimodal models from related work, even though the latter additionally benefit from data on the location of the affected body part. Furthermore, Grad-CAM analysis reveals that ConvNeXt models pretrained with VICRegL focus more reliably on relevant wound properties than the conventional approach of ResNet-50 models pretrained on ImageNet classification. These results underscore the crucial role of the appropriate combination of pretraining method and model architecture in settings with limited wound data. Among the approaches explored, ConvNeXt-XL pretrained with VICRegL emerges as a reliable and stable method. This study makes a novel contribution by demonstrating the effectiveness of recent non-contrastive SSL-based transfer learning in advancing the field of chronic wound image recognition.
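To make the transfer-learning setup described in the abstract concrete, below is a minimal PyTorch sketch of fine-tuning a ConvNeXt backbone on a labeled wound-classification task. It is an illustration under stated assumptions, not the authors' implementation: torchvision ships only ImageNet weights (and no ConvNeXt-XL variant, so convnext_large stands in here), the VICRegL checkpoint path is a hypothetical placeholder for weights obtained separately (e.g. from the official facebookresearch/VICRegL repository), and NUM_WOUND_CLASSES and train_step are illustrative names rather than values or functions from the paper.

```python
# Minimal fine-tuning sketch (assumes PyTorch and torchvision are installed).
import torch
import torch.nn as nn
from torchvision import models

NUM_WOUND_CLASSES = 10  # hypothetical; depends on the wound dataset's label set

# Start from a ConvNeXt backbone. In the paper's setting, the backbone would be
# initialized from non-contrastive SSL (VICRegL) pretraining rather than
# supervised ImageNet weights.
model = models.convnext_large(weights=None)
# state = torch.load("vicregl_convnext_checkpoint.pth")  # hypothetical path
# model.load_state_dict(state, strict=False)             # backbone-only weights

# Replace the classification head for the downstream wound-recognition task.
# torchvision's ConvNeXt classifier is Sequential(LayerNorm2d, Flatten, Linear),
# so index 2 is the final linear layer.
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, NUM_WOUND_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a batch of labeled wound images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point this sketch reflects is the one the abstract argues for: the pretraining objective (non-contrastive SSL vs. supervised ImageNet classification) is swapped at initialization time, while the downstream fine-tuning loop itself stays a standard supervised setup.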
Pages: 427-444 (18 pages)