Transferability of Non-contrastive Self-supervised Learning to Chronic Wound Image Recognition

Cited: 0
Authors
Akay, Julien Marteen [1]
Schenck, Wolfram [1]
Affiliations
[1] Bielefeld Univ Appl Sci & Arts, D-33619 Bielefeld, Germany
Keywords
Non-contrastive self-supervised learning; Convolutional neural networks; Deep learning; Transfer learning; Fine-tuning; Wound image recognition; Segmentation
DOI
10.1007/978-3-031-72353-7_31
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Chronic wounds pose significant challenges in medical practice, necessitating effective treatment approaches and a reduced burden on healthcare staff. Computer-aided diagnosis (CAD) systems offer promising solutions to enhance treatment outcomes. However, the effective processing of wound images remains a challenge. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated proficiency in this task, but they typically rely on extensive labeled data for optimal generalization. Given the limited availability of medical images, a common approach is to pretrain models on data-rich tasks and transfer that knowledge as a prior to the main task, compensating for the lack of labeled wound images. In this study, we investigate the transferability of CNNs pretrained with non-contrastive self-supervised learning (SSL) to enhance generalization in chronic wound image recognition. Our findings indicate that non-contrastive SSL methods combined with ConvNeXt models yield superior performance compared to multimodal models from prior work that additionally benefit from data on the location of the affected body part. Furthermore, analysis with Grad-CAM reveals that ConvNeXt models pretrained with VICRegL focus more sharply on relevant wound properties than the conventional approach of ResNet-50 models pretrained on ImageNet classification. These results underscore the crucial role of the appropriate combination of pretraining method and model architecture in effectively addressing limited wound data settings. Among the approaches explored, ConvNeXt-XL pretrained with VICRegL emerges as a reliable and stable method. This study makes a novel contribution by demonstrating the effectiveness of recent non-contrastive SSL-based transfer learning in advancing the field of chronic wound image recognition.
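The abstract describes a transfer learning pipeline in which a ConvNeXt backbone pretrained with non-contrastive SSL (VICRegL) is fine-tuned on labeled wound images. The PyTorch sketch below illustrates that general pipeline under stated assumptions; it is not the authors' implementation. The checkpoint path "vicregl_convnext_backbone.pth", the dataset folder "wound_images/train", and all hyperparameters are hypothetical, and torchvision's convnext_large stands in for the ConvNeXt-XL variant used in the paper, since torchvision does not ship an XL model.

# Minimal fine-tuning sketch (assumptions: a VICRegL-pretrained ConvNeXt backbone
# is available locally as a state_dict; dataset layout and hyperparameters are hypothetical).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet-style preprocessing applied to the wound images.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subdirectory per wound class.
train_set = datasets.ImageFolder("wound_images/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ConvNeXt backbone; weights=None because the SSL checkpoint replaces ImageNet weights.
model = models.convnext_large(weights=None)
ssl_state = torch.load("vicregl_convnext_backbone.pth", map_location="cpu")  # hypothetical path
# strict=False tolerates the missing classifier head; in practice, checkpoint keys
# may also need remapping to match torchvision's parameter names.
model.load_state_dict(ssl_state, strict=False)

# Replace the classification head for the wound classes and fine-tune end to end.
num_classes = len(train_set.classes)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, num_classes)
model = model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()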
Pages: 427-444
Page count: 18