CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning

Cited by: 9
Authors
Cho, Kyungjin [1 ,2 ]
Kim, Ki Duk [2 ]
Nam, Yujin [1 ,2 ]
Jeong, Jiheon [1 ,2 ]
Kim, Jeeyoung [1 ,2 ]
Choi, Changyong [1 ,2 ]
Lee, Soyoung [1 ,2 ]
Lee, Jun Soo [6 ]
Woo, Seoyeon [7 ]
Hong, Gil-Sun [4 ,5 ]
Seo, Joon Beom [4 ,5 ]
Kim, Namkug [2 ,3 ]
Affiliations
[1] Univ Ulsan, Asan Med Inst Convergence Sci & Technol, Coll Med, Asan Med Ctr,Dept Biomed Engn, Seoul, South Korea
[2] Univ Ulsan, Asan Med Inst Convergence Sci & Technol, Asan Med Ctr, Dept Convergence Med,Coll Med, 5F,26,Olymp-Ro 43-Gil, Seoul 05505, South Korea
[3] Univ Ulsan, Asan Med Ctr, Dept Radiol, Coll Med, Seoul, South Korea
[4] Univ Ulsan, Dept Radiol, Coll Med, Seoul, South Korea
[5] Univ Ulsan, Res Inst Radiol, Asan Med Ctr, Coll Med, Seoul, South Korea
[6] Seoul Natl Univ, Dept Ind Engn, Seoul, South Korea
[7] Univ Waterloo, Dept Biomed Engn, Waterloo, ON, Canada
Keywords
Chest X-ray; Classification; Contrastive learning; Pretrained weight; Self-supervised learning; Bone suppression;
DOI
10.1007/s10278-023-00782-4
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Codes
1002; 100207; 1009
Abstract
Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model, trained on a 4.8-M CXR dataset with self-supervised contrastive learning, and its validation on various downstream tasks, including six-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the six-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data as a stress test. On bone suppression with perceptual loss, compared to an ImageNet pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed the decent transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
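The abstract names self-supervised contrastive learning as the pretraining objective. As a minimal illustration of that family of methods, the sketch below implements an NT-Xent (normalized temperature-scaled cross-entropy) loss in NumPy, as used in SimCLR-style contrastive pretraining; the function name, shapes, and temperature value are illustrative assumptions, not the authors' code, which is available at the linked repository.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of two augmentations of the same N images.
    Each embedding's positive is its counterpart in the other view; the
    remaining 2N - 2 embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    # Exclude self-similarity from the softmax candidates.
    np.fill_diagonal(sim, -np.inf)
    # The positive for anchor i is i + n (and i - n for the second view).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each anchor's positive against all candidates.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

In practice the loss is minimized so that the two views of each radiograph map to nearby embeddings while unrelated radiographs are pushed apart; the resulting encoder weights are then fine-tuned on the downstream tasks listed above.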
Pages: 902-910 (9 pages)