CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning

Cited by: 9
Authors
Cho, Kyungjin [1 ,2 ]
Kim, Ki Duk [2 ]
Nam, Yujin [1 ,2 ]
Jeong, Jiheon [1 ,2 ]
Kim, Jeeyoung [1 ,2 ]
Choi, Changyong [1 ,2 ]
Lee, Soyoung [1 ,2 ]
Lee, Jun Soo [6 ]
Woo, Seoyeon [7 ]
Hong, Gil-Sun [4 ,5 ]
Seo, Joon Beom [4 ,5 ]
Kim, Namkug [2 ,3 ]
Affiliations
[1] Univ Ulsan, Asan Med Inst Convergence Sci & Technol, Coll Med, Asan Med Ctr,Dept Biomed Engn, Seoul, South Korea
[2] Univ Ulsan, Asan Med Inst Convergence Sci & Technol, Asan Med Ctr, Dept Convergence Med,Coll Med, 5F,26,Olymp-Ro 43-Gil, Seoul 05505, South Korea
[3] Univ Ulsan, Asan Med Ctr, Dept Radiol, Coll Med, Seoul, South Korea
[4] Univ Ulsan, Dept Radiol, Coll Med, Seoul, South Korea
[5] Univ Ulsan, Res Inst Radiol, Asan Med Ctr, Coll Med, Seoul, South Korea
[6] Seoul Natl Univ, Dept Ind Engn, Seoul, South Korea
[7] Univ Waterloo, Dept Biomed Engn, Waterloo, ON, Canada
Keywords
Chest X-ray; Classification; Contrastive learning; Pretrained weight; Self-supervised learning; Bone suppression;
DOI
10.1007/s10278-023-00782-4
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification Codes
1002; 100207; 1009;
Abstract
Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn diverse representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8-M CXR dataset with self-supervised contrastive learning, and its validation on various downstream tasks, including 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase using only 1% of the data in a stress-test manner. On bone suppression with perceptual loss, compared to an ImageNet pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root mean square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed the decent transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
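The abstract does not spell out which contrastive objective CheSS uses, so as an illustration only, the sketch below implements NT-Xent (the normalized temperature-scaled cross-entropy loss popularized by SimCLR), one common choice for this kind of self-supervised pretraining: two augmented views of the same image form a positive pair, and all other images in the batch serve as negatives. Function names and the toy embeddings are hypothetical.

```python
import math

def nt_xent(z_i, z_j, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z_i[k] and z_j[k] are embeddings of two augmented views of image k
    (the positive pair); every other embedding in the batch is a negative.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    z = [normalize(v) for v in z_i + z_j]   # stack both views, L2-normalized
    m = len(z)                              # 2 * batch size
    losses = []
    for a in range(m):
        pos = (a + len(z_i)) % m            # index of a's positive counterpart
        # temperature-scaled cosine similarities to all other embeddings
        sims = [sum(x * y for x, y in zip(z[a], z[b])) / temperature
                for b in range(m) if b != a]
        pos_sim = sum(x * y for x, y in zip(z[a], z[pos])) / temperature
        denom = sum(math.exp(s) for s in sims)
        losses.append(-math.log(math.exp(pos_sim) / denom))
    return sum(losses) / m
```

The loss is minimized when each embedding is closest to its own positive pair, which is what pushes the encoder toward augmentation-invariant representations without any labels.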
Pages: 902-910
Page count: 9
Related Papers
50 items in total
  • [1] CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
    Kyungjin Cho
    Ki Duk Kim
    Yujin Nam
    Jiheon Jeong
    Jeeyoung Kim
    Changyong Choi
    Soyoung Lee
    Jun Soo Lee
    Seoyeon Woo
    Gil-Sun Hong
    Joon Beom Seo
    Namkug Kim
    [J]. Journal of Digital Imaging, 2023, 36 : 902 - 910
  • [2] BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
    Jia, Jinyuan
    Liu, Yupei
    Gong, Neil Zhenqiang
    [J]. 43RD IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2022), 2022, : 2043 - 2059
  • [3] SPIQ: A Self-Supervised Pre-Trained Model for Image Quality Assessment
    Chen, Pengfei
    Li, Leida
    Wu, Qingbo
    Wu, Jinjian
    [J]. IEEE Signal Processing Letters, 2022, 29 : 513 - 517
  • [6] Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
    Liu, Hongbin
    Qu, Wenjie
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS 45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS, SPW 2024, 2024, : 144 - 156
  • [7] Improved self-supervised learning for disease identification in chest X-ray images
    Ma, Yongjun
    Dong, Shi
    Jiang, Yuchao
    [J]. Journal of Electronic Imaging, 2024, 33 (04)
  • [8] Retrieval-Based Chest X-Ray Report Generation Using a Pre-trained Contrastive Language-Image Model
    Endo, Mark
    Krishnan, Rayan
    Krishna, Viswesh
    Ng, Andrew Y.
    Rajpurkar, Pranav
    [J]. MACHINE LEARNING FOR HEALTH, VOL 158, 2021, 158 : 209 - 219
  • [9] Speech Enhancement Using Self-Supervised Pre-Trained Model and Vector Quantization
    Zhao, Xiao-Ying
    Zhu, Qiu-Shi
    Zhang, Jie
    [J]. PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 330 - 334
  • [10] SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification
    Mishra, Animesh
    Jha, Ritesh
    Bhattacharjee, Vandana
    [J]. IEEE ACCESS, 2023, 11 : 6673 - 6681