CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning

Cited by: 0
Authors
Kyungjin Cho
Ki Duk Kim
Yujin Nam
Jiheon Jeong
Jeeyoung Kim
Changyong Choi
Soyoung Lee
Jun Soo Lee
Seoyeon Woo
Gil-Sun Hong
Joon Beom Seo
Namkug Kim
Affiliations
[1] Asan Medical Institute of Convergence Science and Technology, Department of Biomedical Engineering, Asan Medical Center, College of Medicine
[2] University of Ulsan, Department of Convergence Medicine, Asan Medical Center
[3] Asan Medical Institute of Convergence Science and Technology, Department of Radiology, Asan Medical Center
[4] University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Asan Medical Center
[5] University of Ulsan College of Medicine, Department of Industrial Engineering
[6] University of Ulsan College of Medicine, Department of Biomedical Engineering
[7] Seoul National University
[8] University of Waterloo
Source
Journal of Digital Imaging
Keywords
Chest X-ray; Classification; Contrastive learning; Pretrained weight; Self-supervised learning; Bone suppression;
DOI
Not available
Chinese Library Classification Number
Subject Classification Number
Abstract
Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn diverse representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8-million-image CXR dataset using self-supervised contrastive learning, together with its validation on various downstream tasks: 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve when using the full dataset and an 11.4% increase when using only 1% of the data in a stress-test setting. On bone suppression with perceptual loss, compared to an ImageNet-pretrained model, the peak signal-to-noise ratio improved from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, the Fréchet inception distance improved from 24.06 to 17.07. Our study showed the transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
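To make the intended transfer-learning workflow concrete, below is a minimal PyTorch sketch of loading released contrastively pretrained weights into a ResNet-50 backbone and fine-tuning a new head for a downstream classification task. The checkpoint file name, state-dict key prefixes, and hyperparameters here are illustrative assumptions, not the repository's documented interface; consult https://github.com/mi2rl/CheSS for the actual checkpoint format.

    # Minimal sketch (not the authors' code): load contrastively pretrained
    # weights into a ResNet-50 backbone and fine-tune a new classification head.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    NUM_CLASSES = 6  # e.g., a 6-class CXR classification task

    # ResNet-50 backbone; drop the ImageNet head so it outputs 2048-d features.
    backbone = resnet50(weights=None)
    backbone.fc = nn.Identity()

    # Hypothetical checkpoint name and key layout: strip wrapper prefixes such
    # as "module." (DataParallel) or "encoder." (contrastive-learning wrappers).
    ckpt = torch.load("chess_resnet50.pth", map_location="cpu")
    state = ckpt.get("state_dict", ckpt)
    state = {k.replace("module.", "").replace("encoder.", ""): v
             for k, v in state.items()}
    missing, unexpected = backbone.load_state_dict(state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

    # Attach a task-specific linear head and fine-tune end to end
    # (or freeze the backbone parameters for linear probing).
    model = nn.Sequential(backbone, nn.Linear(2048, NUM_CLASSES))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(4, 3, 224, 224)          # dummy batch of CXR crops
    y = torch.randint(0, NUM_CLASSES, (4,))  # dummy labels
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

Loading with strict=False lets the randomly initialized head coexist with the pretrained backbone keys; in a low-label regime such as the 1% CheXpert stress test described above, the same pretrained backbone is reused and only the amount of labeled fine-tuning data changes.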
Pages: 902-910
Number of pages: 8
Related papers
50 records in total
  • [1] CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
    Cho, Kyungjin
    Kim, Ki Duk
    Nam, Yujin
    Jeong, Jiheon
    Kim, Jeeyoung
    Choi, Changyong
    Lee, Soyoung
    Lee, Jun Soo
    Woo, Seoyeon
    Hong, Gil-Sun
    Seo, Joon Beom
    Kim, Namkug
    [J]. JOURNAL OF DIGITAL IMAGING, 2023, 36 (03) : 902 - 910
  • [2] BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
    Jia, Jinyuan
    Liu, Yupei
    Gong, Neil Zhenqiang
    [J]. 43RD IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2022), 2022, : 2043 - 2059
  • [3] SPIQ: A Self-Supervised Pre-Trained Model for Image Quality Assessment
    Chen, Pengfei
    Li, Leida
    Wu, Qingbo
    Wu, Jinjian
    [J]. IEEE Signal Processing Letters, 2022, 29 : 513 - 517
  • [6] Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
    Liu, Hongbin
    Qu, Wenjie
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS 45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS, SPW 2024, 2024, : 144 - 156
  • [7] Improved self-supervised learning for disease identification in chest X-ray images
    Ma, Yongjun
    Dong, Shi
    Jiang, Yuchao
    [J]. Journal of Electronic Imaging, 2024, 33 (04)
  • [8] Retrieval-Based Chest X-Ray Report Generation Using a Pre-trained Contrastive Language-Image Model
    Endo, Mark
    Krishnan, Rayan
    Krishna, Viswesh
    Ng, Andrew Y.
    Rajpurkar, Pranav
    [J]. MACHINE LEARNING FOR HEALTH, VOL 158, 2021, 158 : 209 - 219
  • [9] Speech Enhancement Using Self-Supervised Pre-Trained Model and Vector Quantization
    Zhao, Xiao-Ying
    Zhu, Qiu-Shi
    Zhang, Jie
    [J]. PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 330 - 334
  • [10] SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification
    Mishra, Animesh
    Jha, Ritesh
    Bhattacharjee, Vandana
    [J]. IEEE ACCESS, 2023, 11 : 6673 - 6681