Self-supervised pseudo multi-class pre-training for unsupervised anomaly detection and segmentation in medical images

Cited by: 6

Authors
Tian, Yu [1]
Liu, Fengbei [2]
Pang, Guansong [5]
Chen, Yuanhong [2]
Liu, Yuyuan [2]
Verjans, Johan W. [2,3,4]
Singh, Rajvinder [4]
Carneiro, Gustavo [6]
Affiliations
[1] Harvard Med Sch, Harvard Ophthalmol AI Lab, Boston, MA 02115 USA
[2] Univ Adelaide, Australian Inst Machine Learning, Adelaide, Australia
[3] South Australian Hlth & Med Res Inst, Adelaide, Australia
[4] Univ Adelaide, Fac Hlth & Med Sci, Adelaide, Australia
[5] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
[6] Univ Surrey, Ctr Vis Speech & Signal Proc, Surrey, England
Keywords
Unsupervised anomaly detection; Anomaly segmentation; One-class classification; Lesion segmentation; Self-supervised learning; Covid-19; Colonoscopy; Fundus image
DOI
10.1016/j.media.2023.102930
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised anomaly detection (UAD) methods are trained with normal (or healthy) images only, but during testing they are able to classify normal and abnormal (or diseased) images. UAD is an important medical image analysis (MIA) approach for disease screening problems, because the training sets available for those problems usually contain only normal images. However, the exclusive reliance on normal images may result in the learning of ineffective low-dimensional image representations that are not sensitive enough to detect and segment unseen abnormal lesions of varying size, appearance, and shape. Pre-training UAD methods with self-supervised learning, based on computer vision techniques, can mitigate this challenge, but such pre-training is sub-optimal because it does not explore domain knowledge for designing the pretext tasks, and its contrastive learning losses do not try to cluster the normal training images, which may result in a sparse distribution of normal images that is ineffective for anomaly detection. In this paper, we propose a new self-supervised pre-training method for MIA UAD applications, named Pseudo Multi-class Strong Augmentation via Contrastive Learning (PMSACL). PMSACL consists of a novel optimisation method that contrasts a normal image class against multiple pseudo classes of synthesised abnormal images, with each class enforced to form a dense cluster in the feature space. In the experiments, we show that our PMSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening, and Covid-19 chest X-ray datasets.
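The core idea in the abstract, contrasting a normal class against several pseudo classes of synthesised (strongly augmented) abnormal images while pulling each class into a dense cluster, can be illustrated with a supervised-contrastive-style loss over labelled embeddings. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the function name, the temperature value, and the toy embeddings are hypothetical, and the actual PMSACL objective in the paper differs in detail.

```python
import numpy as np

def pseudo_multiclass_contrastive_loss(z, labels, tau=0.1):
    """Supervised-contrastive-style loss over a normal class (label 0)
    and K pseudo classes of strongly augmented images (labels 1..K).

    Pulling same-label embeddings together encourages each class to
    form a dense cluster; the softmax denominator over all other
    samples pushes the clusters apart.

    z: (N, d) array of embeddings; labels: (N,) integer pseudo labels.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise
    sim = z @ z.T / tau                               # scaled cosine similarities
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        logits = np.delete(sim[i], i)                 # drop self-similarity
        log_denom = np.log(np.exp(logits).sum())
        # re-index positives after removing the i-th entry
        idx = [j - 1 if j > i else j for j in positives]
        total += -np.mean(logits[idx] - log_denom)    # mean log-prob of positives
    return total / n
```

With a small temperature `tau`, the loss is near zero when each pseudo class already forms a tight cluster and grows large when classes are intermixed, which is the clustering behaviour the abstract argues plain contrastive pre-training lacks.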
Pages: 11
Related Papers
(50 total)
  • [1] SPot-the-Difference Self-supervised Pre-training for Anomaly Detection and Segmentation
    Zou, Yang
    Jeong, Jongheon
    Pemula, Latha
    Zhang, Dongqing
    Dabeer, Onkar
    [J]. COMPUTER VISION - ECCV 2022, PT XXX, 2022, 13690 : 392 - 408
  • [2] Representation Recovering for Self-Supervised Pre-training on Medical Images
    Yan, Xiangyi
    Naushad, Junayed
    Sun, Shanlin
    Han, Kun
    Tang, Hao
    Kong, Deying
    Ma, Haoyu
    You, Chenyu
    Xie, Xiaohui
    [J]. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2684 - 2694
  • [3] Self-supervised Pre-training for Nuclei Segmentation
    Haq, Mohammad Minhazul
    Huang, Junzhou
    [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT II, 2022, 13432 : 303 - 313
  • [4] Self-supervised Pre-training for Mirror Detection
    Lin, Jiaying
    Lau, Rynson W. H.
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12193 - 12202
  • [5] Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene
    Shrestha, Sulabh
    Li, Yimeng
    Kosecka, Jana
    [J]. 2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024, 2024, : 625 - 635
  • [6] Self-supervised ECG pre-training
    Liu, Han
    Zhao, Zhenbo
    She, Qiang
    [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 70
  • [7] Self-supervised anomaly detection, staging and segmentation for retinal images
    Li, Yiyue
    Lao, Qicheng
    Kang, Qingbo
    Jiang, Zekun
    Du, Shiyi
    Zhang, Shaoting
    Li, Kang
    [J]. MEDICAL IMAGE ANALYSIS, 2023, 87
  • [8] FALL DETECTION USING SELF-SUPERVISED PRE-TRAINING MODEL
    Yhdego, Haben
    Audette, Michel
    Paolini, Christopher
    [J]. PROCEEDINGS OF THE 2022 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'22), 2022, : 361 - 371
  • [9] MULTI-MODAL SELF-SUPERVISED PRE-TRAINING FOR JOINT OPTIC DISC AND CUP SEGMENTATION IN EYE FUNDUS IMAGES
    Hervella, Alvaro S.
    Ramos, Lucia
    Rouco, Jose
    Novo, Jorge
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 961 - 965
  • [10] EFFECTIVENESS OF SELF-SUPERVISED PRE-TRAINING FOR ASR
    Baevski, Alexei
    Mohamed, Abdelrahman
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7694 - 7698