Clustering-Guided Twin Contrastive Learning for Endomicroscopy Image Classification

Cited: 0
Authors
Zhou, Jingjun [1]
Dong, Xiangjiang [2]
Liu, Qian [1,3]
Affiliations
[1] Hainan Univ, Sch Biomed Engn, Haikou 570228, Peoples R China
[2] Huazhong Univ Sci & Technol, Wuhan Natl Lab Optoelect, Wuhan 430074, Peoples R China
[3] Hainan Univ, Sch Biomed Engn, Key Lab Biomed Engn Hainan Prov, Haikou 570228, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Clustering; contrastive learning; image classification; gastrointestinal (GI); probe-based confocal laser endomicroscopy (pCLE); CONFOCAL LASER ENDOMICROSCOPY; SAFETY
DOI
10.1109/JBHI.2024.3366223
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Learning better representations is essential in medical image analysis for computer-aided diagnosis. However, learning discriminative semantic features is a major challenge due to the lack of large-scale, well-annotated datasets. How, then, can we learn a well-structured, categorizable embedding space from limited-scale, unlabeled datasets? In this paper, we propose a novel clustering-guided twin contrastive learning framework (CTCL) that learns discriminative representations of probe-based confocal laser endomicroscopy (pCLE) images for gastrointestinal (GI) tumor classification. Unlike traditional contrastive learning, which considers only two randomly augmented views of the same instance, CTCL uses clustering to align semantically related, class-consistent samples, improving intra-class tightness and inter-class variability and thereby producing more informative representations. Furthermore, exploiting two inherent properties of CLE (geometric invariance and intrinsic noise), we propose to treat CLE images rotated by any angle, and CLE images corrupted by different noise, as views of the same instance, increasing the variability and diversity of samples. Optimizing CTCL in an end-to-end expectation-maximization framework, comprehensive experiments demonstrate that CTCL-based visual representations achieve competitive performance on each downstream task, as well as greater robustness and transferability than existing state-of-the-art self-supervised learning (SSL) and supervised methods. Notably, CTCL achieves 75.60%/78.45% and 64.12%/77.37% top-1 accuracy on the linear evaluation protocol and few-shot classification downstream tasks, respectively, outperforming the previous best results by 1.27%/1.63% and 0.5%/3%.
The proposed method holds great potential to assist pathologists in achieving an automated, fast, and high-precision diagnosis of GI tumors and accurately determining different stages of tumor development based on CLE images.
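The twin-view idea described in the abstract, treating an arbitrary-angle rotation and a differently-noised copy of the same CLE image as a positive pair under a contrastive objective, can be sketched minimally as below. This is an illustrative sketch only: the flatten-and-normalize "encoder", the 90-degree rotation set, the Gaussian noise level, and the NT-Xent temperature are placeholder assumptions, not the paper's actual architecture, augmentations, or hyperparameters, and the clustering-guided alignment step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_view(img, rng):
    # pCLE's circular field of view is rotation-invariant, so a rotated copy
    # is a valid view of the same instance (90-degree multiples here for brevity).
    return np.rot90(img, k=int(rng.integers(4)))

def noise_view(img, rng, sigma=0.05):
    # CLE images carry intrinsic noise; a noised copy is another view.
    return img + rng.normal(0.0, sigma, img.shape)

def embed(img):
    # Stand-in encoder: flatten and L2-normalize (a real model would be a CNN).
    v = img.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-8)

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent contrastive loss over a batch of paired, L2-normalized embeddings.
    z = np.concatenate([z1, z2], axis=0)          # (2N, d)
    sim = z @ z.T / tau                            # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive indices
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

imgs = rng.random((8, 16, 16))                     # toy batch of pCLE-like images
z1 = np.stack([embed(rotate_view(im, rng)) for im in imgs])
z2 = np.stack([embed(noise_view(im, rng)) for im in imgs])
loss = nt_xent(z1, z2)
print(float(loss))
```

Minimizing this loss pulls each rotated view toward its noised counterpart while pushing it away from the other images in the batch, which is the instance-level half of the method; CTCL additionally uses cluster assignments to pull together class-consistent samples beyond the two augmented views.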
Pages: 2879-2890
Page count: 12
Related papers
50 records (10 shown)
  • [1] RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval
    Wang, Xiyue
    Du, Yuexi
    Yang, Sen
    Zhang, Jun
    Wang, Minghui
    Zhang, Jing
    Yang, Wei
    Huang, Junzhou
    Han, Xiao
    MEDICAL IMAGE ANALYSIS, 2023, 83
  • [2] A Clustering-Guided Contrastive Fusion for Multi-View Representation Learning
    Ke, Guanzhou
    Chao, Guoqing
    Wang, Xiaoli
    Xu, Chenyang
    Zhu, Yongqi
    Yu, Yang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (04) : 2056 - 2069
  • [3] Clustering-Guided SMT(LRA) Learning
    Meywerk, Tim
    Walter, Marcel
    Grosse, Daniel
    Drechsler, Rolf
    INTEGRATED FORMAL METHODS, IFM 2020, 2020, 12546 : 41 - 59
  • [4] Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for Ophthalmic Images in Glaucoma
    Shi, Min
    Lokhande, Anagha
    Fazli, Mojtaba S.
    Sharma, Vishal
    Tian, Yu
    Luo, Yan
    Pasquale, Louis R.
    Elze, Tobias
    Boland, Michael V.
    Zebardast, Nazlee
    Friedman, David S.
    Shen, Lucy Q.
    Wang, Mengyu
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (09) : 4329 - 4340
  • [5] Clustering-Guided Incremental Learning of Tasks
    Kim, Yoonhee
    Kim, Eunwoo
    35TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN 2021), 2021, : 417 - 421
  • [6] Nonparametric Clustering-Guided Cross-View Contrastive Learning for Partially View-Aligned Representation Learning
    Qian, Shengsheng
    Xue, Dizhan
    Hu, Jun
    Zhang, Huaiwen
    Xu, Changsheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6158 - 6172
  • [7] Twin Contrastive Learning for Online Clustering
    Li, Yunfan
    Yang, Mouxing
    Peng, Dezhong
    Li, Taihao
    Huang, Jiantao
    Peng, Xi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (09) : 2205 - 2221
  • [8] Clustering-Guided Sparse Structural Learning for Unsupervised Feature Selection
    Li, Zechao
    Liu, Jing
    Yang, Yi
    Zhou, Xiaofang
    Lu, Hanqing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2014, 26 (09) : 2138 - 2150
  • [9] Label contrastive learning for image classification
    Yang, Han
    Li, Jun
    SOFT COMPUTING, 2023, 27 : 13477 - 13486