Self supervised contrastive learning for digital histopathology

Citations: 162
Authors
Ciga, Ozan [1 ,4 ]
Xu, Tony [2 ]
Martel, Anne Louise [1 ,3 ]
Affiliations
[1] Univ Toronto, Dept Med Biophys, Toronto, ON, Canada
[2] Univ British Columbia, Dept Elect & Comp Engn, 5500-2332 Main Mall, Vancouver, BC V6T 1Z4, Canada
[3] Sunnybrook Res Inst, Phys Sci, Toronto, ON, Canada
[4] Sunnybrook Hlth Sci Ctr, 2075 Bayview Ave,M6 609, Toronto, ON M4N 3M5, Canada
Source
MACHINE LEARNING WITH APPLICATIONS
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Self supervised learning; Digital histopathology; Whole slide images; Unsupervised learning; SPARSE AUTOENCODER; CANCER; CLASSIFICATION; NUCLEI; IMAGES;
DOI
10.1016/j.mlwa.2021.100198
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where the learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve multiple task performances that are of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, we find using more images for pretraining leads to a better performance in multiple downstream tasks, albeit there are diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks, boosting task performances by more than 28% in F1 scores on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset site or the organ versus the downstream task (e.g., pretraining with only breast images does not necessarily lead to a superior downstream task performance for breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
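The abstract names two technical components: contrastive self-supervised pretraining on unlabeled histopathology patches, and linear classifiers trained on top of the frozen, pretrained features. The sketch below illustrates both in PyTorch. It is a minimal illustration only, assuming a SimCLR-style NT-Xent objective, a ResNet-18 backbone, and synthetic tensors in place of augmented patch batches; it is not the authors' released code (their pretrained models are at https://github.com/ozanciga/self-supervised-histopathology).

# Minimal sketch (assumptions: SimCLR-style NT-Xent loss, ResNet-18 backbone,
# random tensors standing in for augmented histopathology patches).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def nt_xent_loss(z1, z2, temperature=0.5):
    # Contrastive loss over two augmented views of the same batch of patches.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, d)
    sim = z @ z.t() / temperature                                  # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                     # drop self-similarities
    # The positive for view i is view i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# (1) One contrastive pretraining step on unlabeled patches.
encoder = resnet18(weights=None)
encoder.fc = nn.Identity()                                         # expose 512-d features
projector = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent_loss(projector(encoder(view1)), projector(encoder(view2)))
loss.backward()
opt.step()

# (2) Linear evaluation: freeze the encoder, train only a linear classifier.
for p in encoder.parameters():
    p.requires_grad_(False)
encoder.eval()
linear_head = nn.Linear(512, 2)                                    # e.g., tumour vs. normal patch
probe_opt = torch.optim.SGD(linear_head.parameters(), lr=0.1)

patches, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
with torch.no_grad():
    feats = encoder(patches)
probe_loss = F.cross_entropy(linear_head(feats), labels)
probe_loss.backward()
probe_opt.step()

Under the linear-evaluation protocol described in the abstract, only the classifier in step (2) sees labeled downstream data, so the reported F1 gains directly reflect the quality of the frozen pretrained features.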
Pages: 14
Related Papers
50 records in total
  • [41] Contrastive Separative Coding for Self-Supervised Representation Learning
    Wang, Jun
    Lam, Max W. Y.
    Su, Dan
    Yu, Dong
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3865 - 3869
  • [42] Interactive Contrastive Learning for Self-Supervised Entity Alignment
    Zeng, Kaisheng
    Dong, Zhenhao
    Hou, Lei
    Cao, Yixin
    Hu, Minghao
    Yu, Jifan
    Lv, Xin
    Cao, Lei
    Wang, Xin
    Liu, Haozhuang
    Huang, Yi
    Feng, Junlan
    Wan, Jing
    Li, Juanzi
    Feng, Ling
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2465 - 2475
  • [43] Memory Bank Clustering for Self-supervised Contrastive Learning
    Hao, Yiqing
    An, Gaoyun
    Ruan, Qiuqi
    IMAGE AND GRAPHICS TECHNOLOGIES AND APPLICATIONS, IGTA 2021, 2021, 1480 : 132 - 144
  • [44] Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning
    Kaku, Aakash
    Upadhya, Sahana
    Razavian, Narges
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [45] Grouped Contrastive Learning of Self-Supervised Sentence Representation
    Wang, Qian
    Zhang, Weiqi
    Lei, Tianyi
    Peng, Dezhong
    APPLIED SCIENCES-BASEL, 2023, 13 (17):
  • [46] Contrastive Self-Supervised Learning for Optical Music Recognition
    Penarrubia, Carlos
    Valero-Mas, Jose J.
    Calvo-Zaragoza, Jorge
    DOCUMENT ANALYSIS SYSTEMS, DAS 2024, 2024, 14994 : 312 - 326
  • [47] Contrastive self-supervised learning for neurodegenerative disorder classification
    Gryshchuk, Vadym
    Singh, Devesh
    Teipel, Stefan
    Dyrba, Martin
    ADNI Study Grp
    AIBL Study Grp
    FTLDNI Study Grp
    FRONTIERS IN NEUROINFORMATICS, 2025, 19
  • [48] Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning
    Sun, Jinghan
    Wei, Dong
    Ma, Kai
    Wang, Liansheng
    Zheng, Yefeng
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2307 - 2315
  • [49] FundusNet, A self-supervised contrastive learning framework for Fundus Feature Learning
    Mojab, Nooshin
    Alam, Minhaj
    Hallak, Joelle
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2022, 63 (07)
  • [50] Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning
    Wen, Zixin
    Li, Yuanzhi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139