Self supervised contrastive learning for digital histopathology

Cited by: 162
Authors
Ciga, Ozan [1 ,4 ]
Xu, Tony [2 ]
Martel, Anne Louise [1 ,3 ]
Affiliations
[1] Univ Toronto, Dept Med Biophys, Toronto, ON, Canada
[2] Univ British Columbia, Dept Elect & Comp Engn, 5500-2332 Main Mall, Vancouver, BC V6T 1Z4, Canada
[3] Sunnybrook Res Inst, Phys Sci, Toronto, ON, Canada
[4] Sunnybrook Hlth Sci Ctr, 2075 Bayview Ave,M6 609, Toronto, ON M4N 3M5, Canada
Source
Machine Learning with Applications
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Self supervised learning; Digital histopathology; Whole slide images; Unsupervised learning; SPARSE AUTOENCODER; CANCER; CLASSIFICATION; NUCLEI; IMAGES;
DOI
10.1016/j.mlwa.2021.100198
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve performance on multiple tasks of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting and pretraining on 57 histopathology datasets without any labels. We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features. Furthermore, we find that using more images for pretraining leads to better performance in multiple downstream tasks, albeit with diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet-pretrained networks, boosting task performance by more than 28% in F1 score on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset's site or organ and downstream task performance (e.g., pretraining with only breast images does not necessarily lead to superior downstream performance on breast-related tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
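The abstract describes two pieces: SimCLR-style contrastive pretraining on unlabeled patches, then linear classifiers trained on the frozen features. The PyTorch snippet below is a minimal sketch of how those pieces fit together, implementing the NT-Xent contrastive objective used in SimCLR and a frozen-backbone linear probe; the checkpoint path, projection dimension, and number of classes are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over a batch of projection pairs.

    z1, z2: (N, D) projections of two augmented views of the same N patches.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude each sample's self-similarity
    # Row i's positive is the other augmented view of the same patch: i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Quick check with random projections (D = 128 is an illustrative choice).
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent_loss(z1, z2))

# Linear evaluation: freeze the pretrained encoder and train only a linear head.
backbone = resnet18()
# backbone.load_state_dict(torch.load("pretrained_histopathology.ckpt"))  # illustrative path
backbone.fc = nn.Identity()         # expose the 512-d ResNet18 feature vector
for p in backbone.parameters():
    p.requires_grad = False         # encoder stays fixed during linear evaluation
num_classes = 2                     # hypothetical downstream task, e.g. tumor vs. normal
head = nn.Linear(512, num_classes)  # only this layer is trained on labeled data
```

In this setup the contrastive loss drives each patch's two augmented views together in embedding space while pushing them apart from the other 2N-2 samples in the batch; downstream, only the linear head sees labels, which is what makes the reported F1 comparisons a probe of feature quality rather than of fine-tuning capacity.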
Pages: 14
Related Papers
50 records in total
  • [1] Modulation Recognition of Digital Signals Based on Contrastive Self-Supervised Learning
    Liao, Yanping
    Gao, Yang
    Guo, Qiang
    2024 9TH INTERNATIONAL CONFERENCE ON ELECTRONIC TECHNOLOGY AND INFORMATION SCIENCE, ICETIS 2024, 2024, : 432 - 436
  • [2] Self-Supervised Learning: Generative or Contrastive
    Liu, Xiao
    Zhang, Fanjin
    Hou, Zhenyu
    Mian, Li
    Wang, Zhaoyu
    Zhang, Jing
    Tang, Jie
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 857 - 876
  • [3] Adversarial Self-Supervised Contrastive Learning
    Kim, Minseon
    Tack, Jihoon
    Hwang, Sung Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [4] A Survey on Contrastive Self-Supervised Learning
    Jaiswal, Ashish
    Babu, Ashwin Ramesh
    Zadeh, Mohammad Zaki
    Banerjee, Debapriya
    Makedon, Fillia
    TECHNOLOGIES, 2021, 9 (01)
  • [5] Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
    Denize, Julien
    Rabarisoa, Jaonary
    Orcesi, Astrid
    Herault, Romain
    Canu, Stephane
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2705 - 2715
  • [6] JGCL: Joint Self-Supervised and Supervised Graph Contrastive Learning
    Akkas, Selahattin
    Azad, Ariful
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1099 - 1105
  • [7] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, (04) : 102 - 104
  • [8] On Compositions of Transformations in Contrastive Self-Supervised Learning
    Patrick, Mandela
    Asano, Yuki M.
    Kuznetsova, Polina
    Fong, Ruth
    Henriques, Joao F.
    Zweig, Geoffrey
    Vedaldi, Andrea
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9557 - 9567
  • [9] Contrastive Self-supervised Learning for Graph Classification
    Zeng, Jiaqi
    Xie, Pengtao
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10824 - 10832
  • [10] Group Contrastive Self-Supervised Learning on Graphs
    Xu, Xinyi
    Deng, Cheng
    Xie, Yaochen
    Ji, Shuiwang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3169 - 3180