Self supervised contrastive learning for digital histopathology

Times Cited: 162
Authors
Ciga, Ozan [1 ,4 ]
Xu, Tony [2 ]
Martel, Anne Louise [1 ,3 ]
Affiliations
[1] Univ Toronto, Dept Med Biophys, Toronto, ON, Canada
[2] Univ British Columbia, Dept Elect & Comp Engn, 5500-2332 Main Mall, Vancouver, BC V6T 1Z4, Canada
[3] Sunnybrook Res Inst, Phys Sci, Toronto, ON, Canada
[4] Sunnybrook Hlth Sci Ctr, 2075 Bayview Ave, M6 609, Toronto, ON M4N 3M5, Canada
Source
MACHINE LEARNING WITH APPLICATIONS
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Self-supervised learning; Digital histopathology; Whole slide images; Unsupervised learning; Sparse autoencoder; Cancer; Classification; Nuclei; Images
DOI
10.1016/j.mlwa.2021.100198
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised learning has been a long-standing goal of machine learning and is especially important for medical image analysis, where learning can compensate for the scarcity of labeled datasets. A promising subclass of unsupervised learning is self-supervised learning, which aims to learn salient features using the raw input as the learning signal. In this work, we tackle the issue of learning domain-specific features without any supervision to improve performance on multiple tasks of interest to the digital histopathology community. We apply a contrastive self-supervised learning method to digital histopathology by collecting 57 histopathology datasets and pretraining on them without any labels. We find that combining multiple multi-organ datasets with different staining and resolution properties improves the quality of the learned features. Furthermore, we find that using more images for pretraining leads to better performance on multiple downstream tasks, although there are diminishing returns as more unlabeled images are incorporated into the pretraining. Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets outperform ImageNet-pretrained networks, boosting task performance by more than 28% in F1 score on average. Interestingly, we did not observe a consistent correlation between the pretraining dataset's site or organ and downstream task performance (e.g., pretraining with only breast images does not necessarily lead to superior performance on breast-related downstream tasks). These findings may also be useful when applying newer contrastive techniques to histopathology data. Pretrained PyTorch models are made publicly available at https://github.com/ozanciga/self-supervised-histopathology.
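The linear-evaluation protocol described in the abstract (a linear classifier trained on top of frozen, self-supervised features) can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' code: the checkpoint filename, the state-dict key prefixes, and the ResNet18 backbone are assumptions; consult the repository at https://github.com/ozanciga/self-supervised-histopathology for the actual released format.

```python
import torch
import torch.nn as nn
import torchvision

# Assumed: the released checkpoint contains a ResNet18 state dict.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Identity()  # expose the 512-d pooled features instead of logits

# Hypothetical checkpoint path; the key prefixes stripped below are also assumptions.
state = torch.load("pretrained_histopathology_resnet18.ckpt", map_location="cpu")
weights = {k.replace("model.", "").replace("resnet.", ""): v
           for k, v in state.get("state_dict", state).items()}
backbone.load_state_dict(weights, strict=False)

# Linear evaluation: freeze the backbone and train only the linear head.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

num_classes = 4  # e.g., tissue categories in some downstream task
classifier = nn.Linear(512, num_classes)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on frozen features (linear evaluation)."""
    with torch.no_grad():
        feats = backbone(images)  # (N, 512) features from the frozen encoder
    loss = criterion(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this protocol only the linear head is updated, so downstream accuracy directly reflects the quality of the pretrained representation; this is the basis for the F1 comparisons against ImageNet pretraining reported in the abstract.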
Pages: 14