More from Less: Self-supervised Knowledge Distillation for Routine Histopathology Data

Cited by: 0
Authors
Farndale, Lucas [1 ,2 ,3 ,4 ]
Insall, Robert [1 ,2 ,5 ]
Yuan, Ke [1 ,2 ,3 ]
Affiliations
[1] Univ Glasgow, Sch Canc Sci, Glasgow, Lanark, Scotland
[2] Canc Res UK Beatson Inst, Glasgow, Lanark, Scotland
[3] Univ Glasgow, Sch Comp Sci, Glasgow, Lanark, Scotland
[4] Univ Glasgow, Sch Math & Stat, Glasgow, Lanark, Scotland
[5] UCL, Div Biosci, London, England
Funding
Wellcome Trust (UK); UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Representation Learning; Colon Cancer; Multi-Modality;
DOI
10.1007/978-3-031-45673-2_45
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Medical imaging technologies are generating increasingly large amounts of high-quality, information-dense data. Despite this progress, practical use of advanced imaging technologies for research and diagnosis remains limited by cost and availability, so information-sparse data such as H&E stains are relied on in practice. The study of diseased tissue would greatly benefit from methods which can leverage these information-dense data to extract more value from routine, information-sparse data. Using self-supervised learning (SSL), we demonstrate that it is possible to distil knowledge during training from information-dense data into models which only require information-sparse data for inference. This improves downstream classification accuracy on information-sparse data, making it comparable with the fully-supervised baseline. We find substantial effects on the learned representations, and show that pairing with relevant data can be used to extract desirable features without the arduous process of manual labelling. This approach enables the design of models which require only routine images, but contain insights from state-of-the-art data, allowing better use of the available resources.
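As a concrete illustration of the training-time distillation the abstract describes, the sketch below pairs an encoder for routine H&E patches with an encoder for an information-dense modality, trains both on co-registered patch pairs, and keeps only the H&E encoder for inference. This is a minimal sketch, not the authors' released code: the CLIP-style symmetric InfoNCE loss, the ResNet-18 backbones, and the paired_loader stand-in are all assumptions made for illustration, and the paper's actual SSL objective and architecture may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def make_encoder(out_dim=128):
    # ResNet-18 backbone with the classifier replaced by a projection head.
    net = resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, out_dim)
    return net

sparse_enc = make_encoder()  # sees routine H&E patches; retained for inference
dense_enc = make_encoder()   # sees information-dense patches; training only

opt = torch.optim.AdamW(
    list(sparse_enc.parameters()) + list(dense_enc.parameters()), lr=3e-4)

def paired_info_nce(z_sparse, z_dense, temperature=0.1):
    # Symmetric InfoNCE: each co-registered (H&E, dense) pair is a positive;
    # every other pairing in the batch serves as a negative.
    z_s = F.normalize(z_sparse, dim=1)
    z_d = F.normalize(z_dense, dim=1)
    logits = z_s @ z_d.t() / temperature
    targets = torch.arange(z_s.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Hypothetical stand-in for a DataLoader over co-registered patch pairs;
# a real pipeline would read aligned tiles from whole-slide images.
paired_loader = [(torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224))
                 for _ in range(2)]

for he_patch, dense_patch in paired_loader:
    loss = paired_info_nce(sparse_enc(he_patch), dense_enc(dense_patch))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, dense_enc is discarded; downstream tasks (e.g. a linear
# classifier over colon tissue types) run on sparse_enc features computed
# from H&E alone.

The design choice this sketch highlights is that the information-dense modality is consumed only during training; at inference the model needs nothing beyond routine H&E, which is what makes the distilled knowledge usable in standard workflows.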
Pages: 454-463
Number of pages: 10
Related Papers
50 records in total
  • [31] Self-supervised Learning from Semantically Imprecise Data
    Brust, Clemens-Alexander
    Barz, Bjoern
    Denzler, Joachim
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2022, : 27 - 35
  • [32] Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
    Dadashzadeh, Amirhossein
    Whone, Alan
    Mirmehdi, Majid
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4230 - 4239
  • [33] AeroRec: An Efficient On-Device Recommendation Framework using Federated Self-Supervised Knowledge Distillation
    Xia, Tengxi
    Ren, Ju
    Rao, Wei
    Zu, Qin
    Wang, Wenjie
    Chen, Shuai
    Zhang, Yaoxue
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2024, : 121 - 130
  • [34] Knowledge distillation of multi-scale dense prediction transformer for self-supervised depth estimation
    Song, Jimin
    Lee, Sang Jun
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [35] Mitigating Backdoor Attacks in Pre-Trained Encoders via Self-Supervised Knowledge Distillation
    Bie, Rongfang
    Jiang, Jinxiu
    Xie, Hongcheng
    Guo, Yu
    Miao, Yinbin
    Jia, Xiaohua
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (05) : 2613 - 2625
  • [37] Self-supervised Anomaly Detection by Self-distillation and Negative Sampling
    Rafiee, Nima
    Gholamipoor, Rahil
    Adaloglou, Nikolas
    Jaxy, Simon
    Ramakers, Julius
    Kollmann, Markus
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 459 - 470
  • [38] HistoSSL: Self-Supervised Representation Learning for Classifying Histopathology Images
    Jin, Xu
    Huang, Teng
    Wen, Ke
    Chi, Mengxian
    An, Hong
    MATHEMATICS, 2023, 11 (01)
  • [39] Monocular Depth Estimation via Self-Supervised Self-Distillation
    Hu, Haifeng
    Feng, Yuyang
    Li, Dapeng
    Zhang, Suofei
    Zhao, Haitao
    SENSORS, 2024, 24 (13)
  • [40] Learn from restoration: exploiting task-oriented knowledge distillation in self-supervised person re-identification
    Yang, Enze
    Liu, Yuxin
    Zhao, Shitao
    Liu, Yiran
    Liu, Shuoyan
    VISUAL COMPUTER, 2025,