More from Less: Self-supervised Knowledge Distillation for Routine Histopathology Data

Cited: 0
Authors
Farndale, Lucas [1 ,2 ,3 ,4 ]
Insall, Robert [1 ,2 ,5 ]
Yuan, Ke [1 ,2 ,3 ]
Affiliations
[1] Univ Glasgow, Sch Canc Sci, Glasgow, Lanark, Scotland
[2] Canc Res UK Beatson Inst, Glasgow, Lanark, Scotland
[3] Univ Glasgow, Sch Comp Sci, Glasgow, Lanark, Scotland
[4] Univ Glasgow, Sch Math & Stat, Glasgow, Lanark, Scotland
[5] UCL, Div Biosci, London, England
Funding
Wellcome Trust (UK); Engineering and Physical Sciences Research Council (UK, EPSRC);
Keywords
Representation Learning; Colon Cancer; Multi-Modality;
DOI
10.1007/978-3-031-45673-2_45
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Medical imaging technologies are generating increasingly large amounts of high-quality, information-dense data. Despite this progress, the practical use of advanced imaging technologies for research and diagnosis remains limited by cost and availability, so information-sparse data such as H&E stains are relied on in practice. The study of diseased tissue would greatly benefit from methods which can leverage these information-dense data to extract more value from routine, information-sparse data. Using self-supervised learning (SSL), we demonstrate that it is possible to distil knowledge during training from information-dense data into models which only require information-sparse data for inference. This improves downstream classification accuracy on information-sparse data, making it comparable with the fully-supervised baseline. We find substantial effects on the learned representations, and show that pairing with relevant data can be used to extract desirable features without the arduous process of manual labelling. This approach enables the design of models which require only routine images, but which contain insights from state-of-the-art data, allowing better use of the available resources.
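The abstract describes distilling knowledge from an information-dense modality into an encoder that needs only information-sparse (routine) images at inference. Below is a minimal sketch of one way such paired-branch self-supervised training can be set up, assuming a symmetric contrastive (InfoNCE-style) objective over co-registered image pairs and ResNet-18 backbones; the encoder names, loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (PyTorch) of paired-branch self-supervised distillation.
# The contrastive objective, ResNet-18 backbones, and all names/hyperparameters
# below are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class PairedSSL(nn.Module):
    """Two encoders trained jointly on paired images: one branch sees the
    routine (information-sparse) modality, e.g. H&E, the other sees the
    paired information-dense modality."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.sparse_encoder = resnet18(num_classes=embed_dim)  # kept for inference
        self.dense_encoder = resnet18(num_classes=embed_dim)   # training only

    def forward(self, x_sparse, x_dense):
        z_s = F.normalize(self.sparse_encoder(x_sparse), dim=1)
        z_d = F.normalize(self.dense_encoder(x_dense), dim=1)
        return z_s, z_d

def paired_info_nce(z_s, z_d, temperature: float = 0.1):
    """Symmetric InfoNCE: matching sparse/dense pairs are positives,
    all other pairs in the batch are negatives."""
    logits = z_s @ z_d.t() / temperature                # (B, B) similarities
    targets = torch.arange(z_s.size(0), device=z_s.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

model = PairedSSL()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy paired batch standing in for co-registered tiles of the two modalities.
x_sparse = torch.randn(8, 3, 224, 224)
x_dense = torch.randn(8, 3, 224, 224)

z_s, z_d = model(x_sparse, x_dense)
loss = paired_info_nce(z_s, z_d)
loss.backward()
optimiser.step()

# After training, only model.sparse_encoder is needed: downstream tasks on
# routine images use its representations and the dense branch is discarded.
```

The key property this sketch illustrates is that the information-dense branch influences the sparse encoder only through the training loss, so inference requires nothing beyond routine images.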
Pages: 454-463
Page count: 10