MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis

Cited by: 0
Authors
Le Vuong, Trinh Thi [1 ]
Kwak, Jin Tae [1 ]
Affiliations
[1] School of Electrical Engineering, Korea University, Seoul, Republic of Korea
Keywords
Contrastive Learning
DOI
10.1016/j.media.2024.103421
Abstract
There is no doubt that advanced artificial intelligence models and high-quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue for any specific task, for reasons that include privacy and ethical constraints on patient data. In this work, we propose to exploit knowledge distillation, i.e., utilizing an existing model to learn a new target model, to overcome such issues in computational pathology. Specifically, we employ a student–teacher framework to learn a target model from a pre-trained teacher model without direct access to the source data, and distill the relevant knowledge via momentum contrastive learning with a multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate the informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios in which the teacher model was trained on the same, relevant, or irrelevant classification tasks with respect to the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology. © 2024 Elsevier B.V.
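The abstract describes the mechanism only at a high level; the sketch below illustrates, in PyTorch, how momentum contrastive distillation with multi-head attention could be wired together. It is a reconstruction for illustration, not the authors' MoMA implementation: the encoder interfaces, projection heads, queue size, temperature, and the exact placement of the attention module are all assumptions made for this example.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class MomentumContrastiveDistiller(nn.Module):
    """Illustrative sketch only: distills a frozen, pre-trained teacher into a
    student via InfoNCE, using a momentum key encoder and multi-head attention
    over the mini-batch for context-aware representations."""

    def __init__(self, student: nn.Module, teacher: nn.Module,
                 feat_dim: int = 512, proj_dim: int = 128,
                 queue_size: int = 4096, ema: float = 0.999, tau: float = 0.07):
        super().__init__()
        self.student = student                          # trainable target model
        self.teacher = teacher.eval()                   # pre-trained teacher, frozen
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.momentum_student = copy.deepcopy(student)  # EMA copy providing stable keys
        for p in self.momentum_student.parameters():
            p.requires_grad_(False)
        self.ema, self.tau = ema, tau
        self.proj_s = nn.Linear(feat_dim, proj_dim)     # student projection head
        self.proj_t = nn.Linear(feat_dim, proj_dim)     # teacher projection head
        # multi-head attention over the mini-batch: every sample attends to the
        # other samples, which is one way to obtain context-aware features
        self.attn = nn.MultiheadAttention(proj_dim, num_heads=4, batch_first=True)
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, proj_dim), dim=1))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # exponential moving average of the student weights
        for p_s, p_m in zip(self.student.parameters(),
                            self.momentum_student.parameters()):
            p_m.mul_(self.ema).add_(p_s, alpha=1.0 - self.ema)

    @torch.no_grad()
    def _enqueue(self, keys):
        # ring buffer of negatives; assumes queue_size is divisible by the batch size
        n, ptr = keys.shape[0], int(self.queue_ptr)
        self.queue[ptr:ptr + n] = keys
        self.queue_ptr[0] = (ptr + n) % self.queue.shape[0]

    def forward(self, x_q, x_k):
        # x_q, x_k: two augmented views of the same histopathology patches;
        # student(x) and teacher(x) are assumed to return (B, feat_dim) features
        f_q = self.proj_s(self.student(x_q))
        f_q, _ = self.attn(f_q.unsqueeze(0), f_q.unsqueeze(0), f_q.unsqueeze(0))
        q = F.normalize(f_q.squeeze(0), dim=1)                           # context-aware queries
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.proj_s(self.momentum_student(x_k)), dim=1)
        t = F.normalize(self.proj_t(self.teacher(x_k).detach()), dim=1)  # teacher targets
        # InfoNCE: the teacher target is the positive, the queue supplies negatives
        pos = (q * t).sum(dim=1, keepdim=True)
        neg = q @ self.queue.clone().detach().t()
        logits = torch.cat([pos, neg], dim=1) / self.tau
        labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
        loss = F.cross_entropy(logits, labels)
        self._enqueue(k)
        return loss

In a training loop, two augmentations of each patch would be passed through forward() and the returned loss back-propagated, typically alongside a supervised loss on the target task; whether the contrastive targets come from the teacher, the momentum encoder, or both is a design choice that would need to follow the paper itself.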
Related papers (50 records in total)
  • [1] Enhancing Recommendation Capabilities Using Multi-Head Attention-Based Federated Knowledge Distillation
    Wu, Aming
    Kwon, Young-Woo
    [J]. IEEE ACCESS, 2023, 11 : 45850 - 45861
  • [2] Multi-Head Attention-Based Spectrum Sensing for Radio
    Devarakonda, B. V. Ravisankar
    Nandanavam, Venkateswararao
    [J]. INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2023, 14 (02) : 135 - 143
  • [3] A Multi-scale Graph Network with Multi-head Attention for Histopathology Image Diagnosis
    Xing, Xiaodan
    Ma, Yixin
    Jin, Lei
    Sun, Tianyang
    Xue, Zhong
    Shi, Feng
    Wu, Jinsong
    Shen, Dinggang
    [J]. MICCAI WORKSHOP ON COMPUTATIONAL PATHOLOGY, VOL 156, 2021, 156 : 227 - 235
  • [4] Multi-scale Contrastive Learning with Attention for Histopathology Image Classification
    Tan, Jing Wei
    Khoa Tuan Nguyen
    Lee, Kyoungbun
    Jeong, Won-Ki
    [J]. MEDICAL IMAGING 2023, 2023, 12471
  • [5] Federated learning based multi-head attention framework for medical image classification
    Firdaus, Naima
    Raza, Zahid
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024,
  • [6] DEDUCE: Multi-head attention decoupled contrastive learning to discover cancer subtypes based on multi-omics data
    Pan, Liangrui
    Wang, Xiang
    Liang, Qingchun
    Shang, Jiandong
    Liu, Wenjuan
    Xu, Liwen
    Peng, Shaoliang
    [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 257
  • [7] Multi-head attention-based two-stream EfficientNet for action recognition
    Zhou, Aihua
    Ma, Yujun
    Ji, Wanting
    Zong, Ming
    Yang, Pei
    Wu, Min
    Liu, Mingzhe
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (02) : 487 - 498
  • [8] Intelligent Bearing Fault Diagnosis Using Multi-Head Attention-Based CNN
    Wang, Hui
    Xu, Jiawen
    Yan, Ruqiang
    Sun, Chuang
    Chen, Xuefeng
    [J]. PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON THROUGH-LIFE ENGINEERING SERVICES (TESCONF 2019), 2020, 49 : 112 - 118
  • [9] Personalized federated learning based on multi-head attention algorithm
    Jiang, Shanshan
    Lu, Meixia
    Hu, Kai
    Wu, Jiasheng
    Li, Yaogen
    Weng, Liguo
    Xia, Min
    Lin, Haifeng
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (11) : 3783 - 3798