Leveraging different learning styles for improved knowledge distillation in biomedical imaging

Cited by: 1
Authors
Niyaz, Usma [1 ]
Sambyal, Abhishek Singh [1 ]
Bathula, Deepti R. [1 ]
Affiliations
[1] Indian Inst Technol Ropar, Dept Comp Sci & Engn, Rupnagar 140001, Punjab, India
Keywords
Feature sharing; Model compression; Learning styles; Knowledge distillation; Online distillation; Mutual learning; Teacher-student network; Multi-student network;
DOI
10.1016/j.compbiomed.2023.107764
CLC Classification
Q [Biological Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Learning style refers to the type of training mechanism an individual adopts to gain new knowledge. As suggested by the VARK model, humans have different learning preferences, such as Visual (V), Auditory (A), Read/Write (R), and Kinesthetic (K), for acquiring and effectively processing information. Our work leverages this concept of knowledge diversification to improve the performance of model compression techniques such as Knowledge Distillation (KD) and Mutual Learning (ML). Accordingly, we use a single-teacher, two-student network in a unified framework that not only allows the transfer of knowledge from teacher to students (KD) but also encourages collaborative learning between the students (ML). Unlike the conventional approach, in which the teacher shares the same knowledge, in the form of predictions or feature representations, with every student network, our proposed approach adopts a more diversified strategy: one student is trained with the teacher's predictions and the other with its feature maps. We further extend this knowledge diversification by facilitating the exchange of predictions and feature maps between the two student networks, enriching their learning experiences. We conducted comprehensive experiments on three benchmark datasets for both classification and segmentation tasks using two different network architecture combinations. The results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with similar network configurations) that use only predictions, with an average improvement of 2%. Furthermore, consistent performance improvements across different tasks, with various network architectures, and over state-of-the-art techniques establish the robustness and generalizability of the proposed model.
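To make the training setup described in the abstract concrete, below is a minimal sketch in PyTorch of one diversified KD + ML training step: the teacher passes predictions to one student and feature maps to the other, and the two students additionally exchange predictions and feature maps. This is not the authors' released code; the toy SmallNet architecture, the 1x1 adapter convolutions, and the loss weights alpha, beta, gamma are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of diversified KD + ML:
# the teacher transfers predictions to one student and feature maps to the other,
# and the two students also exchange predictions and feature maps (mutual learning).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy CNN standing in for a compact network; returns (features, logits)."""
    def __init__(self, num_classes=2, width=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(width * 16, num_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.classifier(f.flatten(1))

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation: KL divergence on temperature-scaled logits."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def feature_loss(student_feat, target_feat, adapter):
    """Feature-map transfer: align channel widths with a 1x1 conv, then MSE."""
    return F.mse_loss(adapter(student_feat), target_feat.detach())

# Teacher (wider) and two students with different "learning styles" (assumed sizes).
teacher = SmallNet(width=64).eval()
student_p = SmallNet(width=16)    # learns from teacher *predictions*
student_f = SmallNet(width=16)    # learns from teacher *feature maps*
adapt_f2t = nn.Conv2d(16, 64, 1)  # projects student_f features to teacher width
adapt_p2f = nn.Conv2d(16, 16, 1)  # projects student_p features for mutual exchange

params = list(student_p.parameters()) + list(student_f.parameters()) \
       + list(adapt_f2t.parameters()) + list(adapt_p2f.parameters())
optim = torch.optim.Adam(params, lr=1e-3)
alpha, beta, gamma = 0.5, 0.5, 0.5  # assumed loss weights

x = torch.randn(8, 3, 32, 32)       # dummy batch
y = torch.randint(0, 2, (8,))
with torch.no_grad():
    t_feat, t_logits = teacher(x)

p_feat, p_logits = student_p(x)
f_feat, f_logits = student_f(x)

# Supervised losses for both students.
loss = F.cross_entropy(p_logits, y) + F.cross_entropy(f_logits, y)
# Diversified teacher-to-student transfer: predictions to one, features to the other.
loss += alpha * kd_loss(p_logits, t_logits)
loss += beta * feature_loss(f_feat, t_feat, adapt_f2t)
# Mutual learning between students: exchange predictions and feature maps.
loss += gamma * (kd_loss(p_logits, f_logits.detach())
                 + kd_loss(f_logits, p_logits.detach()))
loss += gamma * feature_loss(p_feat, f_feat, adapt_p2f)

optim.zero_grad()
loss.backward()
optim.step()
```

In practice the loss weights and the choice of which intermediate layer's feature maps to match would be tuned per task; a segmentation variant would replace the classification head and cross-entropy with a pixel-wise loss while keeping the same prediction/feature-sharing structure.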
Pages: 10
Related Papers
50 records in total
  • [21] Continual Learning With Knowledge Distillation: A Survey
    Li, Songze
    Su, Tonghua
    Zhang, Xuyao
    Wang, Zhongjie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [22] KNOWLEDGE DISTILLATION FOR WIRELESS EDGE LEARNING
    Mohamed, Ahmed P.
Jameel, Abu Shafin Mohammad Mahdee
    El Gamal, Aly
    2021 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2021, : 600 - 604
  • [23] Noise as a Resource for Learning in Knowledge Distillation
    Arani, Elahe
    Sarfraz, Fahad
    Zonooz, Bahram
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3128 - 3137
  • [24] Improved Knowledge Distillation via Teacher Assistant
    Mirzadeh, Seyed Iman
    Farajtabar, Mehrdad
    Li, Ang
    Levine, Nir
    Matsukawa, Akihiro
    Ghasemzadeh, Hassan
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5191 - 5198
  • [25] Adversarial Knowledge Distillation Based Biomedical Factoid Question Answering
    Bai, Jun
    Yin, Chuantao
    Zhang, Jianfei
    Wang, Yanmeng
    Dong, Yi
    Rong, Wenge
    Xiong, Zhang
    IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2023, 20 (01) : 106 - 118
  • [26] Learning Interpretation with Explainable Knowledge Distillation
    Alharbi, Raed
    Vu, Minh N.
    Thai, My T.
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 705 - 714
  • [27] A Survey of Knowledge Distillation in Deep Learning
Shao, R.-R.
Liu, Y.-A.
Zhang, W.
Wang, J.
Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (08) : 1638 - 1673
  • [28] Skill enhancement learning with knowledge distillation
Liu, Naijun
Sun, Fuchun
Fang, Bin
Liu, Huaping
SCIENCE CHINA INFORMATION SCIENCES, 2024, 67 (08) : 206 - 220
  • [29] BookKD: A novel knowledge distillation for reducing distillation costs by decoupling knowledge generation and learning
    Zhu, Songling
    Shang, Ronghua
    Tang, Ke
    Xu, Songhua
    Li, Yangyang
    KNOWLEDGE-BASED SYSTEMS, 2023, 279
  • [30] Continual Learning Based on Knowledge Distillation and Representation Learning
    Chen, Xiu-Yan
    Liu, Jian-Wei
    Li, Wen-Tao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 27 - 38