Leveraging different learning styles for improved knowledge distillation in biomedical imaging

Cited: 1
Authors
Niyaz, Usma [1 ]
Sambyal, Abhishek Singh [1 ]
Bathula, Deepti R. [1 ]
Affiliations
[1] Indian Inst Technol Ropar, Dept Comp Sci & Engn, Rupnagar 140001, Punjab, India
Keywords
Feature sharing; Model compression; Learning styles; Knowledge distillation; Online distillation; Mutual learning; Teacher-student network; Multi-student network;
DOI
10.1016/j.compbiomed.2023.107764
CLC number (Chinese Library Classification)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09
Abstract
Learning style refers to the type of training mechanism adopted by an individual to gain new knowledge. As suggested by the VARK model, humans have different learning preferences, such as Visual (V), Auditory (A), Read/Write (R), and Kinesthetic (K), for acquiring and effectively processing information. Our work endeavors to leverage this concept of knowledge diversification to improve the performance of model compression techniques like Knowledge Distillation (KD) and Mutual Learning (ML). Consequently, we use a single teacher and two student networks in a unified framework that not only allows for the transfer of knowledge from teacher to students (KD) but also encourages collaborative learning between students (ML). Unlike the conventional approach, where the teacher shares the same knowledge in the form of predictions or feature representations with the student network, our proposed approach employs a more diversified strategy by training one student with predictions and the other with feature maps from the teacher. We further extend this knowledge diversification by facilitating the exchange of predictions and feature maps between the two student networks, enriching their learning experiences. We have conducted comprehensive experiments with three benchmark datasets for both classification and segmentation tasks using two different network architecture combinations. These experimental results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with similar network configurations) that rely only on predictions, with an average improvement of 2%. Furthermore, consistent performance improvements across tasks and network architectures, as well as over state-of-the-art techniques, establish the robustness and generalizability of the proposed model.
Pages: 10
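
The sketch below is a minimal, hypothetical illustration of the objective described in the abstract: a fixed teacher and two students, where student S1 distills the teacher's softened predictions, student S2 distills the teacher's feature maps, and the two students additionally exchange predictions and feature maps with each other. The TinyNet backbone, the 1x1-convolution adapter, the temperature, and all loss weights are assumptions made for illustration only; they are not the authors' reported configuration.

# Hypothetical sketch (PyTorch) of the diversified KD + ML objective:
# one fixed teacher, two students; S1 learns from the teacher's predictions,
# S2 learns from the teacher's feature maps, and the students exchange both
# predictions and features with each other. Architectures, the adapter, the
# temperature, and all loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy CNN standing in for a real backbone; returns (features, logits)."""
    def __init__(self, channels, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(channels * 16, num_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.head(f.flatten(1))

def kl_soft(student_logits, target_logits, T=4.0):
    """Prediction (soft-label) distillation via temperature-scaled KL divergence."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(target_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T

teacher, s1, s2 = TinyNet(32), TinyNet(16), TinyNet(16)
adapt_s2 = nn.Conv2d(16, 32, 1)    # projects S2 features to the teacher's width

x = torch.randn(8, 1, 28, 28)      # dummy batch
y = torch.randint(0, 2, (8,))

with torch.no_grad():              # teacher is kept fixed during distillation
    ft, zt = teacher(x)
f1, z1 = s1(x)
f2, z2 = s2(x)

ce = F.cross_entropy(z1, y) + F.cross_entropy(z2, y)          # supervised loss
kd_pred = kl_soft(z1, zt)                                      # teacher -> S1: predictions
kd_feat = F.mse_loss(adapt_s2(f2), ft)                         # teacher -> S2: feature maps
ml_pred = kl_soft(z1, z2.detach()) + kl_soft(z2, z1.detach())  # S1 <-> S2: predictions
ml_feat = F.mse_loss(f1, f2.detach()) + F.mse_loss(f2, f1.detach())  # S1 <-> S2: features

# Illustrative weighting; the paper's actual coefficients are not reproduced here.
loss = ce + 0.5 * kd_pred + 0.5 * kd_feat + 0.25 * ml_pred + 0.25 * ml_feat
loss.backward()

In this sketch the adapter is only needed because the student feature maps are narrower than the teacher's, and detaching the peer's outputs in the mutual-learning terms keeps each student's update driven by its own loss.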