50 entries in total
- [1] Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning [C]. 2023 IEEE International Conference on Multimedia and Expo (ICME), 2023: 1943-1948
- [2] Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation [C]. International Conference on Machine Learning (ICML), 2023, 202
- [3] Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval [C]. Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2024, 38(5): 4126-4135
- [4] Reinforced Multi-Teacher Selection for Knowledge Distillation [C]. Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), 2021, 35: 14284-14291
- [5] Correlation Guided Multi-teacher Knowledge Distillation [C]. Neural Information Processing (ICONIP 2023), Part IV, 2024, 14450: 562-574
- [7] Confidence-Aware Multi-Teacher Knowledge Distillation [C]. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 4498-4502
- [8] Adaptive multi-teacher multi-level knowledge distillation [J]. Neurocomputing, 2020, 415: 106-113
- [10] Knowledge Distillation via Multi-Teacher Feature Ensemble [J]. IEEE Signal Processing Letters, 2024, 31: 566-570