50 records in total
- [22] Self-Knowledge Distillation via Feature Enhancement for Speaker Verification. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 7542-7546.
- [23] MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition. Computer Vision, ECCV 2022, Pt XXIV, 2022, 13684: 534-551.
- [24] A Novel Small Target Detection Strategy: Location Feature Extraction in the Case of Self-Knowledge Distillation. Applied Sciences-Basel, 2023, 13(6).
- [26] Self-Knowledge Distillation for First Trimester Ultrasound Saliency Prediction. Simplifying Medical Ultrasound, ASMUS 2022, 2022, 13565: 117-127.
- [28] Decoupled Feature and Self-Knowledge Distillation for Speech Emotion Recognition. IEEE Access, 2025, 13: 33275-33285.
- [29] Teaching Yourself: A Self-Knowledge Distillation Approach to Action Recognition. IEEE Access, 2021, 9: 105711-105723.
- [30] Active Learning for Lane Detection: A Knowledge Distillation Approach. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 15132-15141.