- [1] PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation [C]. INTERSPEECH 2021, 2021: 4568-4572.
- [3] Model compression via pruning and knowledge distillation for person re-identification [J]. Journal of Ambient Intelligence and Humanized Computing, 2021, 12: 2149-2161.
- [4] Private Model Compression via Knowledge Distillation [C]. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 2019: 1190+.
- [6] Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement [C]. INTERSPEECH 2022, 2022: 941-945.
- [8] Combining Weight Pruning and Knowledge Distillation for CNN Compression [C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021: 3185-3192.