30 items in total
- [1] Cross-Architecture Knowledge Distillation [J]. International Journal of Computer Vision, 2024, 132(8): 2798-2824.
- [2] Cross-Architecture Knowledge Distillation [J]. Computer Vision - ACCV 2022, Pt V, 2023, 13845: 179-195.
- [3] Cross-Architecture Distillation for Face Recognition [J]. Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, 2023: 8076-8085.
- [4] Adaptive Cross-architecture Mutual Knowledge Distillation [J]. 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024, 2024.
- [5] FeatureMix: A General Adversarial Defense Method for Pretrained Language Models [J]. IEEE Conference on Global Communications, GLOBECOM, 2023: 3415-3420.
- [6] ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models [J]. NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 3519-3539.
- [8] Leveraging Acoustic and Linguistic Embeddings from Pretrained Speech and Language Models for Intent Classification [J]. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 7498-7502.
- [9] Expanding Language-Image Pretrained Models for General Video Recognition [J]. Computer Vision - ECCV 2022, Pt IV, 2022, 13664: 1-18.