50 records in total
- [1] Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation [J]. INTERSPEECH 2020, 2020: 896-900
- [2] Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition [J]. INTERSPEECH 2021, 2021: 2986-2990
- [4] Cross-Modal Effects in Speech Perception [J]. ANNUAL REVIEW OF LINGUISTICS, 2019, 5: 49-66
- [5] Cross-Modal Dual Learning for Sentence-to-Video Generation [J]. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019: 1239-1247
- [7] Electroglottograph-Based Speech Emotion Recognition via Cross-Modal Distillation [J]. APPLIED SCIENCES-BASEL, 2022, 12(9)
- [8] Speech Emotion Recognition via Multi-Level Cross-Modal Distillation [J]. INTERSPEECH 2021, 2021: 4488-4492
- [9] XKD: Cross-Modal Knowledge Distillation with Domain Alignment for Video Representation Learning [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024: 14875-14885