SKILL: SIMILARITY-AWARE KNOWLEDGE DISTILLATION FOR SPEECH SELF-SUPERVISED LEARNING

Cited by: 0
Authors
Zampierin, Luca [1 ,2 ]
Hacene, Ghouthi Boukli [1 ,5 ]
Nguyen, Bac [1 ]
Ravanelli, Mirco [3 ,4 ,5 ]
Affiliations
[1] Sony Europe BV, Stuttgart Lab 1, Stuttgart, Germany
[2] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
[3] Concordia Univ, Montreal, PQ, Canada
[4] Univ Montreal, Montreal, PQ, Canada
[5] Mila Quebec AI Inst, Montreal, PQ, Canada
Keywords
Model compression; self-supervised learning; knowledge distillation
DOI
10.1109/ICASSPW62465.2024.10626978
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Self-supervised learning (SSL) has achieved remarkable success across various speech-processing tasks. To enhance its efficiency, previous works often leverage compression techniques. A notable recent attempt is DPHuBERT, which applies joint knowledge distillation (KD) and structured pruning to learn a significantly smaller SSL model. In this paper, we contribute to this research domain by introducing SKILL, a novel method that conducts distillation across groups of layers instead of distilling individual, arbitrarily selected layers within the teacher network. The layers to distill are identified through a hierarchical clustering procedure applied to layer-similarity measures. Extensive experiments demonstrate that our distilled version of WavLM Base+ not only outperforms DPHuBERT but also achieves state-of-the-art results in the 30M-parameter model class across several SUPERB tasks.
Pages: 675-679
Page count: 5
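The abstract above describes grouping teacher layers via hierarchical clustering of layer-similarity measures and then distilling the groups rather than individual, hand-picked layers. The paper's own code is not reproduced in this record, so the following is only a minimal sketch of that grouping step, assuming linear CKA as the similarity measure and SciPy average-linkage clustering; the layer count (13, as in a WavLM Base+ teacher), the random probe batch, and the number of groups are illustrative choices, not the authors' implementation.

    # Minimal sketch (not the authors' code): group teacher layers by hierarchical
    # clustering of a layer-similarity matrix. The CKA similarity measure and the
    # number of groups are assumptions made for illustration only.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def linear_cka(x, y):
        """Linear CKA similarity between two activation matrices of shape (samples, dim)."""
        x = x - x.mean(axis=0, keepdims=True)
        y = y - y.mean(axis=0, keepdims=True)
        hsic = np.linalg.norm(x.T @ y, "fro") ** 2
        return hsic / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

    def group_layers(layer_outputs, n_groups=4):
        """Cluster layers by pairwise CKA similarity; returns one group label per layer."""
        n = len(layer_outputs)
        sim = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                sim[i, j] = sim[j, i] = linear_cka(layer_outputs[i], layer_outputs[j])
        # Turn similarity into a condensed distance matrix for average-linkage clustering.
        dist = squareform(1.0 - sim, checks=False)
        z = linkage(dist, method="average")
        return fcluster(z, t=n_groups, criterion="maxclust")

    # Usage: layer_outputs[i] would hold teacher layer i's activations on a probe
    # batch, e.g. 13 hidden states of a WavLM Base+ forward pass flattened to
    # (frames, dim). Random data stands in for real activations here.
    layers = [np.random.randn(512, 768) for _ in range(13)]
    print(group_layers(layers, n_groups=4))  # one group label in {1..4} per layer

In this sketch, each resulting group of layers would then share a single distillation target, which is the high-level idea conveyed by the abstract.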
Related Papers
50 records in total (first 10 shown below)
  • [1] SimAN: Exploring Self-Supervised Representation Learning of Scene Text via Similarity-Aware Normalization
    Luo, Canjie
    Jin, Lianwen
    Chen, Jingdong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 1029 - 1038
  • [2] Self-Supervised Quantization-Aware Knowledge Distillation
    Zhao, Kaiqi
    Zhao, Ming
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [3] ISD: Self-Supervised Learning by Iterative Similarity Distillation
    Tejankar, Ajinkya
    Koohpayegani, Soroush Abbasi
    Pillai, Vipin
    Favaro, Paolo
    Pirsiavash, Hamed
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9589 - 9598
  • [4] FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
    Lee, Yeonghyeon
    Jang, Kangwook
    Goo, Jahyun
    Jung, Youngmoon
    Kim, Hoirin
    INTERSPEECH 2022, 2022, : 3588 - 3592
  • [5] Poster: Self-Supervised Quantization-Aware Knowledge Distillation
    Zhao, Kaiqi
    Zhao, Ming
    2023 IEEE/ACM SYMPOSIUM ON EDGE COMPUTING, SEC 2023, 2023, : 250 - 252
  • [6] Emotion-Aware Speech Self-Supervised Representation Learning with Intensity Knowledge
    Liu, Rui
    Ma, Zening
    INTERSPEECH 2024, 2024, : 3180 - 3184
  • [7] Emotion-Aware Speech Self-Supervised Representation Learning with Intensity Knowledge
    Liu, Rui
    Ma, Zening
arXiv preprint
  • [8] Self-supervised knowledge distillation in counterfactual learning for VQA
    Bi, Yandong
    Jiang, Huajie
    Zhang, Hanfu
    Hu, Yongli
    Yin, Baocai
    PATTERN RECOGNITION LETTERS, 2024, 177 : 33 - 39
  • [9] Self-supervised knowledge distillation for complementary label learning
    Liu, Jiabin
    Li, Biao
    Lei, Minglong
    Shi, Yong
    NEURAL NETWORKS, 2022, 155 : 318 - 327
  • [10] Self-supervised heterogeneous graph learning with iterative similarity distillation
    Wang, Tianfeng
    Pan, Zhisong
    Hu, Guyu
    Xu, Kun
    Zhang, Yao
    KNOWLEDGE-BASED SYSTEMS, 2023, 276