SELF-KNOWLEDGE DISTILLATION VIA FEATURE ENHANCEMENT FOR SPEAKER VERIFICATION

Cited by: 14
Authors
Liu, Bei [1 ]
Wang, Haoyu [1 ]
Chen, Zhengyang [1 ]
Wang, Shuai [1 ]
Qian, Yanmin [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, AI Inst, Dept Comp Sci & Engn, MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Shanghai, Peoples R China
Keywords
speaker verification; deep embedding learning; model compression; self-knowledge distillation
DOI
10.1109/ICASSP43922.2022.9746529
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Deep speaker embedding learning has recently become the predominant technique for the speaker verification task. Very large neural networks such as ECAPA-TDNN and ResNet can achieve state-of-the-art performance. However, large models are generally computationally expensive, requiring massive storage and computation resources. Model compression has therefore become an active research topic: parameter quantization usually results in significant performance degradation, and knowledge distillation demands a pretrained, complex teacher model. In this paper, we introduce a novel self-knowledge distillation method, namely Self-Knowledge Distillation via Feature Enhancement (SKDFE). It utilizes an auxiliary self-teacher network to distill its own refined knowledge, without the need for a pretrained teacher network. Additionally, we apply the self-knowledge distillation at two different levels: the label level and the feature level. Experiments on the VoxCeleb dataset show that our proposed self-knowledge distillation method enables small models to achieve performance comparable to, or even better than, that of large ones. Large models can also be further improved by applying our method.
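The abstract does not specify the exact SKDFE architecture or loss weighting, but the two-level idea it describes (label-level and feature-level self-distillation from an auxiliary self-teacher branch, combined with the usual classification loss) can be sketched as a single combined loss. The following is a minimal, illustrative PyTorch sketch; the class name TwoLevelSelfDistillationLoss, the temperature, and the weights alpha and beta are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative sketch only: the exact SKDFE loss and self-teacher design are not
# given in this record, so the names, weights, and temperature below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLevelSelfDistillationLoss(nn.Module):
    """Combine label-level and feature-level self-distillation with the usual CE loss."""

    def __init__(self, temperature: float = 4.0, alpha: float = 0.5, beta: float = 0.5):
        super().__init__()
        self.temperature = temperature  # softening factor for the logit distributions
        self.alpha = alpha              # weight of the label-level (KL) term
        self.beta = beta                # weight of the feature-level (MSE) term

    def forward(self, student_logits, teacher_logits, student_feat, teacher_feat, labels):
        # Hard-label classification loss on the student branch.
        ce = F.cross_entropy(student_logits, labels)

        # Label-level distillation: KL divergence between softened class distributions.
        # The self-teacher branch is detached so it only provides targets.
        t = self.temperature
        kl = F.kl_div(
            F.log_softmax(student_logits / t, dim=-1),
            F.softmax(teacher_logits.detach() / t, dim=-1),
            reduction="batchmean",
        ) * (t * t)

        # Feature-level distillation: pull the student feature toward the refined feature.
        mse = F.mse_loss(student_feat, teacher_feat.detach())

        return ce + self.alpha * kl + self.beta * mse


if __name__ == "__main__":
    # Toy shapes: batch of 8 utterances, 192-dim embeddings, 1000 training speakers.
    criterion = TwoLevelSelfDistillationLoss()
    s_logits, t_logits = torch.randn(8, 1000), torch.randn(8, 1000)
    s_feat, t_feat = torch.randn(8, 192), torch.randn(8, 192)
    labels = torch.randint(0, 1000, (8,))
    print(criterion(s_logits, t_logits, s_feat, t_feat, labels))
```

Detaching the self-teacher outputs reflects the common convention that the auxiliary branch supplies targets rather than receiving distillation gradients; the paper itself may train both branches jointly with a different weighting.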
Pages: 7542 - 7546
Number of Pages: 5
Related Papers
50 records in total
  • [1] Self-knowledge distillation via dropout
    Lee, Hyoje
    Park, Yeachan
    Seo, Hyun
    Kang, Myungjoo
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 233
  • [2] Enhancing deep feature representation in self-knowledge distillation via pyramid feature refinement
    Yu, Hao
    Feng, Xin
    Wang, Yunlong
    [J]. PATTERN RECOGNITION LETTERS, 2024, 178 : 35 - 42
  • [3] Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
    Ji, Mingi
    Shin, Seungjae
    Hwang, Seunghyun
    Park, Gibeom
    Moon, Il-Chul
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 10659 - 10668
  • [4] Self-Knowledge Distillation via Progressive Associative Learning
    Zhao, Haoran
    Bi, Yanxian
    Tian, Shuwen
    Wang, Jian
    Zhang, Peiying
    Deng, Zhaopeng
    Liu, Kai
    [J]. ELECTRONICS, 2024, 13 (11)
  • [5] Neighbor self-knowledge distillation
    Liang, Peng
    Zhang, Weiwei
    Wang, Junhuang
    Guo, Yufeng
    [J]. INFORMATION SCIENCES, 2024, 654
  • [6] ROBUST AND ACCURATE OBJECT DETECTION VIA SELF-KNOWLEDGE DISTILLATION
    Xu, Weipeng
    Chu, Pengzhi
    Xie, Renhao
    Xiao, Xiongziyan
    Huang, Hongcheng
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 91 - 95
  • [7] Personalized Edge Intelligence via Federated Self-Knowledge Distillation
    Jin, Hai
    Bai, Dongshan
    Yao, Dezhong
    Dai, Yutong
    Gu, Lin
    Yu, Chen
    Sun, Lichao
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (02) : 567 - 580
  • [8] Automatic Diabetic Retinopathy Grading via Self-Knowledge Distillation
    Luo, Ling
    Xue, Dingyu
    Feng, Xinglong
    [J]. ELECTRONICS, 2020, 9 (09) : 1 - 13
  • [9] UNSUPERVISED FEATURE ENHANCEMENT FOR SPEAKER VERIFICATION
    Nidadavolu, Phani Sankar
    Kataria, Saurabh
    Villalba, Jesus
    Garcia-Perera, Paola
    Dehak, Najim
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7599 - 7603
  • [10] Dual teachers for self-knowledge distillation
    Li, Zheng
    Li, Xiang
    Yang, Lingfeng
    Song, Renjie
    Yang, Jian
    Pan, Zhigeng
    [J]. PATTERN RECOGNITION, 2024, 151