Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation

Cited by: 1
Authors:
Chen, Cheng [1 ]
Yin, Yichun [2 ]
Shang, Lifeng [2 ]
Wang, Zhi [3 ,4 ]
Jiang, Xin [2 ]
Chen, Xiao [2 ]
Liu, Qun [2 ]
Affiliations:
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Huawei Noah's Ark Lab, Shenzhen, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords:
BERT; Knowledge distillation; Structured pruning;
DOI:
10.1007/978-3-030-86365-4_46
CLC Classification:
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Task-agnostic knowledge distillation, a teacher-student framework, has been proven effective for BERT compression. Although it achieves promising results on NLP tasks, it requires enormous computational resources. In this paper, we propose Extract Then Distill (ETD), a generic and flexible strategy that reuses the teacher's parameters for efficient and effective task-agnostic distillation and can be applied to students of any size. Specifically, we introduce two variants of ETD, ETD-R and ETD-Impt, which extract the teacher's parameters randomly and according to an importance metric, respectively. In this way, the student has already acquired some knowledge at the start of distillation, which makes the distillation process converge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark and SQuAD. The experimental results show that: (1) compared with the baseline without ETD, ETD saves 70% of the computational cost; moreover, it achieves better results than the baseline given the same computing resources; (2) ETD is generic and proves effective for different distillation methods (e.g., TinyBERT and MiniLM) and students of different sizes. Code is available at https://github.com/huawei-noah/Pretrained-Language-Model.
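The core idea in the abstract, initializing a smaller student from a subset of the teacher's weights before distillation begins, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration of the two extraction variants: the function name extract_linear and the mean-absolute-weight importance proxy are assumptions made here for illustration, not the paper's actual metric or API.

    # Minimal sketch of the two extraction variants (assumption: PyTorch).
    # ETD-R picks rows/columns at random; ETD-Impt picks them by an
    # importance score. The mean-absolute-weight score below is only a
    # stand-in for the paper's importance metric.
    import torch

    def extract_linear(weight: torch.Tensor, out_dim: int, in_dim: int,
                       mode: str = "impt") -> torch.Tensor:
        """Extract an out_dim x in_dim sub-matrix from a teacher weight."""
        if mode == "rand":  # ETD-R: random selection
            rows = torch.randperm(weight.size(0))[:out_dim]
            cols = torch.randperm(weight.size(1))[:in_dim]
        else:               # ETD-Impt: importance-based selection
            rows = weight.abs().mean(dim=1).topk(out_dim).indices
            cols = weight.abs().mean(dim=0).topk(in_dim).indices
        return weight[rows][:, cols].clone()

    # Usage: warm-start a 384-dim student FFN layer from a 768-dim
    # teacher layer, then run the usual task-agnostic distillation.
    teacher_w = torch.randn(3072, 768)
    student_w = extract_linear(teacher_w, 1536, 384, mode="impt")
    print(student_w.shape)  # torch.Size([1536, 384])

Applied to every weight matrix of the teacher, such extraction gives the student a non-random starting point, which is what lets distillation converge faster than training the same student from scratch.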
Pages: 570-581
Number of pages: 12