Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation

Cited by: 1
Authors
Chen, Cheng [1 ]
Yin, Yichun [2 ]
Shang, Lifeng [2 ]
Wang, Zhi [3 ,4 ]
Jiang, Xin [2 ]
Chen, Xiao [2 ]
Liu, Qun [2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Huawei Noahs Ark Lab, Shenzhen, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
BERT; Knowledge distillation; Structured pruning;
DOI
10.1007/978-3-030-86365-4_46
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Task-agnostic knowledge distillation, a teacher-student framework, has been proven effective for BERT compression. Although it achieves promising results on NLP tasks, it requires enormous computational resources. In this paper, we propose Extract Then Distill (ETD), a generic and flexible strategy that reuses the teacher's parameters for efficient and effective task-agnostic distillation and can be applied to students of any size. Specifically, we introduce two variants of ETD, ETD-R and ETD-Impt, which extract the teacher's parameters randomly and according to an importance metric, respectively. In this way, the student has already acquired some knowledge at the start of distillation, which makes the distillation process converge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark and SQuAD. The experimental results show that: (1) compared with the baseline without the ETD strategy, ETD saves 70% of the computation cost; moreover, it achieves better results than the baseline under the same compute budget. (2) ETD is generic and proves effective for different distillation methods (e.g., TinyBERT and MiniLM) and students of different sizes. Code is available at https://github.com/huawei-noah/Pretrained-Language-Model.
Pages: 570-581
Number of pages: 12
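
The abstract describes the core "extract" step: before distillation begins, a subset of the teacher's parameters is copied into the smaller student, either at random (ETD-R) or guided by an importance metric (ETD-Impt). The following is a minimal PyTorch sketch of that idea for a single linear layer. The per-neuron L1-norm importance score, the function names, and the layer sizes are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
# Minimal sketch of teacher-parameter extraction for student initialization,
# in the spirit of ETD. The importance metric (per-neuron L1 norm) is a
# stand-in assumption, not necessarily the metric used by ETD-Impt.
import torch
import torch.nn as nn


def select_neurons(weight: torch.Tensor, keep: int, random: bool) -> torch.Tensor:
    """Pick `keep` output neurons of a teacher weight matrix (out_dim x in_dim)."""
    if random:                                  # ETD-R style: random extraction
        idx = torch.randperm(weight.size(0))[:keep]
    else:                                       # ETD-Impt style: importance-based
        scores = weight.abs().sum(dim=1)        # hypothetical importance: L1 norm per neuron
        idx = torch.topk(scores, keep).indices
    return torch.sort(idx).values


def extract_linear(teacher: nn.Linear, keep_out: int, random: bool = False) -> nn.Linear:
    """Build a narrower student layer initialized from extracted teacher rows."""
    idx = select_neurons(teacher.weight.data, keep_out, random)
    student = nn.Linear(teacher.in_features, keep_out, bias=teacher.bias is not None)
    student.weight.data.copy_(teacher.weight.data[idx])
    if teacher.bias is not None:
        student.bias.data.copy_(teacher.bias.data[idx])
    return student


if __name__ == "__main__":
    teacher_ffn = nn.Linear(768, 3072)          # e.g. one BERT-base FFN projection
    student_ffn = extract_linear(teacher_ffn, keep_out=1024, random=False)
    print(student_ffn)                          # narrower layer, teacher-initialized
```

In a full width reduction, the corresponding input columns of the next layer would have to be extracted with the same indices so consecutive layers remain compatible, and the actual ETD-Impt variant defines its own importance metric rather than the simple L1 norm assumed above.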