Self-Supervised Quantization-Aware Knowledge Distillation

Cited by: 0
Authors
Zhao, Kaiqi [1 ]
Zhao, Ming [1 ]
Institutions
[1] Arizona State Univ, Tempe, AZ 85287 USA
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Quantization-aware training (QAT) and Knowledge Distillation (KD) are combined to achieve competitive performance in creating low-bit deep learning models. However, existing works applying KD to QAT require tedious hyper-parameter tuning to balance the weights of different loss terms, assume the availability of labeled training data, and require complex, computationally intensive training procedures for good performance. To address these limitations, this paper proposes a novel Self-Supervised Quantization-Aware Knowledge Distillation (SQAKD) framework. SQAKD first unifies the forward and backward dynamics of various quantization functions, making it flexible for incorporating various QAT works. Then it formulates QAT as a co-optimization problem that simultaneously minimizes the KL-Loss between the full-precision and low-bit models for KD and the discretization error for quantization, without supervision from labels. A comprehensive evaluation shows that SQAKD substantially outperforms the state-of-the-art QAT and KD works for a variety of model architectures. Our code is at: https://github.com/kaiqi123/SQAKD.git.
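The two ideas named in the abstract, unifying the forward and backward dynamics of a quantization function and co-optimizing a KL distillation term with a discretization-error term without labels, can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the uniform quantizer, the sqakd_style_loss helper, the MSE form of the discretization error, and hyper-parameters such as temperature and lam are illustrative assumptions, not taken from the paper or its repository.

```python
# Minimal illustrative sketch (PyTorch assumed); not the authors' released implementation.

import torch
import torch.nn.functional as F


class UniformQuantizerSTE(torch.autograd.Function):
    """Uniform quantizer whose backward pass is a straight-through estimator (STE):
    the forward pass discretizes, while the backward pass passes gradients through
    unchanged -- one common way to unify forward/backward quantization dynamics."""

    @staticmethod
    def forward(ctx, x, num_bits):
        levels = 2 ** num_bits - 1
        x = x.clamp(0.0, 1.0)                      # assumes inputs normalized to [0, 1]
        return torch.round(x * levels) / levels    # snap to the nearest of 2^b levels

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                   # identity gradient w.r.t. x; none for num_bits


def sqakd_style_loss(student_logits, teacher_logits, w_q, w_fp,
                     temperature=4.0, lam=1.0):
    """Label-free co-optimization objective in the spirit of the abstract:
    KL divergence between full-precision (teacher) and low-bit (student) outputs,
    plus a discretization-error term between quantized and full-precision weights.
    `temperature`, `lam`, and the MSE form of the error are illustrative choices."""
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    discretization_error = F.mse_loss(w_q, w_fp)
    return kd_loss + lam * discretization_error
```

In a training loop, the low-bit student would quantize its weights or activations with UniformQuantizerSTE.apply(tensor, num_bits), both models would process the same unlabeled batch, and the combined loss would be back-propagated through the straight-through estimator.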
Pages: 16
Related Papers
50 records in total
  • [41] Quantization-Aware Pruning Criterion for Industrial Applications
    Gil, Yoonhee
    Park, Jong-Hyeok
    Baek, Jongchan
    Han, Soohee
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (03) : 3203 - 3213
  • [42] ISD: Self-Supervised Learning by Iterative Similarity Distillation
    Tejankar, Ajinkya
    Koohpayegani, Soroush Abbasi
    Pillai, Vipin
    Favaro, Paolo
    Pirsiavash, Hamed
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9589 - 9598
  • [43] Self-Supervised Reinforcement Learning with dual-reward for knowledge-aware recommendation
    Zhang, Wei
    Lin, Yuanguo
    Liu, Yong
    You, Huanyu
    Wu, Pengcheng
    Lin, Fan
    Zhou, Xiuze
    APPLIED SOFT COMPUTING, 2022, 131
  • [44] Knowledge-aware reasoning with self-supervised reinforcement learning for explainable recommendation in MOOCs
    Lin, Yuanguo
    Zhang, Wei
    Lin, Fan
    Zeng, Wenhua
    Zhou, Xiuze
    Wu, Pengcheng
NEURAL COMPUTING & APPLICATIONS, 2024, 36 (08) : 4115 - 4132
  • [46] Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
    Dadashzadeh, Amirhossein
    Whone, Alan
    Mirmehdi, Majid
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4230 - 4239
  • [47] AeroRec: An Efficient On-Device Recommendation Framework using Federated Self-Supervised Knowledge Distillation
    Xia, Tengxi
    Ren, Ju
    Rao, Wei
    Zu, Qin
    Wang, Wenjie
    Chen, Shuai
    Zhang, Yaoxue
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2024, : 121 - 130
  • [48] Knowledge distillation of multi-scale dense prediction transformer for self-supervised depth estimation
    Song, Jimin
    Lee, Sang Jun
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [49] Mitigating Backdoor Attacks in Pre-Trained Encoders via Self-Supervised Knowledge Distillation
    Bie, Rongfang
    Jiang, Jinxiu
    Xie, Hongcheng
    Guo, Yu
    Miao, Yinbin
    Jia, Xiaohua
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (05) : 2613 - 2625
  • [50] KNOWLEDGE DISTILLATION FOR NEURAL TRANSDUCERS FROM LARGE SELF-SUPERVISED PRE-TRAINED MODELS
    Yang, Xiaoyu
    Li, Qiujia
    Woodland, Philip C.
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8527 - 8531