Quantune: Post-training quantization of convolutional neural networks using extreme gradient boosting for fast deployment

Cited by: 12
Authors
Lee, Jemin [1 ]
Yu, Misun [1 ]
Kwon, Yongin [1 ]
Kim, Taeho [1 ]
Affiliations
[1] Electronics and Telecommunications Research Institute (ETRI), Artificial Intelligence Research Laboratory, Daejeon 34129, South Korea
Keywords
Quantization; Neural networks; Model compression; Deep learning compiler
DOI
10.1016/j.future.2022.02.005
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline code
081202
Abstract
To deploy convolutional neural networks (CNNs) on a range of resource-constrained targets, the models must be compressed through quantization, which converts full-precision representations into lower-bit representations. To avoid problems such as sensitivity to the training dataset, high computational requirements, and long training times, post-training quantization methods that do not require retraining have been proposed. In addition, to compensate for the accuracy drop without retraining, previous studies on post-training quantization have proposed several complementary techniques: calibration, quantization schemes, clipping, granularity, and mixed precision. Because these techniques are complementary and CNN models differ in their characteristics, generating a quantized model with minimal error requires exploring all possible combinations of them; however, an exhaustive search is too time-consuming and a heuristic search is suboptimal. To overcome this challenge, we propose an auto-tuner called Quantune, which builds a gradient tree boosting model to accelerate the search for quantization configurations and reduce the quantization error. We evaluate and compare Quantune against random, grid, and genetic-algorithm search. The experimental results show that Quantune reduces the search time for quantization by approximately 36.5x with an accuracy loss of 0.07-0.65% across six CNN models, including the fragile ones (MobileNet, SqueezeNet, and ShuffleNet). To support multiple targets and incorporate continuously evolving quantization techniques, Quantune is implemented on a full-fledged deep learning compiler as an open-source project. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
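The abstract describes Quantune's core idea: using a gradient-boosted-tree model to steer the search over quantization configurations instead of an exhaustive or heuristic sweep. The sketch below only illustrates that idea and is not the authors' implementation; the configuration space, the evaluate_quantized_accuracy placeholder, and the single selection step are assumptions, whereas the released Quantune is integrated with a deep learning compiler and obtains real accuracy measurements from quantized models.

# Illustrative sketch (not the authors' code): surrogate-guided search over
# post-training quantization configurations, in the spirit of Quantune.
import itertools
import random

import numpy as np
from xgboost import XGBRegressor

# Hypothetical knobs drawn from the abstract: calibration, scheme,
# clipping, granularity, and mixed precision (weight bit-width here).
CONFIG_SPACE = {
    "calibration": ["max", "entropy", "percentile"],
    "scheme": ["symmetric", "asymmetric"],
    "clipping": [0, 1],
    "granularity": ["per_tensor", "per_channel"],
    "weight_bits": [4, 8],
}

def encode(cfg):
    """Ordinal encoding of a configuration as a feature vector for the tree model."""
    return [CONFIG_SPACE[k].index(cfg[k]) for k in sorted(CONFIG_SPACE)]

def evaluate_quantized_accuracy(cfg):
    """Placeholder: in practice, quantize the CNN with `cfg` and measure
    top-1 accuracy on a validation set. Here, a synthetic score is returned."""
    rng = random.Random(hash(tuple(sorted(cfg.items()))))
    return 0.70 + 0.05 * rng.random()

all_configs = [dict(zip(CONFIG_SPACE, vals))
               for vals in itertools.product(*CONFIG_SPACE.values())]

# 1) Evaluate a small random subset of configurations to seed the surrogate.
random.seed(0)
seed_cfgs = random.sample(all_configs, 8)
X = np.array([encode(c) for c in seed_cfgs], dtype=float)
y = np.array([evaluate_quantized_accuracy(c) for c in seed_cfgs])

# 2) Fit a gradient-boosted-tree surrogate that predicts accuracy from a config.
surrogate = XGBRegressor(n_estimators=200, max_depth=4)
surrogate.fit(X, y)

# 3) Rank the remaining configurations by predicted accuracy and evaluate
#    only the most promising ones, instead of sweeping all combinations.
candidates = [c for c in all_configs if c not in seed_cfgs]
preds = surrogate.predict(np.array([encode(c) for c in candidates], dtype=float))
best_idx = int(np.argmax(preds))
print("next configuration to try:", candidates[best_idx],
      "predicted accuracy:", float(preds[best_idx]))

In a full tuner, steps 2 and 3 would be repeated, feeding each newly measured configuration back into the surrogate so that the search converges on a low-error quantization setting after evaluating only a small fraction of the space.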
Pages: 124-135
Page count: 12
Related Papers
50 records in total
  • [1] Normalized Post-training Quantization for Photonic Neural Networks
    Kirtas, M.
    Passalis, N.
    Oikonomou, A.
    Mourgias-Alexandris, G.
    Moralis-Pegios, M.
    Pleros, N.
    Tefas, A.
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 657 - 663
  • [2] Post-training Quantization for Neural Networks with Provable Guarantees*
    Zhang, Jinjie
    Zhou, Yixuan
    Saab, Rayan
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2023, 5 (02): : 373 - 399
  • [3] PTMQ: Post-training Multi-Bit Quantization of Neural Networks
    Xu, Ke
    Li, Zhongcheng
    Wang, Shanshan
    Zhang, Xingyi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 16193 - 16201
  • [4] VLCQ: Post-training quantization for deep neural networks using variable length coding
    Abdel-Salam, Reem
    Abdel-Gawad, Ahmed H.
    Wassal, Amr G.
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 166
  • [5] A Gradient Boosting Approach for Training Convolutional and Deep Neural Networks
    Emami, Seyedsaman
    Martinez-Munoz, Gonzalo
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2023, 4 : 313 - 321
  • [6] Post-Training Quantization for Energy Efficient Realization of Deep Neural Networks
    Latotzke, Cecilia
    Balim, Batuhan
    Gemmeke, Tobias
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1559 - 1566
  • [7] A novel framework for deployment of CNN models using post-training quantization on microcontroller
    Sailesh, M.
    Selvakumar, K.
    Prasanth, Narayanan
    MICROPROCESSORS AND MICROSYSTEMS, 2022, 94
  • [8] Post-training approach for mitigating overfitting in quantum convolutional neural networks
    Shinde, Aakash Ravindra
    Jain, Charu
    Kalev, Amir
    PHYSICAL REVIEW A, 2024, 110 (04)
  • [9] Lost-minimum post-training parameter quantization method for convolutional neural network
    Zhang F.
    Huang Y.
    Fang Z.
    Guo W.
    Tongxin Xuebao/Journal on Communications, 2022, 43 (04): : 114 - 122
  • [10] Effective Post-Training Quantization Of Neural Networks For Inference on Low Power Neural Accelerator
    Demidovskij, Alexander
    Smirnov, Eugene
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,