Clipping-Based Post Training 8-Bit Quantization of Convolution Neural Networks for Object Detection

Cited: 2
Authors
Chen, Leisheng [1 ]
Lou, Peihuang [1 ]
Affiliation
[1] Nanjing Univ Aeronaut & Astronaut, Coll Mech & Elect Engn, Nanjing 210016, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, No. 23
Keywords
object detection; quantization; clipping; post-training quantization; accuracy loss;
DOI
10.3390/app122312405
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Code
0703;
Abstract
Fueled by the development of deep neural networks, breakthroughs have been achieved in many computer vision problems, such as image classification, segmentation, and object detection. These models usually have hundreds of millions of parameters, which makes them expensive in both computation and memory. Motivated by this, this paper proposes a post-training quantization method based on the clipping operation for neural network compression. By quantizing the parameters of a model to 8-bit with the proposed method, its memory consumption is reduced, its computational speed is increased, and its performance is maintained. The method exploits the clipping operation during training, so it saves a large computational cost during quantization. After training, the parameters are quantized to 8-bit based on the clipping value. In addition, the fully connected layer is compressed using singular value decomposition (SVD), and a novel loss-function term is leveraged to further diminish the performance drop caused by quantization. The proposed method is validated on two widely used models, Yolo V3 and Faster R-CNN, for object detection on the PASCAL VOC, COCO, and ImageNet datasets. Results show that it reduces storage consumption to 18.84%, accelerates the model to 381%, and avoids a performance drop (drop < 0.02% on VOC).
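The clipping-based 8-bit scheme the abstract describes can be sketched as follows: weights are clipped to a clipping value determined during training and then mapped linearly onto the signed 8-bit range. This is a minimal illustrative sketch, not the authors' implementation; the function names and the symmetric-quantization choice are assumptions.

```python
import numpy as np

def quantize_8bit(weights: np.ndarray, clip_value: float):
    """Clip to [-clip_value, clip_value], then map linearly to int8.

    `clip_value` stands in for the clipping threshold obtained during
    training; symmetric signed quantization is assumed here.
    """
    scale = clip_value / 127.0                       # step size of the int8 grid
    clipped = np.clip(weights, -clip_value, clip_value)
    q = np.round(clipped / scale).astype(np.int8)    # integer codes in [-127, 127]
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale
```

Outliers beyond the clipping value saturate, which trades a small clipping error for a much finer quantization step on the bulk of the weights.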
Pages: 18
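The SVD-based compression of the fully connected layer mentioned in the abstract can be sketched as follows: the layer's weight matrix is factored into two low-rank factors, so one large matrix multiply becomes two smaller ones. The function name and the rank argument are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svd_compress_fc(W: np.ndarray, rank: int):
    """Factor an FC weight matrix W (out_dim x in_dim) into A @ B.

    A is (out_dim x rank) and B is (rank x in_dim), so storage drops
    from out_dim*in_dim to rank*(out_dim + in_dim) parameters when
    rank is small enough.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B
```

Choosing a rank well below min(out_dim, in_dim) gives the compression; the discarded trailing singular values bound the reconstruction error.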
Related Papers
50 in total
  • [1] Scalable Methods for 8-bit Training of Neural Networks
    Banner, Ron
    Hubara, Itay
    Hoffer, Elad
    Soudry, Daniel
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] QUANTIZATION AND TRAINING OF LOW BIT-WIDTH CONVOLUTIONAL NEURAL NETWORKS FOR OBJECT DETECTION
    Yin, Penghang
    Zhang, Shuai
    Qi, Yingyong
    Xin, Jack
    JOURNAL OF COMPUTATIONAL MATHEMATICS, 2019, 37 (03) : 349 - 360
  • [3] Exploring 8-bit Arithmetic for Training Spiking Neural Networks
    Fernandez-Hart, T.
    Kalganova, T.
    Knight, James C.
    2024 IEEE INTERNATIONAL CONFERENCE ON OMNI-LAYER INTELLIGENT SYSTEMS, COINS 2024, 2024, : 380 - 385
  • [4] Training Deep Neural Networks with 8-bit Floating Point Numbers
    Wang, Naigang
    Choi, Jungwook
    Brand, Daniel
    Chen, Chia-Yu
    Gopalakrishnan, Kailash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [5] DC-MPQ: Distributional Clipping-based Mixed-Precision Quantization for Convolutional Neural Networks
    Lee, Seungjin
    Kim, Hyun
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 130 - 133
  • [6] Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition
    Zhen, Kai
    Nguyen, Hieu Duy
    Chinta, Raviteja
    Susanj, Nathan
    Mouchtaris, Athanasios
    Afzal, Tariq
    Rastrow, Ariya
    INTERSPEECH 2022, 2022, : 3033 - 3037
  • [7] PTMQ: Post-training Multi-Bit Quantization of Neural Networks
    Xu, Ke
    Li, Zhongcheng
    Wang, Shanshan
    Zhang, Xingyi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 16193 - 16201
  • [8] Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks
    Sun, Xiao
    Choi, Jungwook
    Chen, Chia-Yu
    Wang, Naigang
    Venkataramani, Swagath
    Srinivasan, Vijayalakshmi
    Cui, Xiaodong
    Zhang, Wei
    Gopalakrishnan, Kailash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [9] O-2A: Outlier-Aware Compression for 8-bit Post-Training Quantization Model
    Ho, Nguyen-Dong
    Chang, Ik-Joon
    IEEE ACCESS, 2023, 11 : 95467 - 95480
  • [10] Training Deep Neural Networks in 8-bit Fixed Point with Dynamic Shared Exponent Management
    Yamaguchi, Hisakatsu
    Ito, Makiko
    Yoda, Katsu
    Ike, Atsushi
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 1536 - 1541