Clipping-Based Post Training 8-Bit Quantization of Convolution Neural Networks for Object Detection

Cited: 2
Authors
Chen, Leisheng [1 ]
Lou, Peihuang [1 ]
Affiliation
[1] Nanjing Univ Aeronaut & Astronaut, Coll Mech & Elect Engn, Nanjing 210016, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 23
Keywords
object detection; quantization; clipping; post-training quantization; accuracy loss;
DOI
10.3390/app122312405
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
Fueled by the development of deep neural networks, breakthroughs have been achieved in many computer vision problems, such as image classification, segmentation, and object detection. These models usually have hundreds of millions of parameters, which makes them expensive in both computation and memory. Motivated by this, this paper proposes a post-training quantization method based on the clipping operation for neural network compression. By quantizing a model's parameters to 8-bit with the proposed method, memory consumption is reduced, computational speed is increased, and performance is maintained. The method exploits the clipping operation during training, which saves a large computational cost during quantization: after training, the parameters are quantized to 8-bit based on the clipping value. In addition, the fully connected layer is compressed using singular value decomposition (SVD), and a novel loss function term is leveraged to further diminish the performance drop caused by quantization. The proposed method is validated on two widely used object detection models, Yolo V3 and Faster R-CNN, on the PASCAL VOC, COCO, and ImageNet datasets. Results show that it reduces storage consumption to 18.84% of the original, accelerates the model to 381% of its original speed, and avoids a performance drop (< 0.02% on VOC).
Pages: 18
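As a hedged illustration of the two compression steps described in the abstract, the sketch below shows symmetric 8-bit quantization driven by a fixed clipping value, plus SVD-based compression of a fully connected layer, in NumPy. The function names, the clipping value of 3.0, and the rank k=64 are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumption): clipping-based symmetric INT8 quantization
    # plus rank-k SVD compression of a fully connected layer.
    import numpy as np

    def quantize_int8(weights, clip_value):
        # The clipping value bounds the dynamic range: [-clip_value, clip_value]
        # is mapped onto the signed 8-bit range [-127, 127].
        scale = clip_value / 127.0
        clipped = np.clip(weights, -clip_value, clip_value)
        return np.round(clipped / scale).astype(np.int8), scale

    def dequantize(q, scale):
        # Recover approximate float weights to measure quantization error.
        return q.astype(np.float32) * scale

    def svd_compress_fc(fc_weight, k):
        # Keep the top-k singular values so the m x n weight matrix is
        # replaced by two rank-k factors with fc_weight ~= a @ b.
        u, s, vt = np.linalg.svd(fc_weight, full_matrices=False)
        return u[:, :k] * s[:k], vt[:k, :]

    w = np.random.randn(256, 512).astype(np.float32)
    q, scale = quantize_int8(w, clip_value=3.0)  # hypothetical clipping value
    a, b = svd_compress_fc(w, k=64)              # hypothetical rank
    print("max quantization error:", np.abs(w - dequantize(q, scale)).max())
    print("SVD reconstruction error:", np.linalg.norm(w - a @ b))

Storing int8 weights plus two low-rank factors instead of float32 matrices is what produces the kind of size reduction the abstract reports; the paper's specific contribution, obtaining the clipping value during training, is not reproduced here.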