Traffic sign classification algorithm based on compressed convolutional neural network

Cited by: 0
Authors
Zhang J. [1 ]
Wang W. [1 ]
Lu C. [1 ]
Li X. [1 ]
Affiliations
[1] Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha
Keywords
Channel pruning; Convolutional neural network; Model compression; Quantization; Traffic sign classification;
DOI
10.13245/j.hust.190119
Abstract
Aiming at the problem that automotive systems can hardly meet the computing-resource and storage demands of large convolutional neural networks, a traffic-sign classification algorithm based on a compressed convolutional neural network was proposed. First, networks were trained on the GTSRB dataset, and VGG-16 and AlexNet were selected after comprehensive comparison. Then, channels were pruned based on a Taylor-expansion criterion to delete redundant feature-map channels, and ternary quantized parameters were trained. Finally, the experimental results of channel pruning, ternary parameter quantization, and their combination were compared. The results show that the proposed algorithm effectively compresses the networks and reduces the number of operations. After combined compression, the storage size of VGG-16 is reduced by half and the number of parameters is 9% of the original model; the floating-point operations (FLOPs) are reduced to one-fifth of the original, the model loads five times faster and tests two times faster, with an accuracy of 97%. © 2019, Editorial Board of Journal of Huazhong University of Science and Technology. All rights reserved.
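The two compression steps named in the abstract can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the channel-importance score follows the first-order Taylor criterion of Molchanov et al. (reference [5]), while `ternary_quantize` uses a fixed threshold ratio and simple per-sign mean scales as simplifying assumptions; the trained ternary quantization of Zhu et al. (reference [10]) instead learns the two scale factors by back-propagation.

```python
import numpy as np

def taylor_channel_importance(activations, gradients):
    """First-order Taylor criterion: a channel's importance is the absolute
    value of (activation * gradient of the loss w.r.t. that activation),
    averaged over the batch and spatial positions. Channels with the lowest
    scores are candidates for pruning."""
    # activations, gradients: arrays of shape (batch, channels, H, W)
    scores = np.abs(activations * gradients).mean(axis=(0, 2, 3))
    # L2-normalize scores across channels so layers are comparable
    return scores / (np.linalg.norm(scores) + 1e-8)

def ternary_quantize(weights, threshold_ratio=0.05):
    """Map weights to three values {-wn, 0, +wp}. The threshold_ratio and
    the per-sign mean scales are assumptions for illustration only."""
    delta = threshold_ratio * np.max(np.abs(weights))
    mask_pos = weights > delta
    mask_neg = weights < -delta
    wp = weights[mask_pos].mean() if mask_pos.any() else 0.0
    wn = weights[mask_neg].mean() if mask_neg.any() else 0.0
    q = np.zeros_like(weights)
    q[mask_pos] = wp   # positive weights collapse to one shared value
    q[mask_neg] = wn   # negative weights collapse to another
    return q
```

In the combined-compression setting the abstract describes, the network would first be pruned by repeatedly removing the lowest-scoring channels and fine-tuning, and the surviving weights would then be quantized to ternary values, which is what yields the reported reductions in storage and FLOPs.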
Pages: 103-108
Page count: 5
References
15 in total
  • [1] Krizhevsky A., Sutskever I., Hinton G.E., ImageNet classification with deep convolutional neural networks, Proc of Advances in Neural Information Processing Systems, pp. 1097-1105, (2012)
  • [2] Simonyan K., Zisserman A., Very deep convolutional networks for large-scale image recognition, Proc of International Conference on Learning Representations, pp. 1-14, (2015)
  • [3] Ren S., He K., Girshick R., Et al., Faster R-CNN: towards real-time object detection with region proposal networks, Proc of Conference on Advances in Neural Information Processing Systems, pp. 91-99, (2015)
  • [4] Hinton G., Vinyals O., Dean J., Distilling the knowledge in a neural network, Proc of Conference on Advances in Neural Information Processing Systems, pp. 2644-2652, (2014)
  • [5] Molchanov P., Tyree S., Karras T., Et al., Pruning convolutional neural networks for resource efficient transfer learning, Proc of International Conference on Learning Representations, pp. 324-332, (2017)
  • [6] Hu H., Peng R., Tai Y.W., Et al., Network trimming: a data-driven neuron pruning approach towards efficient deep architectures, Proc of International Conference on Learning Representations, pp. 214-222, (2017)
  • [7] Tai C., Xiao T., Wang X., Et al., Convolutional neural networks with low-rank regularization, Proc of International Conference on Learning Representations, (2016)
  • [8] Novikov A., Podoprikhin D., Osokin A., Et al., Tensorizing neural networks, Proc of Conference on Advances in Neural Information Processing Systems, pp. 442-450, (2015)
  • [9] Vanhoucke V., Senior A., Mao M.Z., Improving the speed of neural networks on CPUs, Proc of Deep Learning and Unsupervised Feature Learning NIPS Work-shop, pp. 1-8, (2011)
  • [10] Zhu C., Han S., Mao H., Et al., Trained ternary quantization, Proc of International Conference on Learning Representations, (2017)