Robustness analysis for compact neural networks

Cited by: 0
Authors
Chen G. [1]
Peng P. [1,2]
Tian Y. [1,2]
Affiliations
[1] Department of Computer Science and Technology, Peking University, Beijing
[2] Peng Cheng Laboratory, Shenzhen
Keywords
Knowledge distillation; Neural networks; Quantization and pruning; Robustness analysis
DOI
10.1360/SST-2021-0233
Abstract
Deep neural networks (DNNs) have achieved performance comparable to humans on many tasks. However, two problems arise when deploying a DNN to terminal devices in real-world scenarios. First, DNNs consume huge amounts of computing resources and memory, so it is difficult to apply them directly to resource-constrained terminal devices. Second, real-world data are often affected by noise, so a model deployed in a real scene needs good robustness. Recently, many compression methods have been proposed to adapt DNNs to resource-constrained terminal devices, and robustness analyses of these compressed models have received increasing attention. In this paper, the adversarial and corruption robustness of several compression algorithms, such as pruning, quantization, and knowledge distillation, is first analyzed. Then, several studies on compact networks with improved robustness are summarized. Finally, several challenges are discussed, and possible research directions for robust compact networks are proposed. © 2022, Science China Press. All rights reserved.
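As a purely illustrative aside (not taken from the paper or this record), the sketch below shows the kind of experiment the abstract describes: compress a network by magnitude pruning, then compare its clean accuracy with its accuracy under an FGSM adversarial perturbation. The toy model, random data, pruning ratio, and perturbation budget are all hypothetical placeholders.

```python
# Minimal sketch: magnitude pruning followed by an FGSM robustness check.
# Model, data, pruning amount, and eps are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def fgsm_attack(model, x, y, eps):
    """FGSM adversarial examples: x_adv = clamp(x + eps * sign(grad_x loss))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Toy network and random "data" stand in for a trained model and a real test set.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(64, 1, 28, 28)        # inputs scaled to [0, 1]
y = torch.randint(0, 10, (64,))      # class labels

# Unstructured magnitude pruning: zero out the 50% smallest weights per linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

x_adv = fgsm_attack(model, x, y, eps=8 / 255)
print(f"clean acc: {accuracy(model, x, y):.3f}  adversarial acc: {accuracy(model, x_adv, y):.3f}")
```

In a real robustness study, the same comparison would be run on a trained model and a benchmark test set, sweeping the pruning ratio (or quantization bit-width) and the perturbation budget.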
Pages: 689-703
Page count: 14