Redundancy-Aware Pruning of Convolutional Neural Networks

Cited by: 3
|
Author
Xie, Guotian [1,2]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510006, Guangdong, Peoples R China
[2] Sun Yat Sen Univ, Guangdong Key Lab Informat Secur Technol, Guangzhou 510006, Guangdong, Peoples R China
Keywords
Neurons;
D O I
10.1162/neco_a_01330
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pruning is an effective way to slim and speed up convolutional neural networks. Generally, previous work pruned neural networks directly in the original feature space without considering the correlation of neurons. We argue that such pruning leaves some redundancy in the pruned networks. In this letter, we propose to prune in an intermediate space in which the correlation of neurons is eliminated. To this end, the input and output of a convolutional layer are first mapped to an intermediate space by an orthogonal transformation; neurons are then evaluated and pruned in that space. Extensive experiments show that our redundancy-aware pruning method surpasses state-of-the-art pruning methods in both efficiency and accuracy. Notably, with our redundancy-aware pruning method, ResNet models pruned for a threefold speed-up achieve competitive performance with fewer floating-point operations (FLOPs), even compared to DenseNet.
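The decorrelation step the abstract describes can be illustrated with a minimal PCA-style sketch: diagonalize the covariance of a layer's channel activations with an orthogonal transform, score the decorrelated directions by variance, and keep only the most informative ones. The function name, the NumPy formulation, and the variance-based importance score are illustrative assumptions for this sketch, not the authors' exact procedure.

```python
import numpy as np

def redundancy_aware_prune(activations, keep_ratio=0.5):
    """Sketch: decorrelate channel activations with an orthogonal
    (PCA-style) transform, then prune in that intermediate space.

    activations: (n_samples, n_channels) array of a layer's outputs.
    Returns the kept orthogonal directions and their variances.
    """
    # Center the activations so covariance captures correlation structure.
    X = activations - activations.mean(axis=0, keepdims=True)
    # Channel covariance; correlated channels share variance here.
    cov = X.T @ X / len(X)
    # Eigendecomposition of the symmetric covariance yields an
    # orthogonal transform Q into the decorrelated intermediate space.
    eigvals, Q = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]     # re-sort descending by variance
    eigvals, Q = eigvals[order], Q[:, order]
    # In the decorrelated space, each direction's importance is simply
    # its variance; drop the low-variance, redundant directions.
    k = max(1, int(keep_ratio * len(eigvals)))
    return Q[:, :k], eigvals[:k]
```

On rank-deficient (i.e., redundant) activations, the discarded directions carry almost no variance, which is the intuition behind pruning in the decorrelated space rather than the original one.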
Pages: 2482-2506
Page count: 25
Related papers
50 records in total
  • [1] Fast CNN Pruning via Redundancy-Aware Training
    Dong, Xiao
    Liu, Lei
    Li, Guangli
    Zhao, Peng
    Feng, Xiaobing
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT I, 2018, 11139 : 3 - 13
  • [3] Redundancy-aware topology management in wireless sensor networks
    Al-Omari, Safwan
    Shi, Weisong
    [J]. 2006 INTERNATIONAL CONFERENCE ON COLLABORATIVE COMPUTING: NETWORKING, APPLICATIONS AND WORKSHARING, 2006, : 29 - +
  • [4] A Redundancy-Aware Face Structure for Wireless Sensor Networks
    Razzaq, Ammara
    Khedr, Ahmed M.
    Al Aghbari, Zaher
    [J]. 2018 8TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND INFORMATION TECHNOLOGY (CSIT), 2018, : 38 - 42
  • [5] Redundancy-Aware Maximal Cliques
    Wang, Jia
    Cheng, James
    Fu, Ada Wai-Chee
    [J]. 19TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'13), 2013, : 122 - 130
  • [6] RCFL: Redundancy-Aware Collaborative Federated Learning in Vehicular Networks
    Hui, Yilong
    Hu, Jie
    Cheng, Nan
    Zhao, Gaosheng
    Chen, Rui
    Luan, Tom H.
    Aldubaikhy, Khalid
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (06) : 5539 - 5553
  • [7] RACE: An Efficient Redundancy-aware Accelerator for Dynamic Graph Neural Network
    Yu, Hui
    Zhang, Yu
    Zhao, Jin
    Liao, Yujian
    Huang, Zhiying
    He, Donghao
    Gu, Lin
    Jin, Hai
    Liao, Xiaofei
    Liu, Haikun
    He, Bingsheng
    Yue, Jianhui
    [J]. ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2023, 20 (04)
  • [8] Convolutional Neural Network Pruning with Structural Redundancy Reduction
    Wang, Zi
    Li, Chengcheng
    Wang, Xiangyang
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 14908 - 14917
  • [9] Semantic redundancy-aware implicit neural compression for multidimensional biomedical image data
    Ma, Yifan
    Yi, Chengqiang
    Zhou, Yao
    Wang, Zhaofei
    Zhao, Yuxuan
    Zhu, Lanxin
    Wang, Jie
    Gao, Shimeng
    Liu, Jianchao
    Yuan, Xinyue
    Wang, Zhaoqiang
    Liu, Binbing
    Fei, Peng
    [J]. COMMUNICATIONS BIOLOGY, 2024, 7 (01)
  • [10] Hardware-Aware Evolutionary Explainable Filter Pruning for Convolutional Neural Networks
    Heidorn, Christian
    Sabih, Muhammad
    Meyerhöfer, Nicolai
    Schinabeck, Christian
    Teich, Jürgen
    Hannig, Frank
    [J]. International Journal of Parallel Programming, 2024, 52 : 40 - 58