Adaptive Channel Pruning for Trainability Protection

Cited by: 0
Authors
Liu, Jiaxin [1 ,2 ]
Zhang, Dazong [4 ]
Liu, Wei [1 ,2 ]
Li, Yongming [3 ]
Hu, Jun [2 ]
Cheng, Shuai [2 ]
Yang, Wenxing [1 ,2 ]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110167, Liaoning, Peoples R China
[2] Neusoft Reach Automot Technol Co, Shenyang 110179, Liaoning, Peoples R China
[3] Liaoning Univ Technol, Coll Sci, Liaoning 121001, Peoples R China
[4] BYD Auto Ind Co Ltd, Shenzhen 518118, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural networks; Trainability preservation; Model compression; Pruning;
DOI
10.1007/978-981-99-8549-4_12
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Pruning is a widely used method for compressing neural networks: it reduces their computational requirements by removing unimportant connections. However, many existing methods prune pre-trained models with the same pruning rate for every layer, neglecting the protection of model trainability and harming accuracy. Moreover, the number of redundant parameters per layer varies in complex models, so the pruning rate should be adjusted according to the model structure and training data. To overcome these issues, we propose a trainability-preserving adaptive channel pruning method that prunes during training. Our approach uses a similarity calculation module based on model weights to eliminate unnecessary channels while protecting model trainability and correcting the output feature maps. An adaptive sparsity control module assigns a pruning rate to each layer according to a preset overall target and assists network training. We conducted experiments on the CIFAR-10 and ImageNet classification datasets using networks of various structures, and our technique outperformed the comparison methods at different pruning rates. We also confirmed its effectiveness on the VOC and COCO object detection datasets.
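The abstract describes two components: a weight-similarity module that identifies redundant channels, and a sparsity control module that distributes a global pruning target across layers. The sketch below is a minimal, hypothetical PyTorch illustration of the first idea only; the function name, the cosine-similarity redundancy score, and the fixed keep ratio are assumptions made for illustration and are not the authors' actual modules.

# Hypothetical sketch of weight-similarity channel selection (illustrative, not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def select_channels(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    # Flatten each output filter and normalize it, so the dot product between
    # two filters equals their cosine similarity.
    w = F.normalize(conv.weight.detach().flatten(1), dim=1)
    sim = w @ w.t()
    sim.fill_diagonal_(0.0)
    # Treat a filter as redundant if it is very close to some other filter.
    redundancy = sim.max(dim=1).values
    n_keep = max(1, int(round(keep_ratio * w.size(0))))
    keep_idx = torch.topk(redundancy, n_keep, largest=False).indices
    mask = torch.zeros(w.size(0), dtype=torch.bool)
    mask[keep_idx] = True
    return mask  # True = keep this output channel

# Usage: soft-prune half of the channels of a toy layer by masking its output.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
mask = select_channels(conv, keep_ratio=0.5)
x = torch.randn(1, 3, 32, 32)
y = conv(x) * mask.view(1, -1, 1, 1).float()
print(f"kept {int(mask.sum())} of {mask.numel()} channels")

In the paper's setting, the per-layer keep ratio would be produced by the adaptive sparsity control module rather than fixed by hand as above.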
Pages: 137-148
Number of pages: 12
相关论文
共 50 条
  • [41] Channel dependent tree pruning for the sphere decoder
    Artés, H
    2004 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS, 2004, : 538 - 538
  • [42] Backward Coverability with Pruning for Lossy Channel Systems
    Geffroy, Thomas
    Leroux, Jerome
    Sutre, Gregoire
    SPIN'17: PROCEEDINGS OF THE 24TH ACM SIGSOFT INTERNATIONAL SPIN SYMPOSIUM ON MODEL CHECKING OF SOFTWARE, 2017, : 132 - 141
  • [43] Mining frequent closed patterns by adaptive pruning
    Liu, Jun-Qiang
    Sun, Xiao-Ying
    Zhuang, Yue-Ting
    Pan, Yun-He
    Ruan Jian Xue Bao/Journal of Software, 2004, 15 (01): : 94 - 102
  • [44] Network Pruning Using Adaptive Exemplar Filters
    Lin, Mingbao
    Ji, Rongrong
    Li, Shaojie
    Wang, Yan
    Wu, Yongjian
    Huang, Feiyue
    Ye, Qixiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (12) : 7357 - 7366
  • [45] Adaptive Data Pruning for Support Vector Machines
    Fujiwara, Yasuhiro
    Arai, Junya
    Kanai, Sekitoshi
    Ida, Yasutoshi
    Ueda, Naonori
    2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 683 - 692
  • [46] Adaptive Network Pruning for Wireless Federated Learning
    Liu, Shengli
    Yu, Guanding
    Yin, Rui
    Yuan, Jiantao
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (07) : 1572 - 1576
  • [47] Adaptive Federated Pruning in Hierarchical Wireless Networks
    Liu, Xiaonan
    Wang, Shiqiang
    Deng, Yansha
    Nallanathan, Arumugam
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (06) : 5985 - 5999
  • [48] An adaptive joint optimization framework for pruning and quantization
    Li, Xiaohai
    Yang, Xiaodong
    Zhang, Yingwei
    Yang, Jianrong
    Chen, Yiqiang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (11) : 5199 - 5215
  • [49] Adaptive Filter Pruning via Sensitivity Feedback
    Zhang, Yuyao
    Freris, Nikolaos M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 10996 - 11008
  • [50] Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models
    Li, Jianwei
    Lei, Qi
    Cheng, Wei
    Xu, Dongkuan
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 1229 - 1247