Feature flow regularization: Improving structured sparsity in deep neural networks

Cited: 6
Authors
Wu, Yue [1 ]
Lan, Yuan [1 ]
Zhang, Luchan [2 ]
Xiang, Yang [1 ,3 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Math, Kowloon, Clear Water Bay, Hong Kong, Peoples R China
[2] Shenzhen Univ, Coll Math & Stat, Shenzhen 518060, Peoples R China
[3] HKUST Shenzhen Hong Kong Collaborat Innovat Res In, Algorithms Machine Learning & Autonomous Driving R, Shenzhen, Peoples R China
Keywords
Deep neural networks; Structured pruning; Image classification; Regularization;
DOI: 10.1016/j.neunet.2023.02.013
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Pruning is a model compression method that removes redundant parameters and accelerates the inference speed of deep neural networks (DNNs) while maintaining accuracy. Most available pruning methods impose various conditions on parameters or features directly. In this paper, we propose a simple and effective regularization strategy to improve structured sparsity and structured pruning in DNNs from a new perspective: the evolution of features. In particular, we consider the trajectories connecting features of adjacent hidden layers, namely the feature flow. We propose feature flow regularization (FFR) to penalize the length and the total absolute curvature of these trajectories, which implicitly increases the structured sparsity of the parameters. The principle behind FFR is that short and straight trajectories lead to an efficient network that avoids redundant parameters. Experiments on the CIFAR-10 and ImageNet datasets show that FFR improves structured sparsity and achieves pruning results comparable to, or even better than, those of state-of-the-art methods. (c) 2023 Elsevier Ltd. All rights reserved.
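The abstract's penalty on the length and total absolute curvature of the feature trajectory can be sketched in discrete form. The sketch below is illustrative only: the function name `ffr_penalty`, the weights `lam_len` and `lam_curv`, and the choice of second differences as the discrete curvature are assumptions, not the paper's actual implementation.

```python
import math


def ffr_penalty(features, lam_len=1e-4, lam_curv=1e-4):
    """Hypothetical discrete feature-flow penalty.

    `features` is the trajectory [f_0, f_1, ..., f_L] of feature vectors
    from adjacent hidden layers. The penalty sums the segment lengths
    (trajectory length) and the norms of the second differences
    (a discrete stand-in for total absolute curvature).
    """
    def diff(a, b):
        return [x - y for x, y in zip(a, b)]

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    # First differences: segments f_{l+1} - f_l between adjacent layers.
    segments = [diff(features[l + 1], features[l])
                for l in range(len(features) - 1)]
    length = sum(norm(s) for s in segments)

    # Second differences: how sharply the trajectory turns at each layer.
    curvature = sum(norm(diff(segments[l + 1], segments[l]))
                    for l in range(len(segments) - 1))

    return lam_len * length + lam_curv * curvature
```

In training, such a scalar would be added to the task loss, so that gradient descent implicitly shortens and straightens the feature trajectories; per the abstract, this is what discourages redundant parameters and improves structured sparsity.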
Pages: 598-613
Page count: 16
Related Papers
50 in total
  • [1] Structured Pruning for Deep Convolutional Neural Networks via Adaptive Sparsity Regularization
    Shao, Tuanjie
    Shin, Dongkun
    2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC 2022), 2022: 982-987
  • [2] Deep Neural Networks Regularization Using a Combination of Sparsity Inducing Feature Selection Methods
    Farokhmanesh, Fatemeh
    Sadeghi, Mohammad Taghi
    Neural Processing Letters, 2021, 53(1): 701-720
  • [3] Learning Structured Sparsity in Deep Neural Networks
    Wen, Wei
    Wu, Chunpeng
    Wang, Yandan
    Chen, Yiran
    Li, Hai
    Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016, 29
  • [4] Improving Deep Neural Network Sparsity through Decorrelation Regularization
    Zhu, Xiaotian
    Zhou, Wengang
    Li, Houqiang
    Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018: 3264-3270
  • [5] Deep neural networks regularization for structured output prediction
    Belharbi, Soufiane
    Herault, Romain
    Chatelain, Clement
    Adam, Sebastien
    Neurocomputing, 2018, 281: 169-177
  • [6] SeReNe: Sensitivity-Based Regularization of Neurons for Structured Sparsity in Neural Networks
    Tartaglione, Enzo
    Bragagnolo, Andrea
    Odierna, Francesco
    Fiandrotti, Attilio
    Grangetto, Marco
    IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7237-7250
  • [7] Variance-Guided Structured Sparsity in Deep Neural Networks
    Pandit, M. K.
    Banday, M.
    IEEE Transactions on Artificial Intelligence, 2023, 4(6): 1714-1723
  • [8] Structured Sparsity of Convolutional Neural Networks via Nonconvex Sparse Group Regularization
    Bui, Kevin
    Park, Fredrick
    Zhang, Shuai
    Qi, Yingyong
    Xin, Jack
    Frontiers in Applied Mathematics and Statistics, 2021, 6
  • [9] Deep Neural Networks Pruning via the Structured Perspective Regularization
    Cacciola, Matteo
    Frangioni, Antonio
    Li, Xinlin
    Lodi, Andrea
    SIAM Journal on Mathematics of Data Science, 2023, 5(4): 1051-1077