Feature flow regularization: Improving structured sparsity in deep neural networks

Cited by: 6
Authors
Wu, Yue [1 ]
Lan, Yuan [1 ]
Zhang, Luchan [2 ]
Xiang, Yang [1 ,3 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Math, Kowloon, Clear Water Bay, Hong Kong, Peoples R China
[2] Shenzhen Univ, Coll Math & Stat, Shenzhen 518060, Peoples R China
[3] HKUST Shenzhen Hong Kong Collaborat Innovat Res In, Algorithms Machine Learning & Autonomous Driving R, Shenzhen, Peoples R China
Keywords
Deep neural networks; Structured pruning; Image classification; Regularization;
DOI
10.1016/j.neunet.2023.02.013
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pruning is a model compression method that removes redundant parameters and accelerates the inference speed of deep neural networks (DNNs) while maintaining accuracy. Most available pruning methods impose conditions directly on parameters or features. In this paper, we propose a simple and effective regularization strategy that improves structured sparsity and structured pruning in DNNs from a new perspective: the evolution of features. In particular, we consider the trajectories connecting the features of adjacent hidden layers, which we call the feature flow. We propose feature flow regularization (FFR) to penalize the length and the total absolute curvature of these trajectories, which implicitly increases the structured sparsity of the parameters. The principle behind FFR is that short and straight trajectories lead to an efficient network that avoids redundant parameters. Experiments on the CIFAR-10 and ImageNet datasets show that FFR improves structured sparsity and achieves pruning results comparable to, or even better than, those of state-of-the-art methods. (c) 2023 Elsevier Ltd. All rights reserved.
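The abstract describes penalizing the length and the total absolute curvature of the trajectories traced by features across adjacent hidden layers. The exact formulation is given in the paper itself; the following is only an illustrative sketch, assuming same-shape features at every layer and using discrete first and second differences along the layer axis as stand-ins for trajectory length and curvature (the function name and weighting parameters `lam_len` and `lam_curv` are hypothetical):

```python
import numpy as np

def feature_flow_penalty(features, lam_len=1.0, lam_curv=1.0):
    """Illustrative FFR-style penalty on a trajectory of same-shape
    hidden-layer features h_0, ..., h_L.

    length:    sum over l of ||h_{l+1} - h_l||        (trajectory length)
    curvature: sum over l of ||h_{l+1} - 2 h_l + h_{l-1}||
               (discrete second difference, a proxy for total curvature)
    """
    H = np.stack(features)             # shape (L+1, ...) stacked trajectory
    first = np.diff(H, n=1, axis=0)    # successive segments between layers
    second = np.diff(H, n=2, axis=0)   # change of direction between segments
    length = sum(np.linalg.norm(d) for d in first)
    curvature = sum(np.linalg.norm(d) for d in second)
    return lam_len * length + lam_curv * curvature
```

Under this sketch, features lying on a short, straight path incur a small penalty, while a zig-zagging trajectory is penalized through the curvature term, matching the stated principle that short and straight feature flows avoid redundancy.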
Pages: 598 - 613
Page count: 16