Deep Neural Network Pruning Using Persistent Homology

Cited by: 4
Authors
Watanabe, Satoru [1 ]
Yamana, Hayato [1 ]
Affiliations
[1] Waseda Univ, Grad Sch Fundamental Sci & Engn, Shinjuku Ku, Tokyo, Japan
Keywords
deep neural network; network pruning; persistent homology; topological data analysis;
DOI
10.1109/AIKE48582.2020.00030
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have improved the performance of artificial intelligence systems in various fields, including image analysis, speech recognition, and text classification. However, their enormous computational cost prevents DNNs from running on small computers such as edge sensors and handheld devices. Network pruning (NP), which removes parameters from trained DNNs, is one of the prominent methods for reducing the resource consumption of DNNs. In this paper, we propose a novel NP method based on persistent homology (PH), hereafter referred to as PHPM. PH reveals the inner representation of knowledge in DNNs, and PHPM exploits this information to improve pruning efficiency. To prevent deterioration of accuracy, PHPM prunes DNNs in ascending order of the magnitudes of the combinational effects among neurons, which are calculated using one-dimensional PH. We compared PHPM with the global magnitude pruning method (GMP), one of the common baselines for evaluating pruning methods. Evaluation results show that DNNs pruned by PHPM achieve higher classification accuracy than those pruned by GMP.
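To make the comparison in the abstract concrete, the following is a minimal Python sketch. The global_magnitude_prune function implements the GMP baseline in its standard form (zeroing the globally smallest-magnitude weights). The h1_intervals function is only a hypothetical illustration of extracting one-dimensional persistence intervals from a weight-magnitude filtration with the gudhi library; it is not the paper's PHPM construction, whose combinational-effect scoring is defined in the paper itself. The function names, the use of gudhi, and the filtration choice (1 - normalized |w|, so that stronger connections appear earlier) are this sketch's assumptions.

```python
import numpy as np
import gudhi  # assumed dependency: pip install gudhi


def global_magnitude_prune(weights, sparsity):
    """GMP baseline: zero the fraction `sparsity` of weights with the
    smallest absolute values, ranked globally over the whole tensor."""
    flat = np.abs(weights).ravel()
    k = min(int(sparsity * flat.size), flat.size - 1)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)


def h1_intervals(weights):
    """Hypothetical PH step (not the paper's exact construction):
    neurons of two adjacent layers are vertices, each weight is an edge,
    and stronger connections enter the filtration earlier via the value
    1 - |w| / max|w|. Returns the one-dimensional persistence intervals
    of the resulting clique complex."""
    n_in, n_out = weights.shape
    filt = 1.0 - np.abs(weights) / (np.abs(weights).max() + 1e-12)
    st = gudhi.SimplexTree()
    for v in range(n_in + n_out):       # vertices present from the start
        st.insert([v], filtration=0.0)
    for i in range(n_in):
        for j in range(n_out):          # one edge per weight
            st.insert([i, n_in + j], filtration=float(filt[i, j]))
    st.expansion(2)                     # flag complex up to triangles
    # persistence_dim_max=True so H1 is computed even when the complex
    # stays one-dimensional (a bipartite graph gains no triangles).
    st.compute_persistence(persistence_dim_max=True)
    return st.persistence_intervals_in_dimension(1)


# Toy usage: prune 50% of a random 8x6 layer and inspect its H1 intervals.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 6))
print("nonzeros after GMP:", np.count_nonzero(global_magnitude_prune(w, 0.5)))
print("H1 intervals:", h1_intervals(w))
```

In this sketch, cycles formed by strong connections are born early in the filtration; a PH-guided criterion in the spirit of the abstract would keep the weights that participate in large combinational structures and prune the rest, whereas GMP ranks each weight in isolation.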
Pages: 153-156
Number of pages: 4
Related Papers
50 records in total
  • [1] Topological measurement of deep neural networks using persistent homology
    Watanabe, Satoru
    Yamana, Hayato
    [J]. ANNALS OF MATHEMATICS AND ARTIFICIAL INTELLIGENCE, 2022, 90(01): 75-92
  • [2] DeepMark: Embedding Watermarks into Deep Neural Network Using Pruning
    Xie, Chenqi
    Yi, Ping
    Zhang, Baowen
    Zou, Futai
    [J]. 2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021: 169-175
  • [3] Pruning by explaining: A novel criterion for deep neural network pruning
    Yeom, Seul-Ki
    Seegerer, Philipp
    Lapuschkin, Sebastian
    Binder, Alexander
    Wiedemann, Simon
    Mueller, Klaus-Robert
    Samek, Wojciech
    [J]. PATTERN RECOGNITION, 2021, 115
  • [4] Pruning the deep neural network by similar function
    Liu, Hanqing
    Xin, Bo
    Mu, Senlin
    Zhu, Zhangqing
    [J]. 2018 INTERNATIONAL SYMPOSIUM ON POWER ELECTRONICS AND CONTROL ENGINEERING (ISPECE 2018), 2019, 1187
  • [5] Automated Pruning for Deep Neural Network Compression
    Manessi, Franco
    Rozza, Alessandro
    Bianco, Simone
    Napoletano, Paolo
    Schettini, Raimondo
    [J]. 2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018: 657-664
  • [6] Overview of Deep Convolutional Neural Network Pruning
    Li, Guang
    Liu, Fang
    Xia, Yuping
    [J]. 2020 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING AND ARTIFICIAL INTELLIGENCE, 2020, 11584
  • [7] An FPGA Realization of a Deep Convolutional Neural Network Using a Threshold Neuron Pruning
    Fujii, Tomoya
    Sato, Simpei
    Nakahara, Hiroki
    Motomura, Masato
    [J]. APPLIED RECONFIGURABLE COMPUTING, 2017, 10216: 268-280
  • [8] A Discriminant Information Approach to Deep Neural Network Pruning
    Hou, Zejiang
    Kung, Sun-Yuan
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 9553-9560
  • [9] Pruning and quantization for deep neural network acceleration: A survey
    Liang, Tailin
    Glossner, John
    Wang, Lei
    Shi, Shaobo
    Zhang, Xiaotong
    [J]. NEUROCOMPUTING, 2021, 461: 370-403