Post-training discriminative pruning for RBMs

Cited by: 3
Authors
Sanchez-Gutierrez, Maximo [1 ]
Albornoz, Enrique M. [2 ]
Rufiner, Hugo L. [2 ,3 ]
Goddard Close, John [1 ]
Affiliations
[1] Univ Autonoma Metropolitana, Dept Ingn Elect, Iztapalapa, Mexico
[2] UNL, CONICET, Inst Invest Senales Sistemas & Inteligencia Compu, FICH, Sinc I, Ciudad Univ, S3000, Paraje El Pozo, Santa Fe, Argentina
[3] UNER, Fac Ingn, Lab Cibernet, Oro Verde, Entre Rios, Argentina
Keywords
Restricted Boltzmann machines; Pruning; Discriminative information; Phoneme classification; Emotion classification; BOLTZMANN MACHINES; NEURAL-NETWORKS; DEEP; ALGORITHM; SPEECH
DOI
10.1007/s00500-017-2784-3
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
One of the major challenges in the area of artificial neural networks is the identification of a suitable architecture for a specific problem. Choosing an unsuitable topology can increase the training cost exponentially and even prevent the network from converging. On the other hand, recent research indicates that larger or deeper nets can map the problem features into a more appropriate space and thereby improve the classification process, leading to an apparent dichotomy. In this regard, it is interesting to ask whether independent measures, such as mutual information, could provide a clue to finding the most discriminative neurons in a network. In the present work, we explore this question in the context of Restricted Boltzmann machines by employing different measures to perform post-training pruning. The neurons that each measure identifies as the most discriminative are combined, and a classifier is applied to the resulting network to assess its usefulness. We find that two measures in particular are good indicators of the most discriminative neurons, typically allowing more than 50% of the neurons to be pruned while maintaining an acceptable error rate. Furthermore, the results show that starting with a larger network architecture and then pruning is more advantageous than starting with a smaller network. Finally, a quantitative index is introduced to help choose a suitable pruned network.
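The abstract does not give the exact procedure, but the core idea of ranking a trained RBM's hidden units by their mutual information with the class labels and keeping only the top fraction can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `keep_ratio` parameter, and the weight/bias layout are hypothetical, and scikit-learn's `mutual_info_classif` stands in for whichever MI estimator the paper actually employs.

```python
# Hypothetical sketch of mutual-information-based post-training pruning
# of RBM hidden units; names and interfaces are illustrative only.
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def hidden_activations(X, W, c):
    """Mean-field hidden-unit probabilities sigmoid(X W + c) of a trained RBM.

    X: (n_samples, n_visible) data, W: (n_visible, n_hidden) weights,
    c: (n_hidden,) hidden biases.
    """
    return 1.0 / (1.0 + np.exp(-(X @ W + c)))


def prune_by_mutual_info(W, c, X, y, keep_ratio=0.5):
    """Rank hidden units by the mutual information between their activations
    and the labels y, and keep only the top keep_ratio fraction."""
    H = hidden_activations(X, W, c)        # (n_samples, n_hidden)
    mi = mutual_info_classif(H, y)         # one MI score per hidden unit
    n_keep = max(1, int(keep_ratio * W.shape[1]))
    keep = np.argsort(mi)[-n_keep:]        # indices of most informative units
    return W[:, keep], c[keep], keep
```

A classifier trained on the retained activations `H[:, keep]` would then indicate whether a pruning level like the roughly 50% reduction reported above still yields an acceptable error rate.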
Pages: 767-781
Number of pages: 15