Post-training discriminative pruning for RBMs

Cited: 3
Authors
Sanchez-Gutierrez, Maximo [1]
Albornoz, Enrique M. [2]
Rufiner, Hugo L. [2,3]
Goddard Close, John [1]
Affiliations
[1] Univ Autonoma Metropolitana, Dept Ingn Elect, Iztapalapa, Mexico
[2] UNL, CONICET, Inst Invest Senales Sistemas & Inteligencia Compu, FICH, Sinc I, Ciudad Univ, S3000, Paraje El Pozo, Santa Fe, Argentina
[3] UNER, Fac Ingn, Lab Cibernet, Oro Verde, Entre Rios, Argentina
Keywords
Restricted Boltzmann machines; Pruning; Discriminative information; Phoneme classification; Emotion classification; BOLTZMANN MACHINES; NEURAL-NETWORKS; DEEP; ALGORITHM; SPEECH;
DOI
10.1007/s00500-017-2784-3
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
One of the major challenges in the area of artificial neural networks is the identification of a suitable architecture for a specific problem. Choosing an unsuitable topology can exponentially increase the training cost and even hinder network convergence. On the other hand, recent research indicates that larger or deeper nets can map the problem features into a more appropriate space and thereby improve the classification process, leading to an apparent dichotomy. In this regard, it is interesting to ask whether independent measures, such as mutual information, could provide a clue to finding the most discriminative neurons in a network. In the present work, we explore this question in the context of Restricted Boltzmann machines by employing different measures to perform post-training pruning. The neurons that each measure identifies as most discriminative are combined, and a classifier is applied to the resulting network to assess its usefulness. We find that two measures in particular are good indicators of the most discriminative neurons, typically allowing more than 50% of the neurons to be pruned while maintaining an acceptable error rate. Furthermore, the results show that starting with a larger network architecture and then pruning it is more advantageous than starting with a smaller network. Finally, a quantitative index is introduced that provides information for choosing a suitable pruned network.
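
To illustrate the kind of procedure the abstract describes, the following is a minimal Python sketch of post-training pruning of an RBM's hidden layer, using mutual information between each hidden unit's activation and the class labels as the discriminative measure. All names, the keep_fraction parameter, and the use of scikit-learn's mutual_info_classif and LogisticRegression are illustrative assumptions, not the authors' implementation.

# Minimal sketch of post-training discriminative pruning for an RBM, assuming
# a trained weight matrix W of shape (n_visible, n_hidden), a hidden bias
# vector b_hidden, and labelled data X (n_samples, n_visible), y (n_samples,).
# The measure and the classifier are illustrative choices, not the paper's code.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

def hidden_activations(X, W, b_hidden):
    # Sigmoid activation probabilities of the RBM's hidden units.
    return 1.0 / (1.0 + np.exp(-(X @ W + b_hidden)))

def prune_by_mutual_info(X, y, W, b_hidden, keep_fraction=0.5):
    # Score each hidden unit by the mutual information between its
    # activation and the class label, then keep only the top fraction.
    H = hidden_activations(X, W, b_hidden)
    mi = mutual_info_classif(H, y)                  # one score per hidden unit
    n_keep = max(1, int(keep_fraction * W.shape[1]))
    keep = np.argsort(mi)[::-1][:n_keep]            # most discriminative first
    return W[:, keep], b_hidden[keep], keep

# Usage: prune the hidden layer, then refit a classifier on the pruned
# representation to check that the error rate remains acceptable.
# W_p, b_p, kept = prune_by_mutual_info(X_train, y_train, W, b_hidden)
# clf = LogisticRegression(max_iter=1000)
# clf.fit(hidden_activations(X_train, W_p, b_p), y_train)
# print(clf.score(hidden_activations(X_test, W_p, b_p), y_test))

Because the measure is computed after training, the RBM itself never needs to be retrained; only the downstream classifier is refit on the pruned representation.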
Pages: 767-781
Page count: 15
Related papers
50 records in total
  • [41] Pleasants, F.: Pretraining and post-training swimming endurance of smokers and nonsmokers. Research Quarterly, 1969, 40(4): 779-782
  • [42] Scott, W. S.; McGraw, L.; Sauer, T.; Belton, H.; Bittner, S.; Lynn, D.; Sharrer, J.; Speicher, T.: Enhancing learning transfer through post-training activities. Transfusion, 2011, 51: 270A-271A
  • [43] Sato, Ikuro; Yamada, Ryota; Tanaka, Masayuki; Inoue, Nakamasa; Kawakami, Rei: PoF: Post-training of feature extractor for improving generalization. International Conference on Machine Learning, Vol. 162, 2022: 19221-19230
  • [44] Diamantidis, Anastasios; Chatzoglou, Prodromos: Employee post-training behaviour and performance: evaluating the results of the training process. International Journal of Training and Development, 2014, 18(3): 149-170
  • [45] Gold, P. E.; Van Buskirk, R.; Haycock, J. W.: Effects of post-training epinephrine injections on retention of avoidance training in mice. Behavioral Biology, 1977, 20(2): 197-204
  • [46] Khayrov, E. M.; Malsagov, M. Yu.; Karandashev, I. M.: Post-training quantization of deep neural network weights. Advances in Neural Computation, Machine Learning, and Cognitive Research III, 2020, 856: 230-238
  • [47] Kirtas, M.; Passalis, N.; Oikonomou, A.; Mourgias-Alexandris, G.; Moralis-Pegios, M.; Pleros, N.; Tefas, A.: Normalized post-training quantization for photonic neural networks. 2022 IEEE Symposium Series on Computational Intelligence (SSCI), 2022: 657-663
  • [48] Lafreniere, Benjamin; Gutwin, Carl; Cockburn, Andy: Investigating the post-training persistence of expert interaction techniques. ACM Transactions on Computer-Human Interaction, 2017, 24(4)
  • [49] Zhang, Jinjie; Zhou, Yixuan; Saab, Rayan: Post-training quantization for neural networks with provable guarantees. SIAM Journal on Mathematics of Data Science, 2023, 5(2): 373-399
  • [50] Immink, Maarten A.: Post-training meditation promotes motor memory consolidation. Frontiers in Psychology, 2016, 7