Potential Layer-Wise Supervised Learning for Training Multi-Layered Neural Networks

Cited by: 0
Authors
Kamimura, Ryotaro [1 ,2 ]
Affiliations
[1] Tokai Univ, IT Educ Ctr, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 2591292, Japan
[2] Tokai Univ, Grad Sch Sci & Technol, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 2591292, Japan
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The present paper shows that greedy layer-wise supervised learning, when combined with potential learning, becomes effective enough to improve both generalization and interpretability. Unsupervised pre-training suffers from vanishing information, just as simple multi-layered network training does: as layers become higher, valuable information shrinks, and from an information-theoretic point of view information passing through many layers naturally tends to diminish considerably. To counter this, we use layer-wise supervised training to prevent information from diminishing. Supervised learning has been considered unsuitable for pre-training multi-layered neural networks; however, we found that the new potential learning can effectively extract valuable information through supervised pre-training. With the important components extracted by potential learning, supervised pre-training becomes effective for training multi-layered neural networks. We applied the method to two data sets, an artificial data set and the banknote data set. In both cases, potential learning proved effective in increasing generalization performance. In addition, the results suggest that the final representations produced by the method can be clearly interpreted.
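To make the layer-wise scheme concrete, the following is a minimal sketch of greedy layer-wise supervised pre-training, the general procedure the abstract builds on: each hidden layer is trained against the class labels through a temporary output head, frozen as input to the next layer, and the stacked network is then fine-tuned end to end. The paper's potential learning step (selecting important components by their potentiality) is not reproduced here; the layer sizes, the toy two-class data standing in for the artificial and banknote data sets, and all hyper-parameters are illustrative assumptions.

import torch
import torch.nn as nn

def pretrain_layer(layer, head, x, y, epochs=50, lr=1e-2):
    # Train one hidden layer together with a temporary classification head on the labels.
    opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(torch.tanh(layer(x))), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.tanh(layer(x))    # fixed representation fed to the next layer

torch.manual_seed(0)
x = torch.randn(200, 4)                # toy inputs (4 features, as in the banknote data)
y = torch.randint(0, 2, (200,))        # toy binary labels

sizes = [4, 8, 6, 4]                   # input width plus three hidden layers (assumed)
layers, h = [], x
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    layer = nn.Linear(d_in, d_out)
    head = nn.Linear(d_out, 2)         # temporary supervised head, discarded afterwards
    h = pretrain_layer(layer, head, h, y)
    layers.append(layer)

# Stack the pre-trained layers, add the final output layer, and fine-tune end to end.
model = nn.Sequential(*sum([[l, nn.Tanh()] for l in layers], []), nn.Linear(sizes[-1], 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()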
Pages: 2568-2575
Page count: 8
Related Papers
50 records in total
  • [1] Selective Information Control and Layer-Wise Partial Collective Compression for Multi-Layered Neural Networks
    Kamimura, Ryotaro
    INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, ISDA 2021, 2022, 418 : 121 - 131
  • [2] Push-pull separability objective for supervised layer-wise training of neural networks
    Szymanski, Lech
    McCane, Brendan
    2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [3] Layer-Wise Compressive Training for Convolutional Neural Networks
    Grimaldi, Matteo
    Tenace, Valerio
    Calimera, Andrea
    FUTURE INTERNET, 2019, 11 (01)
  • [4] Supervised Semi-Autoencoder Learning for Multi-Layered Neural Networks
    Kamimura, Ryotaro
    Takeuchi, Haruhiko
    2017 JOINT 17TH WORLD CONGRESS OF INTERNATIONAL FUZZY SYSTEMS ASSOCIATION AND 9TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (IFSA-SCIS), 2017,
  • [5] Layer-Wise Training to Create Efficient Convolutional Neural Networks
    Zeng, Linghua
    Tian, Xinmei
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 631 - 641
  • [6] Collaborative Layer-Wise Discriminative Learning in Deep Neural Networks
    Jin, Xiaojie
    Chen, Yunpeng
    Dong, Jian
    Feng, Jiashi
    Yan, Shuicheng
    COMPUTER VISION - ECCV 2016, PT VII, 2016, 9911 : 733 - 749
  • [7] Supervised Greedy Layer-Wise Training for Deep Convolutional Networks with Small Datasets
    Rueda-Plata, Diego
    Ramos-Pollan, Raul
    Gonzalez, Fabio A.
    COMPUTATIONAL COLLECTIVE INTELLIGENCE (ICCCI 2015), PT I, 2015, 9329 : 275 - 284
  • [8] SPSA for Layer-Wise Training of Deep Networks
    Wulff, Benjamin
    Schuecker, Jannis
    Bauckhage, Christian
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT III, 2018, 11141 : 564 - 573
  • [9] A Layer-Wise Theoretical Framework for Deep Learning of Convolutional Neural Networks
    Huu-Thiet Nguyen
    Li, Sitan
    Cheah, Chien Chern
    IEEE ACCESS, 2022, 10 : 14270 - 14287
  • [10] Voice Conversion Using Deep Neural Networks With Layer-Wise Generative Training
    Chen, Ling-Hui
    Ling, Zhen-Hua
    Liu, Li-Juan
    Dai, Li-Rong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2014, 22 (12) : 1859 - 1872