Hybrid pre training algorithm of Deep Neural Networks

Cited by: 1
Authors
Drokin, I. S. [1 ]
Affiliation
[1] St Petersburg State Univ, Fac Appl Math & Control Proc, St Petersburg 199034, Russia
Keywords
DOI
10.1051/itmconf/20160602007
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
This paper proposes a hybrid algorithm for pre-training deep networks that uses both labeled and unlabeled data. The algorithm combines and extends the ideas of self-taught learning and neural-network pre-training on the one hand, and of supervised learning and transfer learning on the other, aiming to integrate the advantages of each approach. The article gives several examples of applying the algorithm and compares it with the classical approach to pre-training neural networks; these examples demonstrate the effectiveness of the proposed algorithm.
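The abstract outlines a two-stage scheme: unsupervised pre-training on unlabeled data (in the spirit of self-taught learning) followed by supervised training that transfers the pre-trained representation. The minimal Python/PyTorch sketch below illustrates that generic two-stage idea only; the architecture, objectives, hyperparameters, and function names are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch of a two-stage hybrid pre-training scheme:
# stage 1 learns features from unlabeled data with a reconstruction
# objective; stage 2 transfers the encoder to a supervised task.
# Shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                        nn.Linear(256, 784))

def pretrain_unsupervised(unlabeled_loader, epochs=5):
    # Stage 1: autoencoder-style pre-training on unlabeled batches x.
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_loader:
            opt.zero_grad()
            mse(decoder(encoder(x)), x).backward()
            opt.step()

def finetune_supervised(labeled_loader, n_classes=10, epochs=5):
    # Stage 2: attach a classifier head and train on labeled pairs (x, y),
    # reusing (transferring) the encoder weights learned in stage 1.
    head = nn.Linear(64, n_classes)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_loader:
            opt.zero_grad()
            ce(head(encoder(x)), y).backward()
            opt.step()
    return head

In this sketch the smaller fine-tuning learning rate is a common transfer-learning choice to avoid destroying the pre-trained features; the paper's actual hybrid algorithm may differ in both stages.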
Pages: 4
Related Papers (50 in total)
  • [31] Roles of pre-training in deep neural networks from information theoretical perspective
    Furusho, Yasutaka
    Kubo, Takatomi
    Ikeda, Kazushi
    NEUROCOMPUTING, 2017, 248: 76-79
  • [32] Multi-Task Pre-Training of Deep Neural Networks for Digital Pathology
    Mormont, Romain
    Geurts, Pierre
    Maree, Raphael
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2021, 25 (02): 412-421
  • [33] A consensus-based decentralized training algorithm for deep neural networks with communication compression
    Liu, Bo
    Ding, Zhengtao
    NEUROCOMPUTING, 2021, 440: 287-296
  • [34] A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
    Liu, Mingrui
    Zhuang, Zhenxun
    Lei, Yunwen
    Liao, Chunyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [35] Deep Neural Network Training with iPSO Algorithm
    Kosten, Mehmet Muzaffer
    Barut, Murat
    Acir, Nurettin
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018
  • [36] A hybrid algorithm for artificial neural network training
    Yaghini, Masoud
    Khoshraftar, Mohammad M.
    Fallahi, Mehdi
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2013, 26 (01): 293-301
  • [37] Is normalization indispensable for training deep neural networks?
    Shao, Jie
    Hu, Kai
    Wang, Changhu
    Xue, Xiangyang
    Raj, Bhiksha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [38] On Calibration of Mixup Training for Deep Neural Networks
    Maronas, Juan
    Ramos, Daniel
    Paredes, Roberto
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2020, 2021, 12644: 67-76
  • [39] Exploiting Invariance in Training Deep Neural Networks
    Ye, Chengxi
    Zhou, Xiong
    McKinney, Tristan
    Liu, Yanfeng
    Zhou, Qinggang
    Zhdanov, Fedor
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 8849-8856
  • [40] Exploring strategies for training deep neural networks
    Larochelle, Hugo
    Bengio, Yoshua
    Louradour, Jérôme
    Lamblin, Pascal
    JOURNAL OF MACHINE LEARNING RESEARCH, 2009, 10: 1-40