Direct Error Driven Learning for Deep Neural Networks with Applications to Bigdata

Cited by: 5
Authors
Krishnan, R. [1 ]
Jagannathan, S. [1 ]
Samaranayake, V. A. [2 ]
Institutions
[1] Missouri Univ Sci & Technol, Dept Elect & Comp Engn, Rolla, MO 65409 USA
[2] Missouri Univ Sci & Technol, Dept Math & Stat, Rolla, MO 65409 USA
Keywords
generalization error; vanishing gradients; big data; heterogeneity; noise
DOI
10.1016/j.procs.2018.10.508
Chinese Library Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
In this paper, the generalization error of classification under traditional learning regimes is shown to increase in the presence of big data challenges such as noise and heterogeneity. To reduce this error while mitigating vanishing gradients, a deep neural network (NN)-based framework with a direct error-driven learning scheme is proposed. To reduce the impact of heterogeneity, an overall cost comprising the learning error and an approximate generalization error is defined, and two NNs are used to estimate these costs. To mitigate vanishing gradients, a direct error-driven learning regime is proposed in which the error is used directly for learning. The proposed approach is demonstrated to improve accuracy by 7% over traditional learning regimes, and it mitigated the vanishing gradient problem while improving generalization by 6%. (C) 2018 The Authors. Published by Elsevier Ltd.
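The abstract's key idea is to drive each layer's update directly with the output error rather than with gradients backpropagated through the full layer chain. The paper's exact update rule is not reproduced in this record, so the sketch below is only a hedged illustration of that general idea on a toy network: the output error is projected to the hidden layer through a fixed random matrix (in the spirit of direct feedback alignment), bypassing the gradient path that can vanish. All names (`B1`, the toy regression task, learning rate) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: x -> tanh hidden -> linear output.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output
# Fixed random matrix carrying the output error straight to the hidden
# layer, bypassing the W2-transpose gradient path (hypothetical stand-in
# for the paper's direct error-driven signal).
B1 = rng.normal(scale=0.5, size=(n_hid, n_out))

def forward(x):
    h = np.tanh(W1 @ x)
    return h, W2 @ h

# Toy regression data with a linear target map.
X = rng.normal(size=(64, n_in))
T = X @ rng.normal(size=(n_in, n_out))

def mse():
    return float(np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(X, T)]))

loss_before = mse()
lr = 0.05
for _ in range(200):
    for x, t in zip(X, T):
        h, y = forward(x)
        e = y - t                          # output error, used directly
        W2 -= lr * np.outer(e, h)          # ordinary delta rule at the top
        delta1 = (B1 @ e) * (1.0 - h**2)   # error driven straight to layer 1
        W1 -= lr * np.outer(delta1, x)

loss_after = mse()
print(f"MSE before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Because the hidden-layer update never multiplies through `W2.T` and the layer-wise derivative chain, the training signal cannot vanish with depth in the way a backpropagated gradient can, which is the motivation the abstract gives for direct error-driven learning.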
Pages: 89 - 95 (7 pages)
Related papers (50 in total; items [41] - [50] shown)
  • [41] Bidirectional Joint Representation Learning with Symmetrical Deep Neural Networks for Multimodal and Crossmodal Applications
    Vukotic, Vedran
    Raymond, Christian
    Gravier, Guillaume
    ICMR'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2016, : 343 - 346
  • [42] A Theoretical Framework for End-to-End Learning of Deep Neural Networks With Applications to Robotics
    Li, Sitan
    Nguyen, Huu-Thiet
    Cheah, Chien Chern
    IEEE ACCESS, 2023, 11 : 21992 - 22006
  • [43] Adaptive Knowledge Driven Regularization for Deep Neural Networks
    Luo, Zhaojing
    Cai, Shaofeng
    Cui, Can
    Ooi, Beng Chin
    Yang, Yang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8810 - 8818
  • [44] An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks
    Li, Qianxiao
    Hao, Shuji
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [45] Data driven articulatory synthesis with deep neural networks
    Aryal, Sandesh
    Gutierrez-Osuna, Ricardo
    COMPUTER SPEECH AND LANGUAGE, 2016, 36 : 260 - 273
  • [46] MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
    Lu Jiang
    Zhou, Zhengyuan
    Leung, Thomas
    Li, Li-Jia
    Li Fei-Fei
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [47] Inspecting the behaviour of Deep Learning Neural Networks
    Duer, Alexander
    Filzmoser, Peter
    Rauber, Andreas
    ERCIM NEWS, 2019, (116): : 18 - 19
  • [48] Piecewise linear neural networks and deep learning
    Qinghua Tao
    Li Li
    Xiaolin Huang
    Xiangming Xi
    Shuning Wang
    Johan A. K. Suykens
    Nature Reviews Methods Primers, 2
  • [49] Learning deep neural networks for node classification
    Li, Bentian
    Pi, Dechang
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 137 : 324 - 334
  • [50] Abstraction Hierarchy in Deep Learning Neural Networks
    Ilin, Roman
    Watson, Thomas
    Kozma, Robert
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 768 - 774