Accretionary Learning With Deep Neural Networks With Applications

Citations: 0
Authors
Wei, Xinyu [1 ]
Juang, Biing-Hwang [1 ]
Wang, Ouya [2 ]
Zhou, Shenglong [3 ]
Li, Geoffrey Ye [2 ]
Affiliations
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
[3] Beijing Jiaotong Univ, Sch Math & Stat, Beijing, Peoples R China
Keywords
Artificial neural networks; Data models; Knowledge engineering; Task analysis; Training; Speech recognition; Learning systems; Deep learning; accretion learning; deep neural networks; pattern recognition; wireless communications; CLASSIFICATION;
DOI
10.1109/TCCN.2023.3342454
CLC classification number
TN [Electronic technology, communication technology]
Discipline classification code
0809
Abstract
One of the fundamental limitations of Deep Neural Networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities in an incremental or progressive manner. When data appear from object classes outside the learned set, a conventional DNN cannot recognize them because of the fixed formulation it assumes. A typical remedy is to redesign and retrain a new, most likely expanded, network for the enlarged set of object classes. This process is quite different from that of a human learner. In this paper, we propose a new learning method named Accretionary Learning (AL) to emulate human learning, in which the set of object classes to be recognized need not be fixed and can grow as the situation arises, without an entire redesign of the system. The proposed learning structure is modularized and can dynamically expand to learn and register new knowledge as the set of objects grows in size. AL does not forget previous knowledge when learning new data classes. We show that this structure and its learning methodology lead to a system that can grow to cope with increasing cognitive complexity while providing stable and superior overall performance.
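The core idea in the abstract, that recognition capacity grows by adding class-specific modules rather than redesigning and retraining the whole network, can be illustrated with a toy sketch. This is not the paper's actual AL architecture: the class name `AccretionaryClassifier` and the centroid-based per-class "modules" are hypothetical stand-ins chosen only to show how new classes can be registered while existing knowledge stays frozen.

```python
class AccretionaryClassifier:
    """Toy modular classifier: one frozen prototype module per class."""

    def __init__(self):
        self.modules = {}  # class label -> frozen prototype (centroid) vector

    def register_class(self, label, examples):
        """Learn a module for a new class from its examples.

        Existing modules are never touched, so previously acquired
        knowledge is not forgotten when a new class is accreted.
        """
        dim = len(examples[0])
        centroid = [sum(x[i] for x in examples) / len(examples)
                    for i in range(dim)]
        self.modules[label] = centroid  # accretion: grow, don't rewrite

    def predict(self, x):
        """Assign x to the class whose module matches it best
        (here: smallest Euclidean distance to the class prototype)."""
        def dist(c):
            return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
        return min(self.modules, key=lambda lbl: dist(self.modules[lbl]))
```

In this sketch, registering a third class later requires no re-learning of the first two modules, which mirrors the expandability and freedom from forgetting that the abstract claims for AL; a real DNN-based instantiation would replace the centroid modules with trained subnetworks.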
Pages: 660-673
Page count: 14