Accretionary Learning With Deep Neural Networks With Applications

Cited: 0
Authors
Wei, Xinyu [1 ]
Juang, Biing-Hwang [1 ]
Wang, Ouya [2 ]
Zhou, Shenglong [3 ]
Li, Geoffrey Ye [2 ]
Affiliations
[1] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[2] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2BX, England
[3] Beijing Jiaotong Univ, Sch Math & Stat, Beijing, Peoples R China
Keywords
Artificial neural networks; Data models; Knowledge engineering; Task analysis; Training; Speech recognition; Learning systems; Deep learning; accretion learning; deep neural networks; pattern recognition; wireless communications; classification
DOI
10.1109/TCCN.2023.3342454
CLC number: TN [Electronic technology, communication technology]
Discipline code: 0809
Abstract
One of the fundamental limitations of Deep Neural Networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities in an incremental or progressive manner. When data from object classes outside the learned set appear, a conventional DNN cannot recognize them because of the fixed formulation it assumes. A typical solution is to redesign and retrain a new, usually larger, network for the expanded set of object classes. This process is quite different from that of a human learner. In this paper, we propose a new learning method named Accretionary Learning (AL) to emulate human learning, in which the set of object classes to be recognized need not be fixed and can grow as the situation arises without requiring an entire redesign of the system. The proposed learning structure is modularized and can dynamically expand to learn and register new knowledge as the set of object classes grows. AL does not forget previous knowledge when learning new data classes. We show that the structure and its learning methodology lead to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
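To make the accretion idea in the abstract concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a class-incremental classifier: a frozen shared feature extractor plus one small scoring module per registered class, where a new class is learned by appending and training only a new module. The AccretionaryClassifier name and its structure are illustrative assumptions, not the authors' published AL architecture.

```python
# Minimal, hypothetical sketch of class-incremental ("accretionary") expansion.
# NOT the authors' exact AL algorithm; it only illustrates the general idea of
# growing the recognized class set without retraining or forgetting earlier classes.
import torch
import torch.nn as nn


class AccretionaryClassifier(nn.Module):
    """Frozen shared backbone + one scoring head per registered class."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone              # shared feature extractor, frozen below
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()          # one small module per learned class
        for p in self.backbone.parameters():
            p.requires_grad_(False)           # previously acquired knowledge stays untouched

    def add_class(self) -> nn.Module:
        """Register a module for a newly encountered class; existing heads are unchanged."""
        head = nn.Sequential(nn.Linear(self.feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
        self.heads.append(head)
        return head                           # caller trains only this head on the new class data

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # One score per registered class; argmax over dim=1 gives the predicted class.
        return torch.cat([head(feats) for head in self.heads], dim=1)


if __name__ == "__main__":
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
    model = AccretionaryClassifier(backbone, feat_dim=64)
    model.add_class()                         # first class
    model.add_class()                         # a later class, added without redesigning the network
    scores = model(torch.randn(4, 1, 28, 28))
    print(scores.shape)                       # torch.Size([4, 2])
```

In this toy setup, recognizing an additional class only requires appending and training a new head, which mirrors the "grow without forgetting" behavior the abstract describes; the paper's actual method may differ substantially.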
Pages: 660-673
Number of pages: 14
Related Papers (showing 41-50 of 50)
  • [41] Wen, Wei; Wu, Chunpeng; Wang, Yandan; Chen, Yiran; Li, Hai. Learning Structured Sparsity in Deep Neural Networks. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.
  • [42] Atamanczuk, Bruna; Karadas, Kurt Arve Skipenes; Agrawal, Bikash; Chakravorty, Antorweep. Evolving Deep Neural Networks for Continuous Learning. Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications (FAIEMA 2023), 2024: 3-16.
  • [43] Georgevici, Adrian Iustin; Terblanche, Marius. Neural networks and deep learning: a brief introduction. Intensive Care Medicine, 2019, 45(5): 712-714.
  • [44] McClure, Patrick; Kriegeskorte, Nikolaus. Representational Distance Learning for Deep Neural Networks. Frontiers in Computational Neuroscience, 2016, 10.
  • [45] Chen, Chun-Teh; Gu, Grace X. Learning hidden elasticity with deep neural networks. Proceedings of the National Academy of Sciences of the United States of America, 2021, 118(31).
  • [46] Ma, Yongjie; Xie, Yirong. Evolutionary neural networks for deep learning: a review. International Journal of Machine Learning and Cybernetics, 2022, 13(10): 3001-3018.
  • [47] Serghiou, Stylianos; Rough, Kathryn. Deep Learning for Epidemiologists: An Introduction to Neural Networks. American Journal of Epidemiology, 2023, 192(11): 1904-1916.
  • [48] Wen, Weijing; Yang, Fan; Su, Yangfeng; Zhou, Dian; Zeng, Xuan. Learning Sparse Patterns in Deep Neural Networks. 2019 IEEE 13th International Conference on ASIC (ASICON), 2019.
  • [49] Jahromi, Saeed S.; Orus, Roman. Variational tensor neural networks for deep learning. Scientific Reports, 2024, 14(1).
  • [50] Tao, Qinghua; Li, Li; Huang, Xiaolin; Xi, Xiangming; Wang, Shuning; Suykens, Johan A. K. Piecewise linear neural networks and deep learning. Nature Reviews Methods Primers, 2022, 2(1).