Universal structural patterns in sparse recurrent neural networks

Citations: 0
Authors
Zhang, Xin-Jie [1 ,2 ,3 ]
Moore, Jack Murdoch [1 ,2 ,3 ]
Yan, Gang [1 ,2 ,3 ,4 ]
Li, Xiang [3 ,5 ]
Affiliations
[1] Tongji Univ, MOE Key Lab Adv Microstruct Mat, Shanghai, Peoples R China
[2] Tongji Univ, Sch Phys Sci & Engn, Shanghai, Peoples R China
[3] Tongji Univ, MOE Frontiers Sci Ctr Intelligent Autonomous Syst, Natl Key Lab Autonomous Intelligent Unmanned Syst, Shanghai, Peoples R China
[4] Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techno, Shanghai, Peoples R China
[5] Tongji Univ, Coll Elect & Informat Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
MOTIFS
DOI
10.1038/s42005-023-01364-0
CLC number
O4 [Physics]
Discipline code
0702
Abstract
Sparse neural networks can achieve performance comparable to that of fully connected networks while requiring less energy and memory, showing great promise for deploying artificial intelligence on resource-limited devices. Although significant progress has been made in recent years on approaches to sparsifying neural networks, artificial neural networks are notorious black boxes, and it remains an open question whether well-performing sparse networks share common structural features. Here, we analyze the evolution of recurrent neural networks (RNNs) trained with different sparsification strategies and on different tasks, and explore the topological regularities of the resulting sparse networks. We find that the optimized sparse topologies share a universal pattern of signed motifs, that RNNs evolve towards structurally balanced configurations during sparsification, and that structural balance can improve the performance of sparse RNNs across a variety of tasks. The same structural-balance patterns also emerge in other state-of-the-art models, including neural ordinary differential equation networks and continuous-time RNNs. Taken together, our findings not only reveal universal structural features accompanying optimized network sparsification but also offer an avenue for optimal architecture search.

Deep neural networks have shown remarkable success across the physical sciences and engineering, so finding networks that work efficiently with fewer connections (weight parameters) without sacrificing performance is of great interest. In this work, the authors show that a large number of such efficient recurrent neural networks display particular connectivity patterns in their structure.
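The structural balance the abstract refers to comes from signed-network theory: a triangle of connections is balanced when the product of its edge signs is positive (e.g. "the enemy of my enemy is my friend"). The minimal sketch below (not the authors' code; the function name and the symmetrization convention are illustrative assumptions) measures the fraction of balanced triangles in a signed, sparsified recurrent weight matrix:

```python
# Illustrative sketch: structural balance of a signed weight matrix.
# A triangle (i, j, k) is "balanced" when the product of its three
# edge signs is positive.
import numpy as np

def balance_ratio(W: np.ndarray) -> float:
    """Fraction of signed triangles in W whose sign product is positive.

    W is treated as an undirected signed graph: an edge (i, j) exists
    when W[i, j] + W[j, i] is nonzero, with the sign of that sum.
    """
    S = np.sign(W + W.T)      # symmetrized sign matrix in {-1, 0, +1}
    np.fill_diagonal(S, 0)    # ignore self-loops
    n = S.shape[0]
    balanced = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            if S[i, j] == 0:
                continue
            for k in range(j + 1, n):
                if S[i, k] != 0 and S[j, k] != 0:
                    total += 1
                    if S[i, j] * S[i, k] * S[j, k] > 0:
                        balanced += 1
    return balanced / total if total else 0.0

# A fully balanced toy network: neuron 2 antagonizes the mutually
# excitatory pair (0, 1), giving one (+, -, -) triangle.
W = np.array([[ 0.,  1., -1.],
              [ 1.,  0., -1.],
              [-1., -1.,  0.]])
print(balance_ratio(W))  # 1.0
```

Tracking this ratio over the course of pruning is one simple way to observe the drift towards balanced configurations that the paper reports; the O(n^3) triangle loop is fine for small recurrent layers but would need a sparse-adjacency formulation for large networks.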
Pages: 10