Universal structural patterns in sparse recurrent neural networks

Cited: 0
Authors
Zhang, Xin-Jie [1 ,2 ,3 ]
Moore, Jack Murdoch [1 ,2 ,3 ]
Yan, Gang [1 ,2 ,3 ,4 ]
Li, Xiang [3 ,5 ]
Affiliations
[1] Tongji Univ, MOE Key Lab Adv Microstruct Mat, Shanghai, Peoples R China
[2] Tongji Univ, Sch Phys Sci & Engn, Shanghai, Peoples R China
[3] Tongji Univ, MOE Frontiers Sci Ctr Intelligent Autonomous Syst, Natl Key Lab Autonomous Intelligent Unmanned Syst, Shanghai, Peoples R China
[4] Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Technol, Shanghai, Peoples R China
[5] Tongji Univ, Coll Elect & Informat Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
MOTIFS
DOI
10.1038/s42005-023-01364-0
Chinese Library Classification
O4 [Physics]
Discipline code
0702
Abstract
Sparse neural networks can achieve performance comparable to that of fully connected networks while requiring less energy and memory, showing great promise for deploying artificial intelligence on resource-limited devices. Although significant progress has been made in recent years in developing approaches to sparsify neural networks, artificial neural networks are notorious black boxes, and it remains an open question whether well-performing sparse networks share common structural features. Here, we analyze the evolution of recurrent neural networks (RNNs) trained with different sparsification strategies and on different tasks, and explore the topological regularities of the resulting sparse networks. We find that the optimized sparse topologies share a universal pattern of signed motifs, that RNNs evolve towards structurally balanced configurations during sparsification, and that structural balance can improve the performance of sparse RNNs across a variety of tasks. The same structural balance patterns also emerge in other state-of-the-art models, including neural ordinary differential equation networks and continuous-time RNNs. Taken together, our findings not only reveal universal structural features accompanying optimized network sparsification but also offer an avenue for optimal architecture search.

Deep neural networks have shown remarkable success across the physical sciences and engineering, so finding networks that work efficiently with fewer connections (weight parameters) without sacrificing performance is of great interest. In this work, the authors show that a large number of such efficient recurrent neural networks display characteristic connectivity patterns in their structure.
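To make the abstract's central notion concrete: in a signed network, a triangle motif is balanced when the product of its three edge signs is positive. The following minimal sketch (Python with NumPy; not the authors' code, and the network size, random weights, and 90% magnitude-pruning level are illustrative assumptions) prunes a dense recurrent weight matrix by weight magnitude, as a simple stand-in for the sparsification strategies discussed above, and then measures the fraction of balanced signed triangles in the resulting sparse topology.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100                                  # number of recurrent units (assumed)
    W = rng.normal(size=(N, N))              # dense recurrent weight matrix

    # Magnitude pruning: keep only the largest 10% of weights by absolute value.
    threshold = np.quantile(np.abs(W), 0.90)
    W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)

    # Signed adjacency: +1 excitatory, -1 inhibitory, 0 absent; drop self-loops.
    S = np.sign(W_sparse)
    np.fill_diagonal(S, 0)

    # Symmetrise to an undirected signed skeleton (reciprocal links of opposite
    # sign cancel to 0); this is a simplification for illustration only.
    A = np.sign(S + S.T)

    # A triangle (i, j, k) is balanced when the product of its edge signs is
    # positive; count balanced vs. unbalanced triangles.
    balanced = unbalanced = 0
    for i in range(N):
        for j in range(i + 1, N):
            if A[i, j] == 0:
                continue
            for k in range(j + 1, N):
                if A[i, k] != 0 and A[j, k] != 0:
                    if A[i, j] * A[i, k] * A[j, k] > 0:
                        balanced += 1
                    else:
                        unbalanced += 1

    total = balanced + unbalanced
    if total:
        print(f"balanced triangles: {balanced}/{total} ({balanced / total:.2%})")

For randomly signed weights like these, roughly half the triangles come out balanced; this gives the baseline against which the paper's finding, that trained sparsification drives RNNs towards structurally balanced configurations, can be read.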
Pages: 10
Related papers
50 records in total
  • [1] Universal structural patterns in sparse recurrent neural networks
    Zhang, Xin-Jie
    Moore, Jack Murdoch
    Yan, Gang
    Li, Xiang
    [J]. Communications Physics, 2023, 6
  • [2] Sparse Bayesian Recurrent Neural Networks
    Chatzis, Sotirios P.
    [J]. Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2015), Part II, 2015, 9285: 359-372
  • [3] Recurrent neural networks are universal approximators
    Schaefer, Anton Maximilian
    Zimmermann, Hans-Georg
    [J]. International Journal of Neural Systems, 2007, 17(4): 253-263
  • [4] Recurrent neural networks are universal approximators
    Schaefer, Anton Maximilian
    Zimmermann, Hans-Georg
    [J]. Artificial Neural Networks - ICANN 2006, Part 1, 2006, 4131: 632-640
  • [5] Structural Analysis of Sparse Neural Networks
    Stier, Julian
    Granitzer, Michael
    [J]. Knowledge-Based and Intelligent Information & Engineering Systems (KES 2019), 2019, 159: 107-116
  • [6] Characterizing Sparse Connectivity Patterns in Neural Networks
    Dey, Sourya
    Huang, Kuan-Wen
    Beerel, Peter A.
    Chugg, Keith M.
    [J]. 2018 Information Theory and Applications Workshop (ITA), 2018
  • [7] Learning Sparse Patterns in Deep Neural Networks
    Wen, Weijing
    Yang, Fan
    Su, Yangfeng
    Zhou, Dian
    Zeng, Xuan
    [J]. 2019 IEEE 13th International Conference on ASIC (ASICON), 2019
  • [8] Efficient and effective training of sparse recurrent neural networks
    Liu, Shiwei
    Ni'mah, Iftitahu
    Menkovski, Vlado
    Mocanu, Decebal Constantin
    Pechenizkiy, Mykola
    [J]. Neural Computing and Applications, 2021, 33(15): 9625-9636
  • [9] Recurrent neural networks for structural optimization
    Parvin, Azadeh
    Serpen, Gürsel
    [J]. Computer-Aided Civil and Infrastructure Engineering, 1999, 14(6): 445-451