Sparse connectivity enables efficient information processing in cortex-like artificial neural networks

Times Cited: 0
Authors
Fruengel, Rieke [1 ,2 ]
Oberlaender, Marcel [1 ,3 ]
Affiliations
[1] Max Planck Inst Neurobiol Behav Caesar, Silico Brain Sci Grp, Bonn, Germany
[2] Int Max Planck Res Sch IMPRS Brain & Behav, Bonn, Germany
[3] Vrije Univ Amsterdam, Ctr Neurogenom & Cognit Res, Dept Integrat Neurophysiol, Amsterdam, Netherlands
Funding
European Research Council;
Keywords
connectivity; structure-function; cortex; artificial neural networks; recurrent; sparse; INHIBITION;
DOI
10.3389/fncir.2025.1528309
Chinese Library Classification
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Neurons in cortical networks are very sparsely connected; even neurons whose axons and dendrites overlap are highly unlikely to form a synaptic connection. What is the relevance of such sparse connectivity for a network's function? Surprisingly, it has been shown that sparse connectivity impairs information processing in artificial neural networks (ANNs). Does this imply that sparse connectivity also impairs information processing in biological neural networks? Although ANNs were originally inspired by the brain, conventional ANNs differ substantially from cortical networks in their structural network architecture. To disentangle the relevance of these structural properties for information processing in networks, we systematically constructed ANNs constrained by interpretable features of cortical networks. We find that in large and recurrently connected networks, as are found in the cortex, sparse connectivity facilitates time- and data-efficient information processing. We explore the origins of these surprising findings and show that conventional dense ANNs distribute information across only a very small fraction of nodes, whereas sparse ANNs distribute information across more nodes. We show that sparsity is most critical in networks with fixed excitatory and inhibitory nodes, mirroring neuronal cell types in cortex. This constraint causes a large learning delay in densely connected networks, which is eliminated by sparse connectivity. Taken together, our findings show that sparse connectivity enables efficient information processing given key constraints from cortical networks, setting the stage for further investigation into higher-order features of cortical connectivity.
Pages: 11
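
To make the network constraints described in the abstract concrete, the following is a minimal sketch (not the authors' code) of how a sparse, recurrent weight matrix with fixed excitatory and inhibitory nodes might be constructed. The function name, network size, connection probability, and the 80/20 excitatory/inhibitory split are illustrative assumptions, not values taken from the paper.

import numpy as np

def make_sparse_ei_recurrent_weights(n_nodes=1000, connection_prob=0.05,
                                     excitatory_fraction=0.8, seed=0):
    """Build a recurrent weight matrix with sparse connectivity and fixed
    excitatory/inhibitory node identities. weights[i, j] is the connection
    from presynaptic node i to postsynaptic node j."""
    rng = np.random.default_rng(seed)

    # Sparse connectivity: each directed pair is connected with low probability.
    mask = rng.random((n_nodes, n_nodes)) < connection_prob
    np.fill_diagonal(mask, False)  # no self-connections

    # Random positive weight magnitudes on the existing connections only.
    weights = np.abs(rng.normal(0.0, 1.0, size=(n_nodes, n_nodes))) * mask

    # Fixed cell types: each node is either excitatory (all outgoing weights >= 0)
    # or inhibitory (all outgoing weights <= 0), analogous to cortical cell types.
    n_excitatory = int(excitatory_fraction * n_nodes)
    sign = np.ones(n_nodes)
    sign[n_excitatory:] = -1.0
    weights = weights * sign[:, np.newaxis]  # sign applies to each node's row (its outputs)

    return weights, sign

W, cell_type_sign = make_sparse_ei_recurrent_weights()
print(f"connection density: {np.count_nonzero(W) / W.size:.3f}")

In the sparse regime (connection_prob well below 1), most entries of W are zero; setting connection_prob to 1.0 recovers the dense, fully connected case of conventional ANNs, which is the comparison the study draws.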