Neural networks with optimized single-neuron adaptation uncover biologically plausible regularization

Cited: 0
Authors
Geadah, Victor [1 ,2 ,3 ]
Horoi, Stefan [2 ,3 ]
Kerg, Giancarlo [2 ,4 ]
Wolf, Guy [2 ,3 ,5 ]
Lajoie, Guillaume [2 ,3 ,5 ]
Affiliations
[1] Princeton Univ, Program Appl & Computat Math, Princeton, NJ 08544 USA
[2] Mila Quebec Artificial Intelligence Inst, Montreal, PQ, Canada
[3] Univ Montreal, Dept Math & Stat, Montreal, PQ, Canada
[4] Univ Montreal, Dept Informat & Rech Operationelle, Montreal, PQ, Canada
[5] Canada CIFAR Chair, Montreal, PQ, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DIVERSITY; CHAOS; EDGE;
DOI
10.1371/journal.pcbi.1012567
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010 ; 081704 ;
Abstract
Neurons in the brain have rich and adaptive input-output properties. Features such as heterogeneous f-I curves and spike frequency adaptation are known to place single neurons in optimal coding regimes when facing changing stimuli. Yet, it is still unclear how brain circuits exploit single-neuron flexibility, and how network-level requirements may have shaped such cellular function. To answer this question, a multi-scaled approach is needed where the computations of single neurons and neural circuits must be considered as a complete system. In this work, we use artificial neural networks to systematically investigate single-neuron input-output adaptive mechanisms, optimized in an end-to-end fashion. Throughout the optimization process, each neuron has the liberty to modify its nonlinear activation function parametrized to mimic f-I curves of biological neurons, either by learning an individual static function or via a learned and shared adaptation mechanism to modify activation functions in real-time during a task. We find that such adaptive networks show much-improved robustness to noise and changes in input statistics. Using tools from dynamical systems theory, we analyze the role of these emergent single-neuron properties and argue that neural diversity and adaptation play an active regularization role, enabling neural circuits to optimally propagate information across time. Finally, we outline similarities between these optimized solutions and known coding strategies found in biological neurons, such as gain scaling and fractional order differentiation/integration.
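The abstract describes each neuron learning its own parametrized nonlinear activation function, shaped like a biological f-I curve, with parameters optimized end-to-end. As a minimal illustrative sketch (not the paper's actual parametrization), one can give each neuron a sigmoid whose slope and threshold are free per-neuron parameters; the function name and parameters below are hypothetical:

```python
import numpy as np

def adaptive_activation(x, gain, shift):
    """Per-neuron saturating nonlinearity with learnable shape parameters.

    A simple stand-in for a parametrized f-I curve: neuron i applies a
    sigmoid whose slope (gain[i]) and threshold (shift[i]) are free
    parameters that could be optimized end-to-end alongside the weights.
    """
    return 1.0 / (1.0 + np.exp(-gain * (x - shift)))

# A toy layer of 4 neurons with heterogeneous activation parameters,
# mimicking the diversity of f-I curves across biological neurons.
gain = np.array([0.5, 1.0, 2.0, 4.0])    # per-neuron slope
shift = np.array([-1.0, 0.0, 0.5, 1.0])  # per-neuron threshold

rng = np.random.default_rng(0)
x = rng.normal(size=4)                   # pre-activations, one per neuron
y = adaptive_activation(x, gain, shift)  # heterogeneous responses in (0, 1)
```

In the "shared adaptation" variant the abstract mentions, `gain` and `shift` would instead be produced in real time by a learned mechanism conditioned on recent inputs, rather than held as static per-neuron constants.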
Pages: 23
Related papers
50 records total
  • [41] Biologically plausible single-layer networks for nonnegative independent component analysis
    Lipshutz, David
    Pehlevan, Cengiz
    Chklovskii, Dmitri B.
    BIOLOGICAL CYBERNETICS, 2022, 116 (5-6) : 557 - 568
  • [42] Soft high-density neural probes enable stable single-neuron recordings
    Le Floch, Paul
    Liu, Jia
    NATURE NANOTECHNOLOGY, 2024, 19 (03) : 277 - 278
  • [45] Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks
    Miconi, Thomas
    ELIFE, 2017, 6
  • [46] Towards More Biologically Plausible Error-Driven Learning for Artificial Neural Networks
    Malinovska, Kristina
    Malinovsky, Ludovit
    Farkas, Igor
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT III, 2018, 11141 : 228 - 231
  • [47] Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks
    Pan, Wenxuan
    Zhao, Feifei
    Zeng, Yi
    Han, Bing
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [48] Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources
    Bozkurt, Bariscan
    Pehlevan, Cengiz
    Erdogan, Alper T.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [50] Biologically plausible gated recurrent neural networks for working memory and learning-to-learn
    van den Berg, Alexandra R.
    Roelfsema, Pieter R.
    Bohte, Sander M.
    PLOS ONE, 2024, 19 (12)