Neural networks with optimized single-neuron adaptation uncover biologically plausible regularization

Times Cited: 0
Authors
Geadah, Victor [1 ,2 ,3 ]
Horoi, Stefan [2 ,3 ]
Kerg, Giancarlo [2 ,4 ]
Wolf, Guy [2 ,3 ,5 ]
Lajoie, Guillaume [2 ,3 ,5 ]
Affiliations
[1] Princeton Univ, Program Appl & Computat Math, Princeton, NJ 08544 USA
[2] Mila Quebec Artificial Intelligence Inst, Montreal, PQ, Canada
[3] Univ Montreal, Dept Math & Stat, Montreal, PQ, Canada
[4] Univ Montreal, Dept Informat & Rech Operationelle, Montreal, PQ, Canada
[5] Canada CIFAR Chair, Montreal, PQ, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DIVERSITY; CHAOS; EDGE;
DOI
10.1371/journal.pcbi.1012567
CLC classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
Neurons in the brain have rich and adaptive input-output properties. Features such as heterogeneous f-I curves and spike frequency adaptation are known to place single neurons in optimal coding regimes when facing changing stimuli. Yet, it is still unclear how brain circuits exploit single-neuron flexibility, and how network-level requirements may have shaped such cellular function. To answer this question, a multi-scaled approach is needed where the computations of single neurons and neural circuits must be considered as a complete system. In this work, we use artificial neural networks to systematically investigate single-neuron input-output adaptive mechanisms, optimized in an end-to-end fashion. Throughout the optimization process, each neuron has the liberty to modify its nonlinear activation function parametrized to mimic f-I curves of biological neurons, either by learning an individual static function or via a learned and shared adaptation mechanism to modify activation functions in real-time during a task. We find that such adaptive networks show much-improved robustness to noise and changes in input statistics. Using tools from dynamical systems theory, we analyze the role of these emergent single-neuron properties and argue that neural diversity and adaptation play an active regularization role, enabling neural circuits to optimally propagate information across time. Finally, we outline similarities between these optimized solutions and known coding strategies found in biological neurons, such as gain scaling and fractional order differentiation/integration.
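For readers who want a concrete picture of the mechanism the abstract describes, the sketch below shows one way a per-neuron, learnable activation function can sit inside a recurrent cell and be optimized end-to-end together with the weights. This is a minimal illustration, not the authors' implementation: the softplus/sigmoid "gain and saturation" parametrization and all names (AdaptiveActivation, AdaptiveRNNCell, n, s) are assumptions chosen to mimic heterogeneous f-I curves. The abstract's second variant, a shared mechanism that adapts the activation parameters in real time during a task, would replace the static per-neuron parameters with the output of a small learned controller and is omitted here.

# Minimal PyTorch sketch (illustrative, not the authors' code): an RNN cell
# whose per-neuron activation has learnable shape parameters, so each unit
# can develop its own f-I-like nonlinearity during end-to-end training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveActivation(nn.Module):
    """Per-neuron activation with learnable gain `n` and saturation `s` (assumed form)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.n = nn.Parameter(torch.ones(hidden_size))   # gain, one value per neuron
        self.s = nn.Parameter(torch.zeros(hidden_size))  # saturation, one value per neuron

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = F.softplus(self.n) + 1e-3              # keep the gain strictly positive
        s = torch.sigmoid(self.s)                  # squash saturation into (0, 1)
        unsaturated = F.softplus(n * x) / n        # smooth, ReLU-like branch
        saturated = torch.sigmoid(n * x)           # saturating, sigmoid-like branch
        return (1.0 - s) * unsaturated + s * saturated

class AdaptiveRNNCell(nn.Module):
    """Vanilla RNN cell whose nonlinearity is the per-neuron adaptive activation."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.inp = nn.Linear(input_size, hidden_size)
        self.rec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.phi = AdaptiveActivation(hidden_size)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.phi(self.inp(x) + self.rec(h))

# Usage: unroll over a toy sequence; gradients flow into the activation
# parameters as well as the weights, so the nonlinearities are co-optimized.
cell = AdaptiveRNNCell(input_size=10, hidden_size=32)
x = torch.randn(8, 50, 10)   # (batch, time, features)
h = torch.zeros(8, 32)
for t in range(x.shape[1]):
    h = cell(x[:, t], h)
print(h.shape)               # torch.Size([8, 32])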
Pages: 23
Related articles (50 in total)
  • [1] Single-neuron mechanisms of neural adaptation in the human temporal lobe
    Reber, Thomas P.
    Mackay, Sina
    Bausch, Marcel
    Kehl, Marcel S.
    Borger, Valeri
    Surges, Rainer
    Mormann, Florian
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [2] Temporal sequence learning via adaptation in biologically plausible spiking neural networks
    Renato Duarte
    Peggy Seriès
    Abigail Morrison
    BMC Neuroscience, 15 (Suppl 1)
  • [3] Towards biologically plausible learning in neural networks
    Fernandez, Jesus Garcia
    Hortal, Enrique
    Mehrkanoon, Siamak
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [4] Are Rule-based Neural Networks Biologically Plausible?
    De Callatay, A.
    Connection Science, 8 (01)
  • [5] A review of learning in biologically plausible spiking neural networks
    Taherkhani, Aboozar
    Belatreche, Ammar
    Li, Yuhua
    Cosma, Georgina
    Maguire, Liam P.
    McGinnity, T. M.
    NEURAL NETWORKS, 2020, 122 : 253 - 272
  • [6] A more biologically plausible learning rule for neural networks
    Mazzoni, P.
    Andersen, R. A.
    Jordan, M. I.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 1991, 88 (10) : 4433 - 4437
  • [7] Biologically Plausible Sequence Learning with Spiking Neural Networks
    Liu, Zuozhu
    Chotibut, Thiparat
    Hillar, Christopher
    Lin, Shaowei
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 1316 - 1323
  • [8] Biologically plausible learning in neural networks with modulatory feedback
    Grant, W. Shane
    Tanner, James
    Itti, Laurent
    NEURAL NETWORKS, 2017, 88 : 32 - 48
  • [9] Predicting Single-Neuron Activity in Locally Connected Networks
    Azhar, Feraz
    Anderson, William S.
    NEURAL COMPUTATION, 2012, 24 (10) : 2655 - 2677