Variational Probability Flow for Biologically Plausible Training of Deep Neural Networks

Cited by: 0
Authors
Liu, Zuozhu [1 ]
Quek, Tony Q. S. [1 ]
Lin, Shaowei [1 ]
Affiliations
[1] Singapore Univ Technol & Design, 8 Somapah Rd, Singapore 487372, Singapore
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The quest for biologically plausible deep learning is driven not just by the desire to explain experimentally observed properties of biological neural networks, but also by the hope of discovering more efficient methods for training artificial networks. In this paper, we propose a new algorithm named Variational Probability Flow (VPF), an extension of minimum probability flow for training binary Deep Boltzmann Machines (DBMs). We show that weight updates in VPF are local, depending only on the states and firing rates of the adjacent neurons. Unlike contrastive divergence, there is no need for Gibbs confabulations; and unlike backpropagation, alternating feedforward and feedback phases are not required. Moreover, the learning algorithm is effective for training DBMs with intra-layer connections between the hidden nodes. Experiments with MNIST and Fashion MNIST demonstrate that VPF learns reasonable features quickly, reconstructs corrupted images more accurately, and generates samples with a high estimated log-likelihood. Lastly, we note that, interestingly, if an asymmetric version of VPF exists, its weight updates would directly explain experimental results on Spike-Timing-Dependent Plasticity (STDP).
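The abstract's key claim is that VPF weight updates are local, depending only on the states and firing rates of adjacent neurons. The sketch below is a minimal, illustrative interpretation of what such locality means in code for a symmetric binary network; the function name local_update, the "rates minus states" flow term, and the learning rate lr are assumptions made for illustration, not the update rule derived in the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_update(W, b, states, lr=0.01):
    # Hypothetical local update for a binary network (illustrative only).
    # states : binary neuron states (0/1), shape (n,)
    # W      : symmetric weights with zero diagonal, shape (n, n)
    # b      : biases, shape (n,)
    # "Firing rate" of each neuron given its neighbours' states
    # (the conditional probability of being on in a binary model).
    rates = sigmoid(W @ states + b)
    # Per-neuron discrepancy between rate and state (assumed flow-like term).
    pre = rates - states
    # Locality: dW[i, j] uses only states[i], rates[i], states[j], rates[j].
    dW = lr * (np.outer(pre, states) + np.outer(states, pre)) / 2.0
    np.fill_diagonal(dW, 0.0)  # no self-connections
    db = lr * pre
    return W - dW, b - db

# Example usage on random binary states:
rng = np.random.default_rng(0)
n = 16
W, b = np.zeros((n, n)), np.zeros(n)
x = (rng.random(n) < 0.5).astype(float)
W, b = local_update(W, b, x)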
Pages: 3698-3705
Page count: 8
Related papers
50 in total
  • [1] Backpropagation with biologically plausible spatiotemporal adjustment for training deep spiking neural networks
    Shen, Guobin
    Zhao, Dongcheng
    Zeng, Yi
    PATTERNS, 2022, 3 (06):
  • [2] An evolutionary strategy for supervised training of biologically plausible neural networks
    Belatreche, A
    Maguire, LP
    McGinnity, M
    Wu, QX
    PROCEEDINGS OF THE 7TH JOINT CONFERENCE ON INFORMATION SCIENCES, 2003, : 1524 - 1527
  • [3] Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks
    Tang, Mufeng
    Yang, Yibo
    Amit, Yali
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 16
  • [4] Towards biologically plausible learning in neural networks
    Fernandez, Jesus Garcia
    Hortal, Enrique
    Mehrkanoon, Siamak
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [5] Are Rule-based Neural Networks Biologically Plausible?
    De Callatay, A.
    CONNECTION SCIENCE, 8 (01):
  • [6] A review of learning in biologically plausible spiking neural networks
    Taherkhani, Aboozar
    Belatreche, Ammar
    Li, Yuhua
    Cosma, Georgina
    Maguire, Liam P.
    McGinnity, T. M.
    NEURAL NETWORKS, 2020, 122 : 253 - 272
  • [7] A MORE BIOLOGICALLY PLAUSIBLE LEARNING RULE FOR NEURAL NETWORKS
    MAZZONI, P
    ANDERSEN, RA
    JORDAN, MI
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 1991, 88 (10) : 4433 - 4437
  • [8] Biologically Plausible Sequence Learning with Spiking Neural Networks
    Liu, Zuozhu
    Chotibut, Thiparat
    Hillar, Christopher
    Lin, Shaowei
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 1316 - 1323
  • [9] Biologically plausible learning in neural networks with modulatory feedback
    Grant, W. Shane
    Tanner, James
    Itti, Laurent
    NEURAL NETWORKS, 2017, 88 : 32 - 48
  • [10] Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies
    Soo, Wayne W. M.
    Goudar, Vishwa
    Wang, Xiao-Jing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,