Estimating orientation in natural scenes: A spiking neural network model of the insect central complex

Cited: 0
Authors
Stentiford, Rachael [1 ]
Knight, James C. [1 ]
Nowotny, Thomas [1 ]
Philippides, Andrew [1 ]
Graham, Paul [2 ]
Affiliations
[1] University of Sussex, Department of Informatics, Brighton, England
[2] University of Sussex, School of Life Sciences, Brighton, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
DROSOPHILA CENTRAL COMPLEX; PATH-INTEGRATION; DIRECTION; DYNAMICS; VISION; REPRESENTATION; BRAIN;
DOI
10.1371/journal.pcbi.1011913
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
The central complex of insects contains cells, organised as a ring attractor, that encode head direction. The 'bump' of activity in the ring can be updated by idiothetic cues and external sensory information. Plasticity at the synapses between these cells and the ring neurons, which are responsible for bringing sensory information into the central complex, has been proposed to form a mapping between visual cues and the heading estimate, allowing the current heading to be tracked more accurately than if only idiothetic information were used. In Drosophila, ring neurons have well-characterised non-linear receptive fields. In this work we produce synthetic versions of these visual receptive fields using a combination of excitatory inputs and mutual inhibition between ring neurons. We use these receptive fields to bring visual information into a spiking neural network model of the insect central complex based on the recently published Drosophila connectome. Previous modelling work has focused on how this circuit functions as a ring attractor using the same type of simple visual cues commonly used experimentally. While we initially test the model on these simple stimuli, we then go on to apply the model to complex natural scenes containing multiple conflicting cues. We show that the simple visual filtering provided by the ring neurons is sufficient to form a mapping between heading and visual features and to maintain the heading estimate in the absence of angular velocity input. The network successfully tracks heading even when presented with videos of natural scenes containing conflicting information from environmental changes and translation of the camera.
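The ring-attractor mechanism described above can be illustrated with a minimal rate-based sketch. This is not the paper's spiking connectome model; it is a simplified illustration with hypothetical parameters (`N`, `J0`, `J1`, the cosine connectivity, and the clipped-linear activation are all assumptions) showing how local excitation plus broad inhibition sustains a 'bump' of heading activity after the sensory cue is removed.

```python
import numpy as np

N = 64  # heading cells arranged around the ring
angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Cosine connectivity profile: J0 < 0 gives broad inhibition,
# J1 > 0 gives excitation between cells with similar preferred headings.
J0, J1 = -1.0, 6.0
W = (J0 + J1 * np.cos(angles[:, None] - angles[None, :])) / N

def step(r, external_input, dt=0.2):
    """One Euler step of dr/dt = -r + f(W r + input), with f clipping to [0, 1]."""
    drive = W @ r + external_input
    return r + dt * (-r + np.clip(drive, 0.0, 1.0))

r = np.zeros(N)
cue = np.clip(np.cos(angles - np.pi / 2), 0.0, None)  # visual cue at 90 degrees

for _ in range(300):   # bump forms while the cue is present
    r = step(r, cue)
for _ in range(300):   # cue removed: recurrence alone maintains the estimate
    r = step(r, 0.0)

# Population-vector decode of the stored heading estimate
decoded = np.arctan2(np.sum(r * np.sin(angles)), np.sum(r * np.cos(angles)))
```

With these parameters the bump persists after the cue is withdrawn and `decoded` stays near π/2, the analogue of the network maintaining its heading estimate without angular velocity input.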
Pages: 22