SA-FlowNet: Event-based self-attention optical flow estimation with spiking-analogue neural networks

Cited by: 3
Authors
Yang, Fan [1 ]
Su, Li [1 ,3 ]
Zhao, Jinxiu [1 ]
Chen, Xuena [1 ]
Wang, Xiangyu [1 ]
Jiang, Na [1 ]
Hu, Quan [2 ]
Affiliations
[1] Capital Normal Univ, Informat Engn Coll, Beijing, Peoples R China
[2] Beijing Inst Technol, Sch Aerosp Engn, Beijing, Peoples R China
[3] Capital Normal Univ, Informat Engn Coll, Beijing 100048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
computer vision; feature extraction; motion estimation; optical tracking; INTELLIGENCE;
DOI
10.1049/cvi2.12206
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Inspired by biological vision mechanisms, event-based cameras capture continuous object motion by detecting brightness changes independently and asynchronously at each pixel, overcoming the limitations of traditional frame-based cameras. Complementarily, spiking neural networks (SNNs) compute asynchronously and exploit the inherent sparseness of spatio-temporal events. Event-based pixel-wise optical flow estimation recovers the positions and correspondences of objects across adjacent frames; however, because event-camera outputs are sparse and uneven, dense scene information is difficult to generate, and the local receptive fields of conventional neural networks lead to poor tracking of moving objects. To address these issues, an improved event-based self-attention optical flow estimation network (SA-FlowNet) is proposed, which independently employs criss-cross and temporal self-attention mechanisms to directly capture long-range dependencies and efficiently extract temporal and spatial features from event streams. In the former mechanism, a cross-domain attention scheme that dynamically fuses temporal-spatial features is introduced. The proposed network adopts a spiking-analogue neural network architecture trained with an end-to-end learning method and gains significant computational energy benefits, especially for SNNs. State-of-the-art error rates for optical flow prediction on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset are demonstrated in comparison with current SNN-based approaches.
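The abstract only sketches the high-level design, so the following is a minimal PyTorch sketch of a generic criss-cross self-attention block of the kind the paper says it uses to capture long-range spatial dependencies. The module name, channel-reduction factor, and learnable residual weighting are illustrative assumptions drawn from common criss-cross attention implementations, not from SA-FlowNet itself, and the temporal-attention and spiking components of the network are not modelled here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrissCrossAttention(nn.Module):
    """Criss-cross self-attention over a 2-D feature map.

    Each position attends only to positions in its own row and column,
    capturing long-range dependencies at far lower cost than full
    spatial self-attention (O(H + W) instead of O(H * W) keys per query).
    Hyperparameters here are illustrative, not taken from the paper.
    """

    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Row energies: each pixel against all pixels in its row -> (b, h, w, w)
        energy_row = torch.matmul(q.permute(0, 2, 3, 1), k.permute(0, 2, 1, 3))
        # Column energies: each pixel against all pixels in its column -> (b, w, h, h)
        energy_col = torch.matmul(q.permute(0, 3, 2, 1), k.permute(0, 3, 1, 2))

        # Joint softmax over the criss-cross neighbourhood (row + column).
        # The pixel itself is counted twice here; reference implementations
        # mask one copy out, which is omitted for brevity.
        energy = torch.cat([energy_row, energy_col.permute(0, 2, 1, 3)], dim=-1)
        attn = F.softmax(energy, dim=-1)                 # (b, h, w, w + h)
        attn_row, attn_col = attn.split([w, h], dim=-1)

        # Aggregate values along the row and the column, then fuse.
        out_row = torch.matmul(attn_row, v.permute(0, 2, 3, 1))      # (b, h, w, c)
        out_col = torch.matmul(attn_col.permute(0, 2, 1, 3),         # (b, w, h, h)
                               v.permute(0, 3, 2, 1))                # -> (b, w, h, c)
        out = out_row + out_col.permute(0, 2, 1, 3)                  # (b, h, w, c)

        return self.gamma * out.permute(0, 3, 1, 2) + x              # residual connection


if __name__ == "__main__":
    # Toy event feature map: batch 2, 64 channels, 32 x 32 spatial grid.
    feats = torch.randn(2, 64, 32, 32)
    print(CrissCrossAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

In a pipeline of the kind the abstract describes, such a block would operate on feature maps derived from an accumulated event representation, alongside separate temporal-attention and spiking pathways that are not shown in this sketch.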
Pages: 925-935
Page count: 11
Related papers
50 in total
  • [1] Optical flow estimation from event-based cameras and spiking neural networks
    Cuadrado, Javier
    Rancon, Ulysse
    Cottereau, Benoit R.
    Barranco, Francisco
    Masquelier, Timothee
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [2] Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks
    Hagenaars, Jesse J.
    Paredes-Valles, Federico
    de Croon, Guido C. H. E.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
    Zhu, Alex Zihao
    Yuan, Liangzhe
    Chaney, Kenneth
    Daniilidis, Kostas
ROBOTICS: SCIENCE AND SYSTEMS XIV, 2018
  • [4] Adaptive-SpikeNet: Event-based Optical Flow Estimation using Spiking Neural Networks with Learnable Neuronal Dynamics
    Kosta, Adarsh Kumar
    Roy, Kaushik
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 6021 - 6027
  • [5] EAGAN: Event-based attention generative adversarial networks for optical flow and depth estimation
    Lin, Xiuhong
    Yang, Chenhui
    Bian, Xuesheng
    Liu, Weiquan
    Wang, Cheng
    IET COMPUTER VISION, 2022, 16 (07) : 581 - 595
  • [6] Event-Based Optical Flow Estimation with Spatio-Temporal Backpropagation Trained Spiking Neural Network
    Zhang, Yisa
    Lv, Hengyi
    Zhao, Yuchen
    Feng, Yang
    Liu, Hailong
    Bi, Guoling
    MICROMACHINES, 2023, 14 (01)
  • [7] Self-Supervised Optical Flow with Spiking Neural Networks and Event Based Cameras
    Chaney, Kenneth
    Panagopoulou, Artemis
    Lee, Chankyu
    Roy, Kaushik
    Daniilidis, Kostas
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 5892 - 5899
  • [8] 3D-FlowNet: Event-based optical flow estimation with 3D representation
    Sun, Haixin
    Dao, Minh-Quan
    Fremont, Vincent
    2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2022, : 1845 - 1850
  • [9] EVENT-BASED MULTIMODAL SPIKING NEURAL NETWORK WITH ATTENTION MECHANISM
    Liu, Qianhui
    Xing, Dong
    Feng, Lang
    Tang, Huajin
    Pan, Gang
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8922 - 8926
  • [10] Optical Flow Estimation Using Dual Self-Attention Pyramid Networks
    Zhai, Mingliang
    Xiang, Xuezhi
    Zhang, Rongfang
    Lv, Ning
    El Saddik, Abdulmotaleb
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (10) : 3663 - 3674