Discrete time convolution for fast event-based stereo

Times Cited: 9
Authors
Zhang, Kaixuan [1 ,3 ]
Che, Kaiwei [2 ,3 ]
Zhang, Jianguo [1 ,4 ]
Cheng, Jie [3 ]
Zhang, Ziyang [3 ]
Guo, Qinghai [3 ]
Leng, Luziwei [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Peoples R China
[2] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen, Peoples R China
[3] Huawei Technol, ACS Lab, Shenzhen, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
NETWORKS;
DOI
10.1109/CVPR52688.2022.00848
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Inspired by the biological retina, the dynamic vision sensor transmits events of instantaneous changes in pixel intensity, giving it a series of advantages over traditional frame-based cameras, such as high dynamic range, high temporal resolution and low power consumption. However, extracting information from highly asynchronous event data is a challenging task. Inspired by the continuous dynamics of biological neuron models, we propose a novel encoding method for sparse events, continuous time convolution (CTC), which learns to model the spatial features of the data with intrinsic dynamics. By adopting channel-wise parameterization, the temporal dynamics of the model are synchronized within the same feature map and diverge across different ones, enabling the model to embed data at a variety of temporal scales. Abstracted from CTC, we further develop discrete time convolution (DTC), which accelerates the process at lower computational cost. We apply these methods to event-based multi-view stereo matching, where they surpass state-of-the-art methods on the benchmark criteria of the MVSEC dataset. Spatially sparse event data often lead to inaccurate estimation of edges and local contours. To address this problem, we propose a dual-path architecture in which the feature map is complemented by underlying edge information extracted from the original events with spatially-adaptive denormalization. We demonstrate the superiority of our model in terms of speed (up to 110 FPS), accuracy and robustness, showing great potential for real-time fast depth estimation. Finally, we perform experiments on the recent DSEC dataset to demonstrate the general applicability of our model.
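The abstract describes the core idea only at a high level: event frames are integrated by a convolutional encoder whose temporal dynamics are parameterized per channel, so that different feature maps retain events over different time scales. The snippet below is a minimal, hypothetical PyTorch sketch in that spirit, not the authors' implementation; the class name DTCEncoder, the voxel-grid input layout, and the sigmoid-parameterized per-channel decay are assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's code): a DTC-style encoder that integrates
# binned event frames with a learnable, channel-wise decay, so each feature
# channel accumulates events on its own temporal scale.
import torch
import torch.nn as nn

class DTCEncoder(nn.Module):
    def __init__(self, in_channels=2, out_channels=32):
        super().__init__()
        # Spatial feature extraction applied to every temporal bin.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # One learnable decay parameter per output channel (channel-wise dynamics).
        self.decay_logit = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # x: (batch, time_bins, in_channels, height, width) event voxel grid.
        b, t, c, h, w = x.shape
        decay = torch.sigmoid(self.decay_logit).view(1, -1, 1, 1)  # values in (0, 1)
        state = None
        for i in range(t):
            feat = self.spatial(x[:, i])                 # per-bin spatial features
            state = feat if state is None else decay * state + feat
        return state                                     # temporally integrated feature map

# Usage: 5 temporal bins of a 2-polarity event voxel grid at 260x346 (MVSEC-like resolution).
events = torch.zeros(1, 5, 2, 260, 346)
features = DTCEncoder()(events)
print(features.shape)  # torch.Size([1, 32, 260, 346])
```

The recurrence `state = decay * state + feat` is a discrete leaky integrator; because `decay` is shared within a channel but differs across channels, the encoder embeds the event stream at multiple temporal scales, which is the property the abstract attributes to CTC/DTC.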
Pages: 8666-8676
Page count: 11
Related Papers
50 records in total
  • [11] Event-based stereo matching using semiglobal matching
    Xie, Zhen
    Zhang, Jianhua
    Wang, Pengfei
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2018, 15 (01):
  • [12] Fast Event-based Double Integral for Real-time Robotics
    Lin, Shijie
    Zhang, Yingqiang
    Huang, Dongyue
    Zhou, Bin
    Luo, Xiaowei
    Pan, Jia
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 796 - 803
  • [14] Event-based Finite-time Boundedness of Discrete-time Network Systems
    Zhang, Yingqi
    Zhan, Miaojun
    Shi, Yan
    Liu, Caixia
    INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS, 2020, 18 (10) : 2562 - 2571
  • [15] Event-Based Fault Diagnosis of Networked Discrete Event Systems
    Ren, Kexin
    Zhang, Zhipeng
    Xia, Chengyi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (03) : 1787 - 1791
  • [16] ESVIO: Event-Based Stereo Visual-Inertial Odometry
    Liu, Zhe
    Shi, Dianxi
    Li, Ruihao
    Yang, Shaowu
    SENSORS, 2023, 23 (04)
  • [17] EOMVS: Event-Based Omnidirectional Multi-View Stereo
    Cho, Hoonhee
    Jeong, Jaeseok
    Yoon, Kuk-Jin
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04): : 6709 - 6716
  • [18] Event-Based Stereo Depth Estimation Using Belief Propagation
    Xie, Zhen
    Chen, Shengyong
    Orchard, Garrick
    FRONTIERS IN NEUROSCIENCE, 2017, 11
  • [19] Spiking Cooperative Network Implemented on FPGA for Real-Time Event-Based Stereo System
    Kim, Jung-Gyun
    Seo, Donghwan
    Lee, Byung-Geun
    IEEE ACCESS, 2022, 10 : 130806 - 130815
  • [20] Enumeration in time is irresistibly event-based
    Joan Danielle K. Ongchoco
    Brian J. Scholl
    Psychonomic Bulletin & Review, 2020, 27 : 307 - 314