Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion

Cited by: 15
Authors
Ghosh, Suman [1]
Gallego, Guillermo [1,2,3]
Affiliations
[1] Tech Univ Berlin, Dept Elect Engn & Comp Sci, D-10623 Berlin, Germany
[2] Einstein Ctr Digital Future, D-10117 Berlin, Germany
[3] Sci Intelligence Excellence Cluster, D-10587 Berlin, Germany
Keywords
event cameras; neuromorphic processing; robotics; spatial AI; stereo depth estimation; CONTRAST MAXIMIZATION; VISUAL ODOMETRY; STEREO VISION; DATASET; MOTION; SPACE
DOI
10.1002/aisy.202200221
CLC Classification
TP [Automation & Computer Technology]
Subject Classification
0812
Abstract
Event cameras are bio-inspired sensors that offer advantages over traditional cameras. They operate asynchronously, sampling the scene at microsecond resolution and producing a stream of brightness changes. This unconventional output has sparked novel computer vision methods to unlock the camera's potential. Here, the problem of event-based stereo 3D reconstruction for SLAM is considered. Most event-based stereo methods attempt to exploit the high temporal resolution of the camera and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, this work investigates how to estimate depth without explicit data association by fusing disparity space images (DSIs) originating from efficient monocular methods. Fusion theory is developed and applied to design multi-camera 3D reconstruction algorithms that produce state-of-the-art results, as confirmed by comparisons with four baseline methods and tests on a variety of available datasets.
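The core idea in the abstract — fusing per-camera DSIs instead of matching individual events — can be illustrated with a minimal sketch. A DSI is a voxel grid of ray-count scores over depth hypotheses; fusing two DSIs voxel-wise and taking the per-pixel argmax over depth yields a depth map. The harmonic-mean fusion and the array shapes below are illustrative assumptions, not the paper's exact formulation (the paper studies several fusion functions):

```python
import numpy as np

def fuse_dsis_harmonic(dsi_a, dsi_b, eps=1e-9):
    """Fuse two disparity space images (DSIs) voxel-wise.

    Harmonic-mean fusion acts like a soft AND: a voxel keeps a high
    score only if BOTH cameras accumulate evidence there, which
    suppresses outliers visible to only one camera. This particular
    fusion function is an illustrative choice.
    """
    return 2.0 / (1.0 / (dsi_a + eps) + 1.0 / (dsi_b + eps))

def depth_map_from_dsi(dsi, depths):
    """Per-pixel depth = argmax of the fused DSI along the depth axis.

    Assumes dsi has shape (D, H, W) and depths has shape (D,).
    """
    best = np.argmax(dsi, axis=0)          # (H, W) index of best hypothesis
    return depths[best]                    # (H, W) metric depth map
```

For example, with one camera voting strongly at depth index 1 and the other voting weakly at the same index, the fused score still peaks there, while voxels supported by only one camera are pulled toward zero.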
Pages: 21
Related Papers (50 total)
  • [31] Global depth estimation for multi-view video coding using camera parameters
    Zhang, Xiaoyun
    Zhu, Weile
    Yang, George
    VISAPP 2008: PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 2, 2008, : 631 - +
  • [32] Multi-focus image fusion based on depth estimation in HSV space
    Kuang, Tingna
    Zhou, Haiyang
    Yu, Feihong
    AOPC 2020: OPTICAL SENSING AND IMAGING TECHNOLOGY, 2020, 11567
  • [33] Monocular endoscopy images depth estimation with multi-scale residual fusion
    Liu, Shiyuan
    Fan, Jingfan
    Yang, Yun
    Xiao, Deqiang
    Ai, Danni
    Song, Hong
    Wang, Yongtian
    Yang, Jian
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 169
  • [34] Multi-feature fusion enhanced monocular depth estimation with boundary awareness
    Song, Chao
    Chen, Qingjie
    Li, Frederick W. B.
    Jiang, Zhaoyi
    Zheng, Dong
    Shen, Yuliang
    Yang, Bailin
    VISUAL COMPUTER, 2024, 40 (07): : 4955 - 4967
  • [35] Monocular Depth Estimation Using Multi Scale Neural Network And Feature Fusion
    Sagar, Abhinav
    2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, : 656 - 662
  • [36] Depth Estimation of Single Defocused Images Based on Multi-Feature Fusion
    Cao, Fengyun
    TRAITEMENT DU SIGNAL, 2021, 38 (05) : 1353 - 1360
  • [37] Indoor position estimation using angle of arrival measurements: An efficient multi-anchor approach with outlier rejection
    Boquet, Guillem
    Boquet-Pujadas, Aleix
    Pisa, Ivan
    Dabak, Anand
    Vilajosana, Xavier
    Martinez, Borja
    INTERNET OF THINGS, 2024, 26
  • [38] STATE ESTIMATION WITH EVENT SENSORS: OBSERVABILITY ANALYSIS AND MULTI-SENSOR FUSION
    Liu, Xinhui
    Zheng, Kaikai
    Shi, Dawei
    Chen, Tongwen
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2024, 62 (01) : 167 - 190
  • [39] Multi-Camera Sensor Fusion for Visual Odometry using Deep Uncertainty Estimation
    Kaygusuz, Nimet
    Mendez, Oscar
    Bowden, Richard
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 2944 - 2949
  • [40] SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
    Wei, Yi
    Zhao, Linqing
    Zheng, Wenzhao
    Zhu, Zheng
    Rao, Yongming
    Huang, Guan
    Lu, Jiwen
    Zhou, Jie
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 539 - 549