Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion

Cited by: 15
Authors
Ghosh, Suman [1 ]
Gallego, Guillermo [1 ,2 ,3 ]
Affiliations
[1] Tech Univ Berlin, Dept Elect Engn & Comp Sci, D-10623 Berlin, Germany
[2] Einstein Ctr Digital Future, D-10117 Berlin, Germany
[3] Sci Intelligence Excellence Cluster, D-10587 Berlin, Germany
Keywords
event cameras; neuromorphic processing; robotics; spatial AI; stereo depth estimation; CONTRAST MAXIMIZATION; VISUAL ODOMETRY; STEREO VISION; DATASET; MOTION; SPACE;
DOI
10.1002/aisy.202200221
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Event cameras are bio-inspired sensors that offer advantages over traditional cameras. They operate asynchronously, sampling the scene at microsecond resolution and producing a stream of brightness changes. This unconventional output has sparked novel computer vision methods to unlock the cameras' potential. Here, the problem of event-based stereo 3D reconstruction for SLAM is considered. Most event-based stereo methods attempt to exploit the high temporal resolution of the camera and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, this work investigates how to estimate depth without explicit data association by fusing disparity space images (DSIs) originating from efficient monocular methods. Fusion theory is developed and applied to design multi-camera 3D reconstruction algorithms that produce state-of-the-art results, as confirmed by comparisons with four baseline methods and tests on a variety of available datasets.
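
The abstract describes estimating depth by fusing per-camera disparity space images (DSIs) rather than by explicitly matching events across cameras. The following minimal Python sketch (not from the paper) illustrates the idea under stated assumptions: each camera contributes a ray-count DSI over a shared reference volume of shape (H, W, D), the volumes are fused voxel-wise (a harmonic mean is one possible conjunctive choice, used here for illustration), and depth is read off per pixel as the best-scoring plane, with low-scoring pixels rejected as outliers. The array shapes, the fusion operator, and the rejection threshold are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of multi-camera depth estimation by DSI fusion.
# Assumptions (not taken from the abstract): each camera yields a DSI of
# back-projected event-ray counts over a shared reference volume (H, W, D);
# the harmonic-mean fusion and the score threshold are illustrative choices.
import numpy as np

def fuse_dsis(dsis, eps=1e-6):
    """Fuse per-camera DSIs voxel-wise with a harmonic mean.

    dsis: list of (H, W, D) arrays of non-negative ray counts.
    The harmonic mean is large only where all cameras agree, which
    suppresses voxels supported by a single camera.
    """
    stack = np.stack(dsis, axis=0).astype(np.float64)
    return len(dsis) / np.sum(1.0 / (stack + eps), axis=0)

def depth_from_dsi(fused, depth_values, min_score=1.0):
    """Per pixel, pick the depth plane that maximizes the fused score.

    depth_values: (D,) candidate depths of the reference volume's planes.
    Pixels whose best score falls below min_score are marked invalid (NaN),
    acting as a simple low-confidence / outlier rejection step.
    """
    best_idx = np.argmax(fused, axis=2)       # (H, W) index of best plane
    best_score = np.max(fused, axis=2)        # (H, W) score of best plane
    depth = depth_values[best_idx].astype(np.float64)
    depth[best_score < min_score] = np.nan
    return depth

# Toy usage: two cameras, 4x4 pixels, 8 depth planes of synthetic counts.
rng = np.random.default_rng(0)
dsi_left = rng.poisson(2.0, size=(4, 4, 8)).astype(float)
dsi_right = rng.poisson(2.0, size=(4, 4, 8)).astype(float)
fused = fuse_dsis([dsi_left, dsi_right])
depth_map = depth_from_dsi(fused, np.linspace(0.5, 4.0, 8), min_score=1.5)
print(depth_map)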
Pages: 21
Related Papers
50 records in total
  • [21] Decentralized Multi-Camera Fusion for Robust and Accurate Pose Estimation
    Assa, Akbar
    Janabi-Sharifi, Farrokh
    2013 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM): MECHATRONICS FOR HUMAN WELLBEING, 2013, : 1696 - 1701
  • [22] Multi-Camera Collaborative Depth Prediction via Consistent Structure Estimation
    Xu, Jialei
    Liu, Xianming
    Bai, Yuanchao
    Jiang, Junjun
    Wang, Kaixuan
    Chen, Xiaozhi
    Ji, Xiangyang
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 2730 - 2738
  • [23] Unsupervised Learning of Depth Estimation and Camera Pose With Multi-Scale GANs
    Xu, Yufan
    Wang, Yan
    Huang, Rui
    Lei, Zeyu
    Yang, Junyao
    Li, Zijian
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (10) : 17039 - 17047
  • [24] Multi-body Depth and Camera Pose Estimation from Multiple Views
    Dal Cin, Andrea Porfiri
    Boracchi, Giacomo
    Magri, Luca
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 17758 - 17768
  • [25] Depth map-based disparity estimation technique using multi-view and depth camera
    Um, Gi-Mun
    Kim, Seung-Man
    Hur, Namho
    Lee, Kwan Hang
    Lee, Soo In
    STEREOSCOPIC DISPLAYS AND VIRTUAL REALITY SYSTEMS XIII, 2006, 6055
  • [26] Monocular Depth and Velocity Estimation Based on Multi-Cue Fusion
    Qi, Chunyang
    Zhao, Hongxiang
    Song, Chuanxue
    Zhang, Naifu
    Song, Sinxin
    Xu, Haigang
    Xiao, Feng
    MACHINES, 2022, 10 (05)
  • [27] Rethinking Motion Estimation: An Outlier Removal Strategy in SORT for Multi-Object Tracking With Camera Moving
    Min, Zijian
    Hassan, Gundu Mohamed
    Jo, Geun-Sik
    IEEE ACCESS, 2024, 12 : 142819 - 142837
  • [28] DENSE DEPTH ESTIMATION FOR SURGICAL ENDOSCOPE ROBOT WITH MULTI-BASELINE DEPTH MAP FUSION
    Tan, Zhidong
    Song, Rihui
    Huang, Kai
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2230 - 2234
  • [29] Multimodal Monocular Dense Depth Estimation with Event-Frame Fusion Using Transformer
    Xiao, Baihui
    Xu, Jingzehua
    Zhang, Zekai
    Xing, Tianyu
    Wang, Jingjing
    Ren, Yong
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT II, 2024, 15017 : 419 - 433
  • [30] EGA-Depth: Efficient Guided Attention for Self-Supervised Multi-Camera Depth Estimation
    Shi, Yunxiao
    Cai, Hong
    Ansari, Amin
    Porikli, Fatih
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2023, : 119 - 129