A multi-camera dataset for depth estimation in an indoor scenario

Cited by: 9
Authors
Marin, Giulio [1 ]
Agresti, Gianluca [1 ]
Minto, Ludovico [1 ]
Zanuttigh, Pietro [1 ]
Affiliations
[1] Univ Padua, Padua, Italy
Source
DATA IN BRIEF | 2019, Vol. 27
Keywords
Time-of-Flight; Stereo vision; Active stereo; Data fusion; Depth estimation;
DOI
10.1016/j.dib.2019.104619
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Time-of-Flight (ToF) sensors and stereo vision systems are two of the most widely used depth acquisition devices in commercial and industrial applications, and they have complementary strengths and weaknesses. For this reason, combining the data acquired by these devices can improve the final depth estimation accuracy. This paper introduces a dataset acquired with a multi-camera system composed of a Microsoft Kinect v2 ToF sensor, an Intel RealSense R200 active stereo sensor, and a Stereolabs ZED passive stereo camera system. The acquired scenes include indoor settings under different external lighting conditions. The depth ground truth has been acquired for each scene of the dataset using a line laser. The data can be used to develop fusion and denoising algorithms for depth estimation and to test them under different lighting conditions. A subset of the data has already been used for the experimental evaluation of the work "Stereo and ToF Data Fusion by Learning from Synthetic Data". (c) 2019 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
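To illustrate how complementary ToF and stereo measurements can be combined, the sketch below shows a simple confidence-weighted blend of two depth maps. This is a minimal illustration, not the fusion method evaluated by the authors: the array shapes, the confidence values, and the assumption that both maps are already registered to a common viewpoint and resolution are placeholders, and the input arrays are synthetic rather than loaded from the dataset.

```python
# Minimal sketch (not the authors' fusion method): confidence-weighted blend
# of a ToF depth map and a stereo depth map, assuming both are registered to
# a common viewpoint and resolution. Inputs here are synthetic placeholders.
import numpy as np


def fuse_depth(tof_depth, stereo_depth, tof_conf, stereo_conf, eps=1e-6):
    """Blend two depth maps pixel-wise, weighting each by its confidence.

    All inputs are HxW float arrays; depths in meters, confidences in [0, 1].
    Pixels where both confidences are ~0 are marked invalid (NaN).
    """
    w_sum = tof_conf + stereo_conf
    fused = (tof_conf * tof_depth + stereo_conf * stereo_depth) / (w_sum + eps)
    fused[w_sum < eps] = np.nan  # no reliable measurement from either sensor
    return fused


if __name__ == "__main__":
    h, w = 424, 512  # Kinect v2 depth resolution, used only as an example size
    rng = np.random.default_rng(0)
    tof = 2.0 + 0.01 * rng.standard_normal((h, w))     # placeholder ToF depth (m)
    stereo = 2.0 + 0.05 * rng.standard_normal((h, w))  # placeholder stereo depth (m)
    tof_c = np.full((h, w), 0.8)                       # placeholder confidences
    stereo_c = np.full((h, w), 0.4)
    fused = fuse_depth(tof, stereo, tof_c, stereo_c)
    print("mean fused depth:", np.nanmean(fused))
```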
Pages: 7
Related Papers
50 records in total
  • [31] Bayesian estimation of common areas in multi-camera systems
    Szlavik, Zoltan
    Sziranyi, Tamas
    2006 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP 2006, PROCEEDINGS, 2006, : 1045 - +
  • [32] YOLO Multi-Camera Object Detection and Distance Estimation
    Strbac, Bojan
    Gostovic, Marko
    Lukac, Zeljko
    Samardzija, Dragan
    2020 ZOOMING INNOVATION IN CONSUMER TECHNOLOGIES CONFERENCE (ZINC), 2020, : 26 - 30
  • [33] VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication
    Denina, Giovanni
    Bhanu, Bir
    Hoang Thanh Nguyen
    Ding, Chong
    Kamal, Ahmed
    Ravishankar, Chinya
    Roy-Chowdhury, Amit
    Ivers, Allen
    Varda, Brenda
    DISTRIBUTED VIDEO SENSOR NETWORKS, 2011, : 335 - 347
  • [34] Minimal Solutions for Pose Estimation of a Multi-Camera System
    Lee, Gim Hee
    Li, Bo
    Pollefeys, Marc
    Fraundorfer, Friedrich
    ROBOTICS RESEARCH, ISRR, 2016, 114 : 521 - 538
  • [35] WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection
    Chavdarova, Tatjana
    Baque, Pierre
    Bouquet, Stephane
    Maksai, Andrii
    Jose, Cijo
    Bagautdinov, Timur
    Lettry, Louis
    Fua, Pascal
    Van Gool, Luc
    Fleuret, Francois
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 5030 - 5039
  • [36] Minimal solutions for the multi-camera pose estimation problem
    Lee, Gim Hee
    Li, Bo
    Pollefeys, Marc
    Fraundorfer, Friedrich
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2015, 34 (07): 837 - 848
  • [37] Multi-camera System Calibration of Indoor Mobile Robot Based on SLAM
    Zhu, Ying
    Wu, Yuxin
    Zhang, Yawan
    Qu, Fukang
    2021 3RD INTERNATIONAL CONFERENCE ON MACHINE LEARNING, BIG DATA AND BUSINESS INTELLIGENCE (MLBDBI 2021), 2021, : 240 - 244
  • [38] A Dataset for Persistent Multi-Target Multi-Camera Tracking in RGB-D
    Layne, Ryan
    Hannuna, Sion
    Camplani, Massimo
    Hall, Jake
    Hospedales, Timothy M.
    Xiang, Tao
    Mirmehdi, Majid
    Damen, Dima
    2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, : 1462 - 1470
  • [39] Lightweight Indoor Multi-Object Tracking in Overlapping FOV Multi-Camera Environments
    Jang, Jungik
    Seon, Minjae
    Choi, Jaehyuk
    SENSORS, 2022, 22 (14)
  • [40] Multi-Camera Saliency
    Luo, Yan
    Jiang, Ming
    Wong, Yongkang
    Zhao, Qi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2015, 37 (10) : 2057 - 2070