A neuromorphic dataset for tabletop object segmentation in indoor cluttered environment

Cited by: 2
Authors
Huang, Xiaoqian [1,2]
Kachole, Sanket [3]
Ayyad, Abdulla [1]
Naeini, Fariborz Baghaei [3]
Makris, Dimitrios [3]
Zweiri, Yahya [1,4]
Affiliations
[1] Khalifa Univ, Adv Res & Innovat Ctr ARIC, Abu Dhabi, U Arab Emirates
[2] Khalifa Univ, Khalifa Univ Ctr Autonomous Robot Syst KUCARS, Abu Dhabi, U Arab Emirates
[3] Kingston Univ, Sch Comp Sci & Math, London, England
[4] Khalifa Univ, Dept Aerosp Engn, Abu Dhabi, U Arab Emirates
Keywords
VISION;
DOI
10.1038/s41597-024-02920-1
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Event-based cameras are commonly leveraged to mitigate issues that plague conventional cameras, such as motion blur, low dynamic range, and limited temporal sampling. However, dedicated event-based datasets for benchmarking segmentation algorithms remain scarce, especially ones that provide the depth information critical for occluded scenes. In response, this paper introduces the Event-based Segmentation Dataset (ESD), a high-quality 3D spatial-temporal event dataset for object segmentation in cluttered indoor environments. ESD comprises 145 sequences with 14,166 manually annotated RGB frames, together with 21.88 million and 20.80 million events recorded by two stereo-configured event-based cameras. Notably, this densely annotated 3D spatial-temporal event-based segmentation benchmark for tabletop objects represents a pioneering effort, providing event-wise depth and annotated instance labels in addition to the corresponding RGBD frames. By releasing ESD, we aim to offer the research community a challenging, high-quality segmentation benchmark.
Pages: 17
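As a rough illustration of how the per-event annotations described in the abstract (pixel coordinates, timestamp, polarity, event-wise depth, and instance label) could be represented in memory, the minimal Python sketch below uses a NumPy structured array. The field names, binary layout, and helper functions are illustrative assumptions, not the actual ESD release format.

```python
# Minimal sketch of one possible in-memory layout for per-event annotations
# in a dataset like ESD. The dtype fields and file layout are assumptions
# made for illustration, not the published ESD format.
import numpy as np

# One record per event: pixel coordinates, timestamp, polarity,
# plus the event-wise depth and instance label the paper describes.
event_dtype = np.dtype([
    ("x", np.uint16),        # pixel column
    ("y", np.uint16),        # pixel row
    ("t", np.int64),         # timestamp in microseconds
    ("polarity", np.int8),   # sign of the brightness change
    ("depth", np.float32),   # per-event depth annotation
    ("label", np.int16),     # instance id (e.g. 0 for background)
])

def load_events(path: str) -> np.ndarray:
    """Load one camera's event stream stored as a flat binary structured array (assumed layout)."""
    return np.fromfile(path, dtype=event_dtype)

def events_in_window(events: np.ndarray, t_start: int, t_end: int) -> np.ndarray:
    """Return the events whose timestamps fall in [t_start, t_end)."""
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    return events[mask]
```

In such a layout, the events associated with a given annotated RGB frame could be retrieved by slicing the stream to that frame's time window with a call like events_in_window.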