EasyLabel: A Semi-Automatic Pixel-wise Object Annotation Tool for Creating Robotic RGB-D Datasets

Cited by: 31
Authors
Suchi, Markus [1 ]
Patten, Timothy [1 ]
Fischinger, David [2 ]
Vincze, Markus [1 ]
Affiliations
[1] TU Wien, Automat & Control Inst, Vis Robot Lab, A-1040 Vienna, Austria
[2] Aeolus Robot Inc, A-1010 Vienna, Austria
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
DOI
10.1109/icra.2019.8793917
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Developing robot perception systems for recognizing objects in the real world requires computer vision algorithms to be carefully scrutinized with respect to the expected operating domain. This demands large quantities of ground truth data to rigorously evaluate the performance of algorithms. This paper presents the EasyLabel tool for easily acquiring high-quality ground truth annotation of objects at the pixel level in densely cluttered scenes. In a semi-automatic process, complex scenes are incrementally built and EasyLabel exploits depth changes to extract precise object masks at each step. We use this tool to generate the Object Cluttered Indoor Dataset (OCID), which captures diverse settings of objects, background, context, sensor-to-scene distance, viewpoint angle and lighting conditions. OCID is used to perform a systematic comparison of existing object segmentation methods. The baseline comparison supports the need for pixel- and object-wise annotation to progress robot vision towards realistic applications. This insight reveals the usefulness of EasyLabel and OCID for better understanding the challenges that robots face in the real world.
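The depth-change idea behind the mask extraction can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes two aligned depth frames from a fixed camera, captured before and after a single object is placed, and treats pixels whose depth decreased by more than a noise threshold as the new object's mask. All names and threshold values below are illustrative assumptions.

import numpy as np

def extract_object_mask(depth_before, depth_after,
                        min_change_m=0.01, min_valid_m=0.1):
    # depth_before, depth_after: HxW arrays of depth in metres from the
    # same fixed viewpoint, before and after placing one new object.
    # Near-zero depth values are sensor dropouts; exclude those pixels.
    valid = (depth_before > min_valid_m) & (depth_after > min_valid_m)
    # A newly placed object occludes the scene behind it, so depth
    # decreases at its pixels; the threshold rejects sensor noise.
    closer = (depth_before - depth_after) > min_change_m
    return valid & closer

Repeating such a step after every placement yields one pixel-wise mask per object, which is how an incrementally built cluttered scene can be labelled without manual outlining.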
Pages: 6678-6684
Page count: 7
Related Papers
10 records in total
  • [1] SALT: A Semi-automatic Labeling Tool for RGB-D Video Sequences
    Stumpf, Dennis
    Krauss, Stephan
    Reis, Gerd
    Wasenmueller, Oliver
    Stricker, Didier
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 4: VISAPP, 2021, : 595 - 603
  • [2] OLT: A Toolkit for Object Labeling Applied to Robotic RGB-D Datasets
    Ruiz-Sarmiento, J. R.
    Galindo, C.
    Gonzalez-Jimenez, J.
2015 EUROPEAN CONFERENCE ON MOBILE ROBOTS (ECMR), 2015
  • [3] Lightweight Pixel-Wise Generative Robot Grasping Detection Based on RGB-D Dense Fusion
    Tian, Hongkun
    Song, Kechen
    Li, Song
    Ma, Shuai
    Yan, Yunhui
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [4] Real-Time Pixel-Wise Grasp Detection Based on RGB-D Feature Dense Fusion
    Wu, Yongxiang
    Fu, Yili
    Wang, Shuguo
    2021 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2021), 2021, : 970 - 975
  • [5] Hand-object Interaction based Semi-automatic Objects Annotation for Human Activity Datasets
    Wu, Yuankai
    Gu, Zhouyi
    Zakour, Marsil
    Chaudhari, Rahul Gopal
2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2022
  • [6] Semi-automatic 3D Object Keypoint Annotation and Detection for the Masses
    Blomqvist, Kenneth
    Chung, Jen Jen
    Ott, Lionel
    Siegwart, Roland
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3908 - 3914
  • [7] Robotic Grasping of Target Objects Based on Semi Automated Annotation Approach with RGB-D Camera
    Deng, Haonan
    Wei, Yuzhang
    Xu, Qingsong
    2022 IEEE 17TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2022, : 973 - 978
  • [8] Single RGB Image 6D Object Grasping System Using Pixel-Wise Voting Network
    Zhang, Zhongjie
    Zhou, Chengzhe
    Koike, Yasuharu
    Li, Jiamao
    MICROMACHINES, 2022, 13 (02)
  • [9] Automation of "Ground Truth" Annotation for Multi-View RGB-D Object Instance Recognition Datasets
    Aldoma, Aitor
    Faeulhammer, Thomas
    Vincze, Markus
    2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2014), 2014, : 5016 - 5023
  • [10] VAST (Volume Annotation and Segmentation Tool): Efficient Manual and Semi-Automatic Labeling of Large 3D Image Stacks
    Berger, Daniel R.
    Seung, H. Sebastian
    Lichtman, Jeff W.
    FRONTIERS IN NEURAL CIRCUITS, 2018, 12