Skeleton Ground Truth Extraction: Methodology, Annotation Tool and Benchmarks

Cited by: 1
Authors
Yang, Cong [1 ]
Indurkhya, Bipin [2 ]
See, John [3 ]
Gao, Bo [4 ]
Ke, Yan [4 ]
Boukhers, Zeyd [5 ]
Yang, Zhenyu [6 ]
Grzegorzek, Marcin [7 ]
Affiliations
[1] Soochow Univ, Suzhou, Peoples R China
[2] Jagiellonian Univ, Krakow, Poland
[3] Heriot Watt Univ Malaysia, Putrajaya, Malaysia
[4] Clobot, Shanghai, Peoples R China
[5] Fraunhofer FIT, St Augustin, Germany
[6] Southeast Univ, Nanjing, Peoples R China
[7] Univ Lubeck, Lubeck, Germany
Keywords
Shape skeletons; Image; Classification; Recognition
DOI
10.1007/s11263-023-01926-3
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Skeleton Ground Truth (GT) is critical to the success of supervised skeleton extraction methods, especially with the popularity of deep learning techniques. Furthermore, we see skeleton GTs used not only for training skeleton detectors with Convolutional Neural Networks (CNN), but also for evaluating skeleton-related pruning and matching algorithms. However, most existing shape and image datasets suffer from the lack of skeleton GT and inconsistency of GT standards. As a result, it is difficult to evaluate and reproduce CNN-based skeleton detectors and algorithms on a fair basis. In this paper, we present a heuristic strategy for object skeleton GT extraction in binary shapes and natural images. Our strategy is built on an extended theory of diagnosticity hypothesis, which enables encoding human-in-the-loop GT extraction based on clues from the target's context, simplicity, and completeness. Using this strategy, we developed a tool, SkeView, to generate skeleton GT of 17 existing shape and image datasets. The GTs are then structurally evaluated with representative methods to build viable baselines for fair comparisons. Experiments demonstrate that GTs generated by our strategy yield promising quality with respect to standard consistency, and also provide a balance between simplicity and completeness.
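The abstract only summarizes the approach at a high level. As a concrete point of reference, the minimal Python sketch below shows a generic baseline for extracting and pruning a skeleton from a binary shape with scikit-image's medial_axis. It is not the SkeView tool or the heuristic GT strategy described in the paper; the example shape, the distance-based saliency score, and the pruning threshold are all illustrative assumptions.

# Illustrative sketch only: NOT the paper's SkeView strategy, just a generic
# skeleton-extraction baseline; shape, threshold and pruning rule are assumed.
import numpy as np
from skimage.draw import ellipse
from skimage.morphology import medial_axis

# Hypothetical binary silhouette (an ellipse) standing in for a dataset shape.
shape = np.zeros((200, 300), dtype=bool)
rr, cc = ellipse(100, 150, 60, 120)
shape[rr, cc] = True

# Medial axis plus the distance transform; distance to the boundary can serve
# as a crude saliency score for each skeleton pixel.
skeleton, distance = medial_axis(shape, return_distance=True)

# Drop low-saliency pixels. Raising the (arbitrary) threshold favors
# simplicity; lowering it favors completeness, the trade-off the paper's
# human-in-the-loop GT strategy aims to balance in a principled way.
threshold = 0.1 * distance.max()
pruned = skeleton & (distance > threshold)

print(f"skeleton pixels: {int(skeleton.sum())}, after pruning: {int(pruned.sum())}")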
Pages: 1219-1241
Page count: 23
Related Papers (50 total)
  • [1] Skeleton Ground Truth Extraction: Methodology, Annotation Tool and Benchmarks
    Cong Yang
    Bipin Indurkhya
    John See
    Bo Gao
    Yan Ke
    Zeyd Boukhers
    Zhenyu Yang
    Marcin Grzegorzek
    International Journal of Computer Vision, 2024, 132 : 1219 - 1241
  • [2] Introduction to a large-scale general purpose ground truth database: Methodology, annotation tool and benchmarks
    Yao, Benjamin
    Yang, Xiong
    Zhu, Song-Chun
    ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, 2007, 4679 : 169 - +
  • [3] DocTAG: A Customizable Annotation Tool for Ground Truth Creation
    Giachelle, Fabio
    Irrera, Ornella
    Silvello, Gianmaria
    ADVANCES IN INFORMATION RETRIEVAL, PT II, 2022, 13186 : 288 - 293
  • [4] Interactive Video Annotation Tool for Generating Ground Truth Information
    Park, Sungjoo
    Yang, Chang Mo
    2019 ELEVENTH INTERNATIONAL CONFERENCE ON UBIQUITOUS AND FUTURE NETWORKS (ICUFN 2019), 2019, : 552 - 554
  • [5] Ground truth and benchmarks for performance evaluation
    Takeuchi, A
    Shneier, M
    Hong, T
    Chang, T
    Scrapper, C
    Cheok, G
    UNMANNED GROUND VEHICLE TECHNOLOGY V, 2003, 5083 : 408 - 413
  • [6] Assisted Ground truth generation using Interactive Segmentation on a Visualization and Annotation Tool
    Sampathkumar, Urmila
    Prasath, V. B. Surya
    Meena, Sachin
    Palaniappan, Kannappan
    2016 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR), 2016,
  • [7] Ground truth annotation of traffic video data
    Mossi, Jose M.
    Albiol, Antonio
    Albiol, Alberto
    Oliver, Javier
    MULTIMEDIA TOOLS AND APPLICATIONS, 2014, 70 (01) : 461 - 474
  • [8] Ground truth annotation of traffic video data
    Jose M. Mossi
    Antonio Albiol
    Alberto Albiol
    Javier Oliver
    Multimedia Tools and Applications, 2014, 70 : 461 - 474
  • [9] Empirical methodology for crowdsourcing ground truth
    Dumitrache, Anca
    Inel, Oana
    Timmermans, Benjamin
    Ortiz, Carlos
    Sips, Robert-Jan
    Aroyo, Lora
    Welty, Chris
    SEMANTIC WEB, 2021, 12 (03) : 403 - 421
  • [10] Annotation Tool for Precise Emotion Ground Truth Label Acquisition while Watching 360° VR Videos
    Xue, Tong
    El Ali, Abdallah
    Ding, Gangyi
    Cesar, Pablo
    2020 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY (AIVR 2020), 2020, : 371 - 372