Efficient Object Annotation via Speaking and Pointing

Cited by: 0
Authors
Michael Gygli
Vittorio Ferrari
Affiliation
Google Research
Source
International Journal of Computer Vision, 2020, 128(5): 1061-1075
Keywords
Speech-based annotation; Object annotation; Multimodal interfaces; Large-scale computer vision
DOI
Not available
Abstract
Deep neural networks deliver state-of-the-art visual recognition, but they rely on large datasets, which are time-consuming to annotate. These datasets are typically annotated in two stages: (1) determining the presence of object classes at the image level and (2) marking the spatial extent for all objects of these classes. In this work we use speech, together with mouse inputs, to speed up this process. We first improve stage one by letting annotators indicate object class presence via speech. We then combine the two stages: annotators draw an object bounding box via the mouse and simultaneously provide its class label via speech. Using speech has distinct advantages over relying on mouse inputs alone. First, it is fast and allows for direct access to the class name, by simply saying it. Second, annotators can simultaneously speak and mark an object location. Finally, speech-based interfaces can be kept extremely simple, hence using them requires less mouse movement than existing approaches. Through extensive experiments on the COCO and ILSVRC datasets we show that our approach yields high-quality annotations with significant speed gains. Stage one takes 2.3×–14.9× less annotation time than existing methods based on a hierarchical organization of the classes to be annotated. Moreover, when combining the two stages, we find that object class labels come for free: annotating them at the same time as bounding boxes has zero additional cost. On COCO, this makes the overall process 1.9× faster than the two-stage approach.
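The central mechanism described in the abstract, pairing each mouse-drawn bounding box with the class name spoken while it is being drawn, can be illustrated with a small sketch. The following Python fragment is a hypothetical illustration, not the authors' implementation: the BoxEvent and SpeechEvent types, the pair_boxes_with_speech helper, the timestamp-matching rule, and the max_gap threshold are all invented here for the example; a real system would take box drag times from the annotation interface and utterance timestamps from an automatic speech recognizer.

# Hypothetical sketch: assign each bounding box the utterance spoken closest
# in time to its mouse-drag interval, so the class label "comes for free".
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class BoxEvent:
    # A bounding box drawn with the mouse; drag start/end times in seconds.
    x: float
    y: float
    w: float
    h: float
    t_start: float
    t_end: float


@dataclass
class SpeechEvent:
    # A transcribed utterance (e.g. one word from a speech recognizer) and its timestamp.
    transcript: str
    t: float


def pair_boxes_with_speech(boxes: List[BoxEvent],
                           utterances: List[SpeechEvent],
                           max_gap: float = 2.0) -> List[Tuple[BoxEvent, Optional[str]]]:
    # For each box, pick the utterance closest in time to the drag interval;
    # utterances more than max_gap seconds away are ignored.
    labeled = []
    for box in boxes:
        best: Optional[SpeechEvent] = None
        best_dist = max_gap
        for utt in utterances:
            if box.t_start <= utt.t <= box.t_end:
                dist = 0.0  # spoken while the box was being drawn
            else:
                dist = min(abs(utt.t - box.t_start), abs(utt.t - box.t_end))
            if dist <= best_dist:
                best_dist = dist
                best = utt
        labeled.append((box, best.transcript if best else None))
    return labeled


if __name__ == "__main__":
    boxes = [BoxEvent(x=10, y=20, w=50, h=80, t_start=1.0, t_end=2.5)]
    speech = [SpeechEvent(transcript="dog", t=1.8)]
    print(pair_boxes_with_speech(boxes, speech))  # box paired with label "dog"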
Pages: 1061-1075
Number of pages: 14
Related Papers (50 in total)
  • [1] Efficient Object Annotation via Speaking and Pointing
    Gygli, Michael
    Ferrari, Vittorio
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2020, 128 (05) : 1061 - 1075
  • [2] Extreme clicking for efficient object annotation
    Papadopoulos, Dim P.
    Uijlings, Jasper R. R.
    Keller, Frank
    Ferrari, Vittorio
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: CP38 - CP38
  • [3] Efficient Object Annotation for Surveillance and Automotive Applications
    Swetha, Sirnam
    Mishra, Anand
    Hegde, Guruprasad M.
    Jawahar, C. V.
    2016 IEEE WINTER APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2016
  • [4] Object recognition via recognition of finger pointing actions
    Hild, M
    Hashimoto, M
    Yoshida, K
    12TH INTERNATIONAL CONFERENCE ON IMAGE ANALYSIS AND PROCESSING, PROCEEDINGS, 2003: 88 - 93
  • [5] Pointing, speaking and meaning
    Schaefer, Robert
    OSTERREICHISCHE ZEITSCHRIFT FUER SOZIOLOGIE, 2013, 38 : 181 - 194
  • [6] Tolerating Annotation Displacement in Dense Object Counting via Point Annotation Probability Map
    Chen, Yuehai
    Yang, Jing
    Chen, Badong
    Du, Shaoyi
    Hua, Gang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 6359 - 6372
  • [7] Object pointing: A complement to bitmap pointing in GUIs
    Guiard, Y
    Blanch, R
    Beaudouin-Lafon, M
    GRAPHICS INTERFACE 2004, PROCEEDINGS, 2004: 9 - 16
  • [8] An efficient weakly semi-supervised method for object automated annotation
    Wang, Xingzheng
    Wei, Guoyao
    Chen, Songwei
    Liu, Jiehao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (03) : 9417 - 9440
  • [9] Efficient Object Placement via FTOPNet
    Ye, Guosheng
    Wang, Jianming
    Yang, Zizhong
    ELECTRONICS, 2023, 12 (19)