Plug and Play Active Learning for Object Detection

Cited by: 5
Authors
Yang, Chenhongyi [1 ]
Huang, Lichao [2 ]
Crowley, Elliot J. [1 ]
Affiliations
[1] Univ Edinburgh, Sch Engn, Edinburgh, Midlothian, Scotland
[2] Horizon Robot, Beijing, Peoples R China
DOI: 10.1109/CVPR52733.2024.01684
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Annotating datasets for object detection is an expensive and time-consuming endeavor. To minimize this burden, active learning (AL) techniques are employed to select the most informative samples for annotation within a constrained "annotation budget". Traditional AL strategies typically rely on model uncertainty or sample diversity for query sampling, while more advanced methods have focused on developing AL-specific object detector architectures to enhance performance. However, these specialized approaches are not readily adaptable to different object detectors due to the significant engineering effort required for integration. To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection. PPAL is a two-stage method comprising uncertainty-based and diversity-based sampling phases. In the first stage, our Difficulty Calibrated Uncertainty Sampling leverages a category-wise difficulty coefficient that combines both classification and localisation difficulties to re-weight instance uncertainties, from which we sample a candidate pool for the subsequent diversity-based sampling. In the second stage, we propose Category Conditioned Matching Similarity to better compute the similarities of multi-instance images as ensembles of their instance similarities, which is used by the k-Means++ algorithm to sample the final AL queries. PPAL makes no change to model architectures or detector training pipelines; hence it can be easily generalized to different object detectors. We benchmark PPAL on the MS-COCO and Pascal VOC datasets using different detector architectures and show that our method outperforms prior work by a large margin. Code is available at https://github.com/ChenhongyiYang/PPAL
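The two-stage selection pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the uncertainties, categories, difficulty coefficients, and per-image feature vectors are random stand-ins for real detector outputs, and image-level k-means++ seeding is used as a simplified proxy for the paper's Category Conditioned Matching Similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled pool: one uncertainty score, a dominant predicted category,
# and a feature vector per image (hypothetical stand-ins for detector outputs).
n_images, n_classes, feat_dim = 200, 5, 16
uncertainty = rng.random(n_images)
category = rng.integers(0, n_classes, n_images)
features = rng.normal(size=(n_images, feat_dim))

# Stage 1 sketch: calibrate uncertainties with a per-category difficulty
# coefficient, then keep the highest-scoring images as a candidate pool.
difficulty = rng.uniform(0.5, 1.5, n_classes)  # stand-in for learned coefficients
calibrated = uncertainty * difficulty[category]
pool_size, budget = 50, 10
candidates = np.argsort(calibrated)[-pool_size:]

# Stage 2 sketch: k-means++-style seeding over the candidate pool, so each new
# query is drawn with probability proportional to its squared distance from the
# queries already chosen -- a standard proxy for diversity-based sampling.
def kmeanspp_select(feats, k, rng):
    chosen = [int(rng.integers(len(feats)))]
    for _ in range(k - 1):
        # Squared distance from every point to its nearest already-chosen point.
        d2 = ((feats[:, None, :] - feats[chosen][None, :, :]) ** 2).sum(-1).min(axis=1)
        chosen.append(int(rng.choice(len(feats), p=d2 / d2.sum())))
    return chosen

picked = kmeanspp_select(features[candidates], budget, rng)
queries = [int(candidates[i]) for i in picked]  # 10 images sent for annotation
```

Because distances to already-chosen points are zero, the seeding never selects duplicates; swapping in an instance-ensemble similarity in place of Euclidean distance is where the paper's method departs from this sketch.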
Pages: 17784-17793
Page count: 10
Related Papers (50 in total)
  • [41] PLUG AND PLAY
    ALLEN, D
    BYTE, 1994, 19 (09): : 14 - 14
  • [42] PLUG AND PLAY
    SCHNEIDER, D
    SCIENTIFIC AMERICAN, 1995, 273 (06) : 16 - 16
  • [43] Plug and play
    Anon
    2001, EMAP Business Communications (28):
  • [44] PLUG AND PLAY
    CROFFORD, TR
    FORTUNE, 1993, 128 (02) : 38 - 38
  • [45] Plug and Play
    Koll-Schretzenmayr, Martina
    DISP, 2015, 51 (03): : 2 - 3
  • [46] Plug & play
    Osenga, M
    DIESEL PROGRESS NORTH AMERICAN EDITION, 2004, 70 (01): : 2 - 2
  • [47] Plug and play
    Chicurel, M
    Brenner, S
    Cohen, P
    Tjian, R
    Aschwanden, C
    Schrope, M
    NEW SCIENTIST, 2000, 166 (2234) : A1 - A3
  • [49] A plug-and-play image enhancement model for end-to-end object detection in low-light condition
    Yuan, Jiaojiao
    Hu, Yongli
    Sun, Yanfeng
    Wang, Boyue
    Yin, Baocai
    MULTIMEDIA SYSTEMS, 2024, 30 (01)
  • [50] Improving small object detection via context-aware and feature-enhanced plug-and-play modules
    He, Xiao
    Zheng, Xiaolong
    Hao, Xiyu
    Jin, Heng
    Zhou, Xiangming
    Shao, Lihuan
    Journal of Real-Time Image Processing, 2024, 21