Plug and Play Active Learning for Object Detection

Cited by: 5
Authors
Yang, Chenhongyi [1 ]
Huang, Lichao [2 ]
Crowley, Elliot J. [1 ]
Affiliations
[1] Univ Edinburgh, Sch Engn, Edinburgh, Midlothian, Scotland
[2] Horizon Robot, Beijing, Peoples R China
Keywords
DOI
10.1109/CVPR52733.2024.01684
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Annotating datasets for object detection is an expensive and time-consuming endeavor. To minimize this burden, active learning (AL) techniques are employed to select the most informative samples for annotation within a constrained "annotation budget". Traditional AL strategies typically rely on model uncertainty or sample diversity for query sampling, while more advanced methods have focused on developing AL-specific object detector architectures to enhance performance. However, these specialized approaches are not readily adaptable to different object detectors due to the significant engineering effort required for integration. To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection. PPAL is a two-stage method comprising uncertainty-based and diversity-based sampling phases. In the first stage, our Difficulty Calibrated Uncertainty Sampling leverages a category-wise difficulty coefficient that combines both classification and localisation difficulties to re-weight instance uncertainties, from which we sample a candidate pool for the subsequent diversity-based sampling. In the second stage, we propose Category Conditioned Matching Similarity to better compute the similarities of multi-instance images as ensembles of their instance similarities, which is used by the k-Means++ algorithm to sample the final AL queries. PPAL makes no change to model architectures or detector training pipelines; hence it can be easily generalized to different object detectors. We benchmark PPAL on the MS-COCO and Pascal VOC datasets using different detector architectures and show that our method outperforms prior work by a large margin. Code is available at https://github.com/ChenhongyiYang/PPAL
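The two-stage pipeline the abstract describes can be sketched in a few dozen lines. This is an illustrative simplification, not the authors' implementation: the per-category difficulty coefficient is stood in by each category's mean instance uncertainty (the paper combines classification and localisation difficulties), and the diversity stage uses greedy farthest-point (max-min) selection on plain image feature vectors as a deterministic stand-in for the paper's k-Means++ sampling over Category Conditioned Matching Similarity. All field names (`uncertainties`, `classes`, `feature`) are assumptions of this sketch.

```python
def ppal_style_query(images, budget, pool_factor=2):
    """Two-stage query selection in the spirit of PPAL (illustrative only).

    Each element of `images` is a dict with per-instance "uncertainties"
    and "classes" plus an image-level "feature" vector; these field names
    are assumptions for this sketch, not the authors' API.
    Returns the indices of the selected images.
    """
    # Stage 1: difficulty-calibrated uncertainty sampling. As a stand-in
    # for the paper's combined classification/localisation difficulty,
    # use each category's mean instance uncertainty as its coefficient.
    sums, counts = {}, {}
    for img in images:
        for u, c in zip(img["uncertainties"], img["classes"]):
            sums[c] = sums.get(c, 0.0) + u
            counts[c] = counts.get(c, 0) + 1
    difficulty = {c: sums[c] / counts[c] for c in sums}

    def image_score(i):
        # Re-weight each instance uncertainty by its category difficulty.
        return sum(u * difficulty[c]
                   for u, c in zip(images[i]["uncertainties"],
                                   images[i]["classes"]))

    order = sorted(range(len(images)), key=image_score, reverse=True)
    pool = order[:budget * pool_factor]  # candidate pool for stage 2
    if not pool:
        return []

    # Stage 2: diversity-based sampling. Greedy farthest-point (max-min)
    # selection on image features, a deterministic stand-in for the
    # k-Means++ sampling used in the paper.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    chosen, remaining = [pool[0]], set(pool[1:])
    while len(chosen) < min(budget, len(pool)) and remaining:
        far = max(remaining,
                  key=lambda i: min(dist2(images[i]["feature"],
                                          images[j]["feature"])
                                    for j in chosen))
        chosen.append(far)
        remaining.remove(far)
    return chosen
```

The oversampled candidate pool (here `pool_factor` times the budget) is what lets the diversity stage matter: stage 1 alone would return near-duplicate high-uncertainty images, while stage 2 spreads the final queries across feature space.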
Pages: 17784-17793
Page count: 10
Related papers
50 records total
  • [21] Learning to View: Decision Transformers for Active Object Detection
    Ding, Wenhao
    Majcherczyk, Nathalie
    Deshpande, Mohit
    Qi, Xuewei
    Zhao, Ding
    Madhivanan, Rajasimman
    Sen, Arnie
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 7140 - 7146
  • [22] Learning Active Basis Model for Object Detection and Recognition
    Wu, Ying Nian
    Si, Zhangzhang
    Gong, Haifeng
    Zhu, Song-Chun
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2010, 90 (02) : 198 - 235
  • [23] Employing feature mixture for active learning of object detection
    Zhang, Licheng
    Lam, Siew-Kei
    Luo, Dingsheng
    Wu, Xihong
    NEUROCOMPUTING, 2024, 594
  • [24] Splitting and Merging for Active Contours: Plug-and-Play
    Lashgari, Mojtaba
    Banerjee, Abhirup
    Rabbani, Hossein
    MATHEMATICS, 2025, 13 (06)
  • [25] Active Plug & Play distributed Raman temperature sensing
    Suh, Kwang
    Lee, Chung
    Sanders, Michael
    Kalar, Kent
    19TH INTERNATIONAL CONFERENCE ON OPTICAL FIBRE SENSORS, PTS 1 AND 2, 2008, 7004
  • [26] Active Learning Strategies for Weakly-Supervised Object Detection
    Vo, Huy V.
    Simeoni, Oriane
    Gidaris, Spyros
    Bursuc, Andrei
    Perez, Patrick
    Ponce, Jean
    COMPUTER VISION - ECCV 2022, PT XXX, 2022, 13690 : 211 - 230
  • [27] QBox: Partial Transfer Learning With Active Querying for Object Detection
    Tang, Ying-Peng
    Wei, Xiu-Shen
    Zhao, Borui
    Huang, Sheng-Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (06) : 3058 - 3070
  • [28] ALWOD: Active Learning for Weakly-Supervised Object Detection
    Wang, Yuting
    Ilic, Velibor
    Li, Jiatong
    Kisacanin, Branislav
    Pavlovic, Vladimir
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 6436 - 6446
  • [29] INSTANCE-AWARE UNCERTAINTY FOR ACTIVE LEARNING IN OBJECT DETECTION
    Zhang, Zhipeng
    Ma, Wenting
    Yuan, Xiaohang
    Hao, Yuan
    Guo, Meng
    Tang, Hongyi
    Zhou, Zhiheng
    Yao, Zhenjie
    2024 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2024, : 298 - 304
  • [30] FAIME: An object-oriented methodology for application plug-and-play
    Chu, B
    Long, JS
    Matthews, M
    Barnes, JG
    Sims, J
    Hamilton, M
    Lambert, R
    JOURNAL OF OBJECT-ORIENTED PROGRAMMING, 1998, 11 (05): : 20 - +