Employing feature mixture for active learning of object detection

Cited: 0
Authors
Zhang, Licheng [1 ]
Lam, Siew-Kei [2 ]
Luo, Dingsheng [3 ]
Wu, Xihong [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Inst Neurosci, Shenzhen 518055, Guangdong, Peoples R China
[2] Nanyang Technol Univ, Singapore 639798, Singapore
[3] Peking Univ, Beijing 100871, Peoples R China
Keywords
Active learning; Object detection; Feature mixture; SSD;
DOI
10.1016/j.neucom.2024.127883
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Active learning aims to select the most informative samples for annotation from a large pool of unlabeled data, in order to reduce time-consuming and labor-intensive manual labeling. Although active learning for object detection has made substantial progress in recent years, developing an accurate and efficient active learning algorithm for object detection remains a challenge. In this paper, we propose a novel unsupervised active learning method for deep object detection. It is based on our hypothesis that an object is more likely to be wrongly predicted by the model if its prediction changes when its feature representation is slightly mixed with another feature representation at a very small ratio. Such unlabeled samples can be regarded as informative samples to be selected by active learning. Our method employs base representations of all categories, generated from the object detection network, to examine the robustness of every detected object, and we design a scoring function to compute an informativeness score for each unlabeled image. We conduct extensive experiments on two public datasets, i.e., PASCAL VOC and MS-COCO. Experimental results show that our approach consistently outperforms state-of-the-art single-model methods by significant margins, and performs on par with multi-model methods at a much lower computational cost.
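The abstract's core idea — slightly mixing an object's feature representation with per-category base representations and checking whether the prediction flips — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mixing ratio `lam`, the `predict` callable, and the max-over-objects aggregation in `image_score` are all assumptions for the sake of the example.

```python
import numpy as np

def mixture_instability(feat, base_feats, predict, lam=0.05):
    """Fraction of base representations whose admixture flips the prediction.

    feat       : (d,) feature vector of one detected object.
    base_feats : (C, d) per-category base representations.
    predict    : callable mapping a feature vector to a class label
                 (stand-in for the detector's classification head).
    lam        : small mixing ratio (hypothetical default).
    """
    orig = predict(feat)
    flips = 0
    for b in base_feats:
        mixed = (1.0 - lam) * feat + lam * b  # slight feature mixture
        if predict(mixed) != orig:            # prediction changed -> fragile
            flips += 1
    return flips / len(base_feats)

def image_score(object_feats, base_feats, predict, lam=0.05):
    # Aggregate per-object instability into an image-level informativeness
    # score (max over detected objects; the paper's actual scoring function
    # may aggregate differently).
    return max(mixture_instability(f, base_feats, predict, lam)
               for f in object_feats)
```

Images with the highest scores would then be sent for annotation: a prediction that flips under a tiny feature perturbation suggests the model is uncertain about that object.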
Pages: 10