Unsupervised Object Detection Pretraining with Joint Object Priors Generation and Detector Learning

Cited: 0
Authors
Wang, Yizhou [1 ,3 ]
Chen, Meilin [1 ,3 ]
Tang, Shixiang [2 ]
Zhu, Feng [3 ]
Yang, Haiyang [5 ]
Bai, Lei [4 ]
Zhao, Rui [3 ,6 ]
Yan, Yunfeng [1 ]
Qi, Donglian [1 ]
Ouyang, Wanli [2 ,4 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Univ Sydney, Sydney, NSW, Australia
[3] SenseTime Res, Hong Kong, Peoples R China
[4] Shanghai AI Lab, Shanghai, Peoples R China
[5] Nanjing Univ, Nanjing, Peoples R China
[6] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised pretraining methods for object detection aim to learn object discrimination and localization from large amounts of unlabeled images. Recent works typically design pretext tasks that supervise the detector to predict defined object priors, and they usually rely on heuristic methods, e.g., selective search, to produce those priors. This separates prior generation from detector learning and leads to sub-optimal solutions. In this work, we propose a novel object detection pretraining framework that generates object priors and learns the detector jointly, producing accurate object priors from the model itself. Specifically, region priors are extracted from the encoder's attention maps, which highlight foreground regions. Instance priors are selected from the high-quality output bounding boxes of the detection decoder. By treating objects as instances in the foreground, we generate object priors from both region and instance priors. Moreover, our object priors are refined jointly with detector optimization: better object priors as supervision yield better detection capability, which in turn improves object prior generation. Our method improves competitive approaches by +1.3 AP and +1.7 AP on 1% and 10% COCO low-data object detection, respectively.
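The fusion of region priors (encoder attention) and instance priors (decoder boxes) described in the abstract can be sketched roughly as below. This is an illustrative assumption, not the paper's exact procedure: the function name `generate_object_priors`, the quantile-based binarization of the attention map, and all thresholds are hypothetical choices standing in for details given in the full paper.

```python
import numpy as np

def generate_object_priors(attn_map, boxes, scores,
                           fg_quantile=0.7, overlap_thresh=0.5, score_thresh=0.5):
    """Hypothetical sketch: fuse a region prior (attention map) with
    instance priors (decoder boxes) into pseudo ground-truth boxes.

    attn_map : (H, W) float array, encoder attention over the image
    boxes    : (N, 4) array of [x0, y0, x1, y1] decoder outputs
    scores   : (N,) confidence of each decoder box
    """
    # Region prior: binarize the attention map at a quantile so that
    # high-attention pixels are marked as foreground.
    fg_mask = attn_map >= np.quantile(attn_map, fg_quantile)

    kept = []
    for box, score in zip(boxes, scores):
        # Instance prior: keep only high-quality (confident) decoder boxes.
        if score < score_thresh:
            continue
        x0, y0, x1, y1 = map(int, box)
        patch = fg_mask[y0:y1, x0:x1]
        # "Objects are instances in the foreground": keep a box only if
        # enough of its area overlaps the attention foreground.
        if patch.size and patch.mean() >= overlap_thresh:
            kept.append(box)
    return np.array(kept, dtype=float).reshape(-1, 4)
```

In the joint scheme the paper describes, these pseudo boxes would supervise the detector, and the improved encoder attention and decoder outputs would then produce better priors on the next round.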
Pages: 14