Structured Knowledge Distillation for Accurate and Efficient Object Detection

Cited by: 3
Authors
Zhang, Linfeng [1 ]
Ma, Kaisheng [1 ]
Affiliation
[1] Tsinghua Univ, Inst Interdisciplinary Informat Sci, Beijing 100084, Peoples R China
Keywords
Attention; instance segmentation; knowledge distillation; model acceleration and compression; non-local module; object detection; student-teacher learning
DOI
10.1109/TPAMI.2023.3300470
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Knowledge distillation, which aims to transfer the knowledge learned by a cumbersome teacher model to a lightweight student model, has become one of the most popular and effective techniques in computer vision. However, many previous knowledge distillation methods are designed for image classification and fail on more challenging tasks such as object detection. In this paper, we first suggest that the failure of knowledge distillation on object detection is mainly caused by two reasons: (1) the imbalance between foreground and background pixels and (2) the lack of distillation of the relations among different pixels. We then propose a structured knowledge distillation scheme, comprising attention-guided distillation and non-local distillation, to address these two issues respectively. Attention-guided distillation uses an attention mechanism to locate the crucial pixels of foreground objects and makes the student devote more effort to learning their features. Non-local distillation enables the student to learn not only the features of individual pixels but also the relations between different pixels, as captured by non-local modules. Experimental results demonstrate the effectiveness of our method on thirteen object detection models with twelve comparison methods, for both object detection and instance segmentation. For instance, Faster RCNN with our distillation achieves 43.9 mAP on MS COCO2017, 4.1 mAP higher than the baseline. Additionally, we show that our method also benefits the robustness and domain generalization ability of detectors. Code and model weights have been released on GitHub.
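As a rough illustration of the two losses described in the abstract, the NumPy sketch below is a minimal, hypothetical rendering, not the authors' released implementation: attention-guided distillation re-weights a per-pixel feature imitation loss by a spatial attention map derived from teacher activations, and non-local (relation) distillation matches pairwise pixel affinities between teacher and student. The function names, the use of a teacher-only attention map, and the softmax temperature are illustrative assumptions; the paper uses learned non-local modules and attention from both networks.

```python
import numpy as np

def attention_guided_distill_loss(teacher_feat, student_feat, temperature=0.5):
    """Re-weight per-pixel feature imitation by a teacher-derived
    spatial attention map, so high-activation (foreground) pixels
    dominate the objective. Features have shape (C, H, W)."""
    C, H, W = teacher_feat.shape
    # Spatial attention: mean absolute activation over channels.
    attn = np.abs(teacher_feat).mean(axis=0)                  # (H, W)
    # Temperature-scaled softmax over all spatial positions,
    # rescaled so the mean weight equals 1.
    logits = attn.ravel() / temperature
    weights = np.exp(logits - logits.max())
    weights = weights / weights.sum() * (H * W)
    weights = weights.reshape(H, W)
    # Attention-weighted squared feature difference.
    diff = ((teacher_feat - student_feat) ** 2).mean(axis=0)  # (H, W)
    return float((weights * diff).mean())

def relation_distill_loss(teacher_feat, student_feat):
    """Match pairwise pixel-pixel affinities (a stand-in for the
    relations captured by non-local modules in the paper)."""
    def affinity(f):
        C, H, W = f.shape
        x = f.reshape(C, H * W)
        # Normalize each pixel's feature vector, then take dot products.
        x = x / (np.linalg.norm(x, axis=0, keepdims=True) + 1e-8)
        return x.T @ x                                        # (HW, HW)
    return float(((affinity(teacher_feat) - affinity(student_feat)) ** 2).mean())
```

In training, both terms would be added to the detector's task loss; here they simply return non-negative scalars that vanish when student features equal teacher features.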
Pages: 15706-15724
Page count: 19