ROBUST AND ACCURATE OBJECT DETECTION VIA SELF-KNOWLEDGE DISTILLATION

Times Cited: 0
Authors
Xu, Weipeng [1]
Chu, Pengzhi [1]
Xie, Renhao [1]
Xiao, Xiongziyan [1]
Huang, Hongcheng [1]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Keywords
deep learning; object detection; adversarial robustness; knowledge distillation;
DOI
10.1109/ICIP46576.2022.9898031
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Object detection has achieved promising performance on clean datasets, but how to achieve a better trade-off between adversarial robustness and clean precision remains underexplored. Adversarial training is the mainstream method for improving robustness, but most adversarial training methods sacrifice clean precision relative to standard training. In this paper, we propose Unified Decoupled Feature Alignment (UDFA), a novel fine-tuning paradigm that achieves better performance than existing methods by fully exploring the combination of self-knowledge distillation and adversarial training for object detection. Extensive experiments on the PASCAL-VOC and MS-COCO benchmarks show that UDFA surpasses both standard training and state-of-the-art adversarial training methods for object detection. For example, compared with the teacher detector, our approach on GFLV2 with ResNet-50 improves clean precision by 2.2 AP on PASCAL-VOC; compared with state-of-the-art adversarial training methods, it improves clean precision by 1.6 AP while improving adversarial robustness by 0.5 AP. Our code is available at https://github.com/grispeut/udfa.
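The abstract describes UDFA only at a high level. As a rough illustration of the general recipe it builds on (self-knowledge distillation combined with adversarial fine-tuning), the sketch below shows one training step in PyTorch. This is not the authors' implementation; for that, see https://github.com/grispeut/udfa. The detector interface (`backbone`, `detection_loss`), the PGD hyperparameters, and the plain MSE feature-alignment loss are all assumptions made for this sketch; UDFA's actual decoupled alignment is more involved.

```python
# Illustrative sketch only, NOT the UDFA implementation (see
# https://github.com/grispeut/udfa). The detector API used here
# (`model.backbone`, `model.detection_loss`) is assumed for the example.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, targets, eps=8/255, alpha=2/255, steps=3):
    """A few PGD steps on the detection loss to craft adversarial images."""
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = model.detection_loss(adv, targets)   # assumed detector API
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()    # ascend the loss
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()

def self_distill_adv_step(student, teacher, images, targets, optimizer,
                          kd_weight=1.0):
    """One fine-tuning step: detection loss on adversarial images plus
    alignment of student features to a frozen self-teacher on clean images."""
    adv_images = pgd_attack(student, images, targets)
    with torch.no_grad():
        t_feat = teacher.backbone(images)           # clean-image teacher features
    s_feat = student.backbone(adv_images)           # adversarial student features
    task_loss = student.detection_loss(adv_images, targets)
    kd_loss = F.mse_loss(s_feat, t_feat)            # simple feature alignment
    loss = task_loss + kd_weight * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The "self" in self-distillation: the teacher is a frozen copy of the same
# pretrained detector that the student is fine-tuned from, e.g.
#   teacher = copy.deepcopy(pretrained_detector).eval()
```

The key idea this sketch captures is that fine-tuning starts from a clean-trained detector whose own features serve as the distillation target, which is how a method of this shape can recover clean precision that plain adversarial training gives up.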
Pages: 91-95
Page count: 5
Related Papers
50 records in total (showing [21]-[30])
  • [21] Noisy Self-Knowledge Distillation for Text Summarization
    Liu, Yang
    Shen, Sheng
    Lapata, Mirella
    [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 692 - 703
  • [22] KD-SCFNet: Towards more accurate and lightweight salient object detection via knowledge distillation
    Zhang, Jin
    Shi, Yanjiao
    Yang, Jinyu
    Guo, Qianqian
    [J]. NEUROCOMPUTING, 2024, 572
  • [23] Weakly Supervised Referring Expression Grounding via Dynamic Self-Knowledge Distillation
    Mi, Jinpeng
    Chen, Zhiqian
    Zhang, Jianwei
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 1254 - 1260
  • [24] Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
    Ji, Mingi
    Shin, Seungjae
    Hwang, Seunghyun
    Park, Gibeom
    Moon, Il-Chul
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 10659 - 10668
  • [25] Enhancing deep feature representation in self-knowledge distillation via pyramid feature refinement
    Yu, Hao
    Feng, Xin
    Wang, Yunlong
    [J]. PATTERN RECOGNITION LETTERS, 2024, 178 : 35 - 42
  • [26] Truth and Consequences: The Costs and Benefits of Accurate Self-Knowledge
    Brown, J. D.
    Dutton, K. A.
    [J]. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN, 1995, 21 (12) : 1288 - 1296
  • [27] Siamese Sleep Transformer For Robust Sleep Stage Scoring With Self-knowledge Distillation and Selective Batch Sampling
    Kwak, Heon-Gyu
    Kweon, Young-Seok
    Shin, Gi-Hwan
    [J]. 2023 11TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI, 2023,
  • [28] MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition
    Yang, Chuanguang
    An, Zhulin
    Zhou, Helong
    Cai, Linhang
    Zhi, Xiang
    Wu, Jiwen
    Xu, Yongjun
    Zhang, Qian
    [J]. COMPUTER VISION, ECCV 2022, PT XXIV, 2022, 13684 : 534 - 551
  • [29] A Novel Small Target Detection Strategy: Location Feature Extraction in the Case of Self-Knowledge Distillation
    Liu, Gaohua
    Li, Junhuan
    Yan, Shuxia
    Liu, Rui
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [30] Self-Knowledge Distillation for First Trimester Ultrasound Saliency Prediction
    Gridach, Mourad
    Savochkina, Elizaveta
    Drukker, Lior
    Papageorghiou, Aris T.
    Noble, J. Alison
    [J]. SIMPLIFYING MEDICAL ULTRASOUND, ASMUS 2022, 2022, 13565 : 117 - 127