ROBUST AND ACCURATE OBJECT DETECTION VIA SELF-KNOWLEDGE DISTILLATION
Cited by: 0
Authors:
Xu, Weipeng [1]
Chu, Pengzhi [1]
Xie, Renhao [1]
Xiao, Xiongziyan [1]
Huang, Hongcheng [1]
Affiliations:
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Keywords:
deep learning;
object detection;
adversarial robustness;
knowledge distillation;
DOI:
10.1109/ICIP46576.2022.9898031
Chinese Library Classification (CLC): TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Object detection has achieved promising performance on clean datasets, but how to achieve a better trade-off between adversarial robustness and clean precision remains underexplored. Adversarial training is the mainstream method for improving robustness, but most existing approaches sacrifice clean precision, relative to standard training, in order to gain robustness. In this paper, we propose Unified Decoupled Feature Alignment (UDFA), a novel fine-tuning paradigm that achieves better performance than existing methods by fully exploring the combination of self-knowledge distillation and adversarial training for object detection. Extensive experiments on the PASCAL-VOC and MS-COCO benchmarks show that UDFA surpasses both standard training and state-of-the-art adversarial training methods for object detection. For example, compared with the teacher detector, our approach on GFLV2 with ResNet-50 improves clean precision by 2.2 AP on PASCAL-VOC; compared with SOTA adversarial training methods, it improves clean precision by 1.6 AP while improving adversarial robustness by 0.5 AP. Our code is available at https://github.com/grispeut/udfa.
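The sketch below is not the authors' UDFA implementation (see the linked repository for that); it only illustrates, under stated assumptions, the general recipe the abstract describes: fine-tuning a detector with a detection loss on adversarial images while aligning its features, on both clean and adversarial branches, to a frozen standardly trained copy of the same detector (self-knowledge distillation). The FGSM attack, the MSE feature-alignment loss, and the helper names (`fgsm_attack`, `det_head`, `kd_weight`) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of adversarial fine-tuning with self-knowledge distillation
# via feature alignment. All helper names and loss choices are assumptions
# for illustration, not the paper's UDFA implementation.
import torch
import torch.nn.functional as F


def fgsm_attack(forward_fn, images, targets, det_loss_fn, eps=2.0 / 255):
    """Single-step FGSM: perturb images in the direction that increases the detection loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = det_loss_fn(forward_fn(images), targets)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).clamp(0, 1).detach()


def feature_alignment_loss(student_feat, teacher_feat):
    """Align student features to the frozen teacher's features (MSE as a simple choice)."""
    return F.mse_loss(student_feat, teacher_feat.detach())


def fine_tune_step(student, teacher, det_head, images, targets,
                   det_loss_fn, optimizer, kd_weight=1.0):
    """One fine-tuning step: detection loss on adversarial images plus
    self-distillation from a standardly trained copy of the same backbone."""
    teacher.eval()
    adv_images = fgsm_attack(lambda x: det_head(student(x)),
                             images, targets, det_loss_fn)

    # Detection loss on the adversarial branch.
    adv_feat = student(adv_images)
    loss_det = det_loss_fn(det_head(adv_feat), targets)

    # Self-KD: align clean and adversarial student features with the teacher.
    clean_feat = student(images)
    with torch.no_grad():
        teacher_feat = teacher(images)
    loss_kd = (feature_alignment_loss(clean_feat, teacher_feat) +
               feature_alignment_loss(adv_feat, teacher_feat))

    loss = loss_det + kd_weight * loss_kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, "decoupled" alignment refers to treating the clean and adversarial feature branches as separate distillation targets rather than a single mixed loss; the exact decomposition used by UDFA is defined in the paper itself.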
Pages: 91 - 95
Number of pages: 5