A method of knowledge distillation based on feature fusion and attention mechanism for complex traffic scenes

Cited: 9
Authors
Li, Cui-jin [1 ,2 ]
Qu, Zhong [1 ]
Wang, Sheng-ye [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[2] Chongqing Inst Engn, Coll Elect Informat, Chongqing 400056, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object detection; Knowledge distillation; Attention mechanism; Feature fusion; Complex traffic scenes;
DOI
10.1016/j.engappai.2023.106533
CLC number
TP [Automation technology; computer technology];
Subject classification code
0812;
Abstract
To enable object detectors based on deep learning to run smoothly on terminal devices in complex traffic scenes, model compression has become a research hotspot. Considering that in knowledge distillation the student network learns from a single source, and that dependence on loss-function design leads to parameter sensitivity and other problems, we propose a new knowledge distillation method with a second-order-term attention mechanism and feature fusion of adjacent layers. First, we build a knowledge distillation framework based on YOLOv5 and propose a new attention mechanism in the teacher network backbone to extract the heat map. Then, we combine the heat-map features with the next-level features through a fusion module, fusing the useful information of the low convolution layer with the feature map of the high convolution layer to help the student network obtain the final prediction map. Finally, to improve accuracy on small objects, we add a 160 x 160 detection head and replace the convolutional network of the head with a transformer encoder block. Extensive experiments show that our method achieves state-of-the-art performance: with the speed and number of parameters unchanged, the average detection accuracy reaches 97.4% on the KITTI test set and 92.7% on the Cityscapes test set.
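The abstract describes attention-guided feature distillation with fusion of adjacent teacher layers. A minimal NumPy sketch of that idea follows; the function names, the exact form of the second-order attention term, and the nearest-neighbor upsampling are hypothetical illustrations, since the abstract does not give the paper's precise formulas.

```python
import numpy as np

def attention_map(feat):
    # Spatial attention aggregated over channels: mean |F| plus a
    # second-order (squared) term, normalized to sum to 1.
    # (Hypothetical form; the paper's exact definition is not in the abstract.)
    a = np.abs(feat).mean(axis=0) + (feat ** 2).mean(axis=0)
    return a / a.sum()

def fuse_adjacent(low, high):
    # Upsample the coarser high-level feature map by nearest-neighbor
    # repetition so it matches the low-level spatial size, then add.
    scale = low.shape[1] // high.shape[1]
    up = high.repeat(scale, axis=1).repeat(scale, axis=2)
    return low + up

def distill_loss(teacher_low, teacher_high, student_feat):
    # The teacher's attention weights where the student must match
    # the fused teacher features most closely.
    fused = fuse_adjacent(teacher_low, teacher_high)   # (C, H, W)
    att = attention_map(fused)                         # (H, W), sums to 1
    sq_err = (fused - student_feat) ** 2               # (C, H, W)
    return float((sq_err.mean(axis=0) * att).sum())

rng = np.random.default_rng(0)
t_low = rng.standard_normal((8, 16, 16))   # low-level teacher features
t_high = rng.standard_normal((8, 8, 8))    # adjacent high-level features
s_feat = rng.standard_normal((8, 16, 16))  # student features
print(distill_loss(t_low, t_high, s_feat))
```

In a real pipeline this scalar would be added, with a weighting coefficient, to the detector's task loss during student training.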
Pages: 11
Related papers
50 records in total
  • [41] Knowledge Distillation Based on Positive-Unlabeled Classification and Attention Mechanism
    Tang, Jialiang
    Liu, Mingjin
    Jiang, Ning
    Yu, Wenxin
    Yang, Changzheng
    Zhou, Jinjia
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [42] Apple Leaf Disease Diagnosis Based on Knowledge Distillation and Attention Mechanism
    Dong, Qin
    Gu, Rongchen
    Chen, Shuting
    Zhu, Jinxin
    IEEE ACCESS, 2024, 12 : 65154 - 65165
  • [43] A Novel Target Tracking Scheme Based on Attention Mechanism in Complex Scenes
    Wang, Yu
    Yang, Zhutian
    Yang, Wei
    Yang, Jiamin
    ELECTRONICS, 2022, 11 (19)
  • [44] Pomelo Tree Detection Method Based on Attention Mechanism and Cross-Layer Feature Fusion
    Yuan, Haotian
    Huang, Kekun
    Ren, Chuanxian
    Xiong, Yongzhu
    Duan, Jieli
    Yang, Zhou
    REMOTE SENSING, 2022, 14 (16)
  • [45] Method for multi-band image feature-level fusion based on the attention mechanism
    Yang, Xiaoli
    Lin, Suzhen
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2020, 47 (01): : 120 - 127
  • [46] Bearing Fault Diagnosis Method Based on Attention Mechanism and Multi-Channel Feature Fusion
    Gao, Hongfeng
    Ma, Jie
    Zhang, Zhonghang
    Cai, Chaozhi
    IEEE ACCESS, 2024, 12 : 45011 - 45025
  • [47] Multiscale knowledge distillation with attention based fusion for robust human activity recognition
    Yuan, Zhaohui
    Yang, Zhengzhe
    Ning, Hao
    Tang, Xiangyang
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [48] Multiscale network based on feature fusion for fire disaster detection in complex scenes
    Feng, Jian
    Sun, Yu
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 240
  • [49] Object Detection Algorithm for Complex Road Scenes Based on Adaptive Feature Fusion
    Ran, Xiansheng
    Su, Shanjie
    Chen, Junhao
    Zhang, Zhiyun
    Computer Engineering and Applications, 2023, 59 (24) : 216 - 226
  • [50] Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms
    Li, Linfeng
    Su, Weixing
    Liu, Fang
    He, Maowei
    Liang, Xiaodan
    NEURAL PROCESSING LETTERS, 2023, 55 (05) : 6165 - 6180