A method of knowledge distillation based on feature fusion and attention mechanism for complex traffic scenes

Cited by: 9
Authors
Li, Cui-jin [1 ,2 ]
Qu, Zhong [1 ]
Wang, Sheng-ye [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[2] Chongqing Inst Engn, Coll Elect Informat, Chongqing 400056, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object detection; Knowledge distillation; Attention mechanism; Feature fusion; Complex traffic scenes;
DOI
10.1016/j.engappai.2023.106533
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
To enable object detectors based on deep learning to run smoothly on terminal devices in complex traffic scenes, model compression has become a research hotspot. Because the student network in knowledge distillation learns from a single source and depends heavily on the design of the loss function, it suffers from parameter sensitivity and other problems; we therefore propose a new knowledge distillation method with a second-order-term attention mechanism and feature fusion of adjacent layers. First, we build a knowledge distillation framework based on YOLOv5 and introduce a new attention mechanism in the backbone of the teacher network to extract the heat map. Then, we combine the heat-map features with the next-level features through a fusion module, merging the useful information of the low convolution layers with the feature maps of the high convolution layers to help the student network obtain the final prediction map. Finally, to improve accuracy on small objects, we add a 160 x 160 detection head and replace the convolutional network of the head with a transformer encoder block. Extensive experimental results show that our method achieves state-of-the-art performance: with the speed and number of parameters unchanged, the average detection accuracy reaches 97.4% on the KITTI test set and 92.7% on the Cityscapes test set.
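The ideas the abstract names — a teacher-side attention (heat) map, fusion of adjacent-level features, and an attention-weighted distillation signal for the student — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the attention here is a simple channel-wise mean of squared activations, the fusion is nearest-neighbor upsampling plus addition, and all function names (`attention_map`, `fuse_adjacent`, `distill_loss`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def attention_map(feat):
    """Spatial attention from a (C, H, W) feature map: channel-wise mean
    of squared activations, normalized to sum to 1. A common simple choice;
    the paper's second-order-term attention block is more elaborate."""
    a = (feat ** 2).mean(axis=0)              # (H, W)
    return a / a.sum()

def fuse_adjacent(low, high):
    """Fuse a coarser, higher-level map into the adjacent lower-level one
    by nearest-neighbor upsampling and elementwise addition (a stand-in
    for the paper's fusion module)."""
    scale_h = low.shape[1] // high.shape[1]
    scale_w = low.shape[2] // high.shape[2]
    up = high.repeat(scale_h, axis=1).repeat(scale_w, axis=2)
    return low + up

def distill_loss(student_feat, teacher_feat):
    """Attention-weighted MSE between student and teacher features:
    the teacher's attention map tells the student which spatial
    locations to match most closely."""
    w = attention_map(teacher_feat)           # (H, W), sums to 1
    diff = (student_feat - teacher_feat) ** 2 # (C, H, W)
    return float((diff * w).sum() / student_feat.shape[0])
```

In a real pipeline these operations would run on YOLOv5 backbone features inside a training loop; the sketch only shows how an attention map can reweight the feature-matching loss and how adjacent pyramid levels can be merged.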
Pages: 11
Related papers
50 records in total
  • [11] AF-ICNet Semantic Segmentation Method for Unstructured Scenes Based on Small Target Category Attention Mechanism and Feature Fusion
    Ai Qinglin
    Zhang Junrui
    Wu Feiqing
    ACTA PHOTONICA SINICA, 2023, 52 (01)
  • [12] Author Correction: Attention and feature transfer based knowledge distillation
    Guoliang Yang
    Shuaiying Yu
    Yangyang Sheng
    Hao Yang
    Scientific Reports, 13
  • [13] Multistage feature fusion knowledge distillation
    Li, Gang
    Wang, Kun
    Lv, Pengfei
    He, Pan
    Zhou, Zheng
    Xu, Chuanyun
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [14] Dynamic Refining Knowledge Distillation Based on Attention Mechanism
    Peng, Xuan
    Liu, Fang
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II, 2022, 13630 : 45 - 58
  • [15] Real-Time Semantic Segmentation Algorithm for Street Scenes Based on Attention Mechanism and Feature Fusion
    Wu, Bao
    Xiong, Xingzhong
    Wang, Yong
    ELECTRONICS, 2024, 13 (18)
  • [16] Feature fusion-based collaborative learning for knowledge distillation
    Li, Yiting
    Sun, Liyuan
    Gou, Jianping
    Du, Lan
    Ou, Weihua
    INTERNATIONAL JOURNAL OF DISTRIBUTED SENSOR NETWORKS, 2021, 17 (11)
  • [17] A speech emotion recognition method for the elderly based on feature fusion and attention mechanism
    Jian, Qijian
    Xiang, Min
    Huang, Wei
    THIRD INTERNATIONAL CONFERENCE ON ELECTRONICS AND COMMUNICATION; NETWORK AND COMPUTER TECHNOLOGY (ECNCT 2021), 2022, 12167
  • [18] Image Geolocation Method Based on Attention Mechanism Front Loading and Feature Fusion
    Lu, Huayuan
    Yang, Chunfang
    Qi, Baojun
    Zhu, Ma
    Xu, Jingqian
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [19] A keypoint-based object detection method with attention mechanism and feature fusion
    Wang, Hui
    Yang, Tangwen
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 2113 - 2118
  • [20] Improving vehicle detection accuracy in complex traffic scenes through context attention and multi-scale feature fusion module
    Liu, Wenbo
    Zhao, Binglin
    Zhu, Yuxin
    Deng, Tao
    Yan, Fei
    APPLIED INTELLIGENCE, 2025, 55 (06)