Object Detection Model Based on Attention Mechanism

Times Cited: 0
Authors
Han, Mengxue [1 ,2 ]
Tang, Xiangyan [1 ,2 ]
Yang, Yue [2 ,3 ]
Huang, Zhennan [4 ]
Affiliations
[1] Hainan Univ, Sch Comp Sci & Technol, Haikou 570228, Hainan, Peoples R China
[2] Hainan Blockchain Technol Engn Res Ctr, Haikou 570228, Hainan, Peoples R China
[3] Hainan Univ, Sch Cyberspace Secur Acad, Cryptog Acad, Haikou 570228, Hainan, Peoples R China
[4] Officers Coll PAP, Informat Secur Major, Chengdu 610000, Peoples R China
Source
Funding
National Natural Science Foundation of China; Hainan Provincial Natural Science Foundation
Keywords
Attention mechanism; Object detection; Multi-scale information fusion;
DOI
10.1007/978-981-97-4387-2_6
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Object detection is a key research direction in computer vision. Its purpose is to determine, with an appropriate detection method, whether an image contains a specific target and to return the position of that target in the image. The field has extensive applications in autonomous driving, medical diagnosis, satellite imagery, and more. To address the problems of existing models, such as insufficient receptive fields, weak detection of targets, noise interference during feature extraction, and false or missed detections caused by environmental interference, this paper proposes an object detection model based on an attention mechanism and builds an object detection system on top of it. The paper first introduces the attention mechanism and the relevant datasets. After analyzing the basic framework of You Only Look Once version 7 (YOLOv7), it proposes inserting an attention mechanism after the three feature maps output by the backbone network; the model structure with the highest accuracy is then selected through controlled comparative experiments. The experimental results show that adding an attention mechanism improves the detection accuracy of the YOLOv7 architecture: the average accuracy of three different models on the PASCAL VOC dataset increases by 1.67%, 2.09%, and 2.11%, respectively.
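A minimal sketch of the architectural change the abstract describes: attaching an attention block to each of the three feature maps a YOLOv7-style backbone passes to its neck. The abstract does not state which attention module or channel widths were used, so the SE-style block, the P3/P4/P5 channel sizes, and the AttentionNeckAdapter wrapper below are illustrative assumptions, not the authors' implementation.

# Sketch only: channel attention applied to the three backbone outputs of a
# YOLOv7-style detector before they enter the neck. The backbone is stubbed
# with random tensors; layer names and channel widths are assumptions.
import torch
import torch.nn as nn


class SEAttention(nn.Module):
    """Squeeze-and-Excitation style channel attention (one common choice)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-weight channels, shape unchanged


class AttentionNeckAdapter(nn.Module):
    """Wraps the three backbone outputs (P3, P4, P5) with attention before the neck."""

    def __init__(self, channels=(512, 1024, 1024)):   # assumed YOLOv7 P3/P4/P5 widths
        super().__init__()
        self.attn = nn.ModuleList(SEAttention(c) for c in channels)

    def forward(self, feats):
        # feats: tuple of the three feature maps produced by the backbone
        return tuple(att(f) for att, f in zip(self.attn, feats))


if __name__ == "__main__":
    # Fake backbone outputs for a 640x640 input, just to show shapes flow through.
    p3 = torch.randn(1, 512, 80, 80)
    p4 = torch.randn(1, 1024, 40, 40)
    p5 = torch.randn(1, 1024, 20, 20)
    out = AttentionNeckAdapter()((p3, p4, p5))
    print([o.shape for o in out])                     # shapes unchanged; channels re-weighted

Because the attention blocks preserve tensor shapes, they can be dropped between the backbone and neck of an existing detector without altering the rest of the network, which matches the comparative-experiment setup the abstract describes.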
Pages: 74-88
Number of Pages: 15