Efficient Object Detection and Recognition of Body Welding Studs Based on Improved YOLOv7

Cited: 0
Authors
Huang, Hong [1 ]
Peng, Xiangqian [1 ]
Hu, Xiaoping [2 ]
Ou, Wenchu [1 ]
Affiliations
[1] Hunan Univ Sci & Technol, Sch Mech Engn, Xiangtan 411201, Peoples R China
[2] Prov Key Lab Hlth Maintenance Mech Equipment, Xiangtan 411201, Peoples R China
Source
IEEE ACCESS, 2024, Vol. 12
Funding
National Natural Science Foundation of China;
Keywords
Welding studs; object detection; EfficientFormerV2; NWD; YOLOv7;
DOI
10.1109/ACCESS.2024.3376473
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Welding studs are widely used parts in automobile manufacturing, and their welding quality plays a crucial role in component assembly efficiency and overall vehicle quality. In welding stud object detection, the complex body environment and varying lighting conditions affect detection accuracy, and most existing methods offer limited efficiency. To address the low accuracy and slow speed of stud detection, this paper proposes an improved welding stud detection method based on YOLOv7. First, the EfficientFormerV2 backbone network is adopted, using the new partial convolution to extract spatial features more efficiently, reduce redundant computation, and increase detection speed. Second, the bounding box loss function is replaced with NWD, which reduces the loss value, accelerates network convergence, and further improves stud detection. In tests, the improved YOLOv7 outperforms the baseline network in both speed and accuracy of welding stud detection: (1) mAP0.5 increases from 94.6% to 95.2%, and mAP0.5:0.95 increases from 63.7% to 65.4%; (2) detection speed increases from 96.1 f/s to 147.1 f/s. These results can provide technical support for subsequent automatic detection and position estimation of body welding studs.
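Since the abstract describes the two modifications only at a high level, the following is a minimal PyTorch sketch of what they typically look like. This is not the authors' released code: the `PartialConv` module, the `nwd_loss` function, the channel ratio, and the constant `c` are illustrative assumptions based on the published descriptions of partial convolution and the Normalized Wasserstein Distance.

```python
# Hypothetical sketch of the two modifications named in the abstract:
# a partial convolution block and an NWD-based bounding-box loss.
# Shapes, the channel ratio, and the constant `c` are assumptions.
import torch
import torch.nn as nn


class PartialConv(nn.Module):
    """Convolve only a fraction of the channels; pass the rest through unchanged.

    Applying a 3x3 convolution to a subset of channels reduces FLOPs and memory
    access, which is the efficiency argument the abstract refers to.
    """

    def __init__(self, channels: int, conv_ratio: float = 0.25):
        super().__init__()
        self.conv_channels = int(channels * conv_ratio)
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension and convolve only the first part.
        x1, x2 = torch.split(
            x, [self.conv_channels, x.size(1) - self.conv_channels], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)


def nwd_loss(pred: torch.Tensor, target: torch.Tensor, c: float = 12.8) -> torch.Tensor:
    """NWD-style box loss: 1 - exp(-W2 / c).

    `pred` and `target` are (N, 4) boxes in (cx, cy, w, h) format. Each box is
    modeled as a 2-D Gaussian N([cx, cy], diag(w/2, h/2)), and W2 is the
    second-order Wasserstein distance between the two Gaussians. The constant
    `c` is dataset-dependent (assumed here).
    """
    delta_center = pred[:, :2] - target[:, :2]
    delta_wh = (pred[:, 2:] - target[:, 2:]) / 2.0
    w2 = torch.sqrt((delta_center ** 2).sum(dim=1)
                    + (delta_wh ** 2).sum(dim=1) + 1e-7)
    return (1.0 - torch.exp(-w2 / c)).mean()
```

In a typical setup of this kind, `1 - NWD` would replace the IoU-based box regression term in the YOLOv7 loss while the objectness and classification terms remain unchanged; how the authors wired it into their training pipeline is not specified in the abstract.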
Pages: 41531-41541
Page count: 11