MF-YOLO: A Lightweight Method for Real-Time Dangerous Driving Behavior Detection

Cited: 0
Authors
Wang, Chen [1 ]
Lin, Mohan [2 ]
Shao, Liang [1 ]
Xiang, Jiawei [1 ]
Affiliations
[1] Wenzhou Univ, Coll Mech & Elect Engn, Wenzhou 325035, Peoples R China
[2] Kean Univ, Coll Liberal Arts, Union, NJ 07083 USA
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Transformers; Vehicles; YOLO; Computational modeling; Convolution; Computer vision; Support vector machines; Convolutional neural networks; Attention mechanism; driver's dangerous behavior; lightweight neural network; multiple fusion;
DOI
10.1109/TIM.2024.3472868
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Dangerous driving behavior is a serious issue that harms drivers and further increases the traffic burden. The You Only Look Once (YOLO) model is a commonly used fast detector suitable for real-time dangerous driving behavior detection, but its detection performance is often poor. To address this problem, a lightweight object detection model called multiple fusion YOLO (MF-YOLO) is proposed, offering superior capability in small-target detection and compatibility with mobile chipsets. First, we design a novel backbone that uses convolution and vision transformer (ViT) multifusion blocks to fuse local and global context information. Second, a lightweight feature pyramid network (FPN) neck is developed to reduce model complexity and enhance feature extraction ability. Third, an attention mechanism is added to the neck so that the YOLO model concentrates on relevant information during feature fusion. Finally, the fractional rectified linear unit (FReLU) activation function is combined with the spatial intersection over union (SIoU) loss function to improve model speed and accuracy. Experimental results on our self-built driving scenario dataset indicate that MF-YOLO achieves a mean average precision (mAP) of 91.4%, surpassing YOLOv5n by 6.4% and the latest YOLOv8n by 2.3%.
Pages: 13
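The abstract describes a backbone assembled from convolution and vision transformer (ViT) multifusion blocks that fuse local and global context. As a rough illustration only, the following minimal PyTorch sketch shows one way such a conv/ViT fusion block could be structured; the class name ConvViTFusionBlock, the depthwise-separable local branch, and the single multi-head self-attention global branch are our own assumptions and do not reproduce the authors' actual MF-YOLO design.

import torch
import torch.nn as nn

class ConvViTFusionBlock(nn.Module):
    # Hypothetical conv + ViT fusion block (not the paper's exact module):
    # a local convolutional branch and a global self-attention branch are
    # concatenated and fused by a 1x1 convolution, with a residual connection.
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise-separable 3x3 convolution keeps the block lightweight.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # Global branch: multi-head self-attention over flattened spatial tokens.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # 1x1 convolution fuses the concatenated local and global feature maps.
        self.fuse = nn.Conv2d(2 * channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                             # (B, C, H, W)
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        global_ctx, _ = self.attn(tokens, tokens, tokens) # (B, H*W, C)
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_ctx], dim=1)) + x

# Example: a 40x40 feature map with 64 channels passes through with unchanged shape.
block = ConvViTFusionBlock(64)
out = block(torch.randn(1, 64, 40, 40))
print(out.shape)  # torch.Size([1, 64, 40, 40])

The depthwise-separable local branch is chosen here purely to keep the sketch lightweight, mirroring the abstract's stated goal of mobile-chipset compatibility; the real MF-YOLO backbone may arrange its convolutional and attention paths differently.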
Related Papers
50 records in total
  • [1] A real-time and lightweight traffic sign detection method based on ghost-YOLO
    Zhang, Shuo
    Che, Shengbing
    Liu, Zhen
    Zhang, Xu
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (17) : 26063 - 26087
  • [2] Lightweight tomato real-time detection method based on improved YOLO and mobile deployment
    Zeng, Taiheng
    Li, Siyi
    Song, Qiming
    Zhong, Fenglin
    Wei, Xuan
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2023, 205
  • [3] TRC-YOLO: A real-time detection method for lightweight targets based on mobile devices
    Wang, Guanbo
    Ding, Hongwei
    Yang, Zhijun
    Li, Bo
    Wang, Yihao
    Bao, Liyong
    IET COMPUTER VISION, 2022, 16 (02) : 126 - 142
  • [4] PHL-YOLO: a real-time lightweight yarn inspection method
    Dai, Jiachao
    Ren, Jia
    Li, Shangjie
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2025, 22 (01)
  • [5] Real-time and lightweight detection of grape diseases based on Fusion Transformer YOLO
    Liu, Yifan
    Yu, Qiudong
    Geng, Shuze
    FRONTIERS IN PLANT SCIENCE, 2024, 15
  • [6] MBAB-YOLO: A Modified Lightweight Architecture for Real-Time Small Target Detection
    Zhang, Jun
    Meng, Yizhen
    Yu, Xiaohui
    Bi, Hongjing
    Chen, Zhipeng
    Li, Huafeng
    Yang, Runtao
    Tian, Jingjun
    IEEE ACCESS, 2023, 11 : 78384 - 78401
  • [7] Real-time face detection based on YOLO
    Wang Yang
    Zheng Jiachun
    PROCEEDINGS OF THE 2018 1ST IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE INNOVATION AND INVENTION (ICKII 2018), 2018, : 221 - 224
  • [8] Poster: Lightweight Features Sharing for Real-Time Object Detection in Cooperative Driving
    Hawlader, Faisal
    Robinet, Francois
    Frank, Raphael
    2023 IEEE VEHICULAR NETWORKING CONFERENCE, VNC, 2023, : 159 - 160
  • [9] Real-Time Hand Detection Method Based on Lightweight Network
    Jin, Fangrui
    Wang, Yangping
    Yong, Jiu
    COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (14) : 192 - 200