Enhanced Object Detection in Autonomous Vehicles through LiDAR-Camera Sensor Fusion

Cited by: 3
Authors
Dai, Zhongmou [1 ,2 ]
Guan, Zhiwei [1 ,3 ]
Chen, Qiang [1 ]
Xu, Yi [4 ,5 ]
Sun, Fengyi [1 ]
Affiliations
[1] Tianjin Univ Technol & Educ, Sch Automobile & Transportat, Tianjin 300222, Peoples R China
[2] Shandong Transport Vocat Coll, Weifang 261206, Peoples R China
[3] Tianjin Sino German Univ Appl Sci, Sch Automobile & Rail Transportat, Tianjin 300350, Peoples R China
[4] Natl & Local Joint Engn Res Ctr Intelligent Vehicl, Tianjin 300222, Peoples R China
[5] QINGTE Grp Co Ltd, Qingdao 266106, Peoples R China
Source
WORLD ELECTRIC VEHICLE JOURNAL, 2024, Vol. 15, No. 7
Keywords
autonomous vehicles; object detection; object tracking; LiDAR-camera fusion; improved DeepSORT; EXTRINSIC CALIBRATION; TRACKING;
DOI
10.3390/wevj15070297
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Classification Codes
0808; 0809;
Abstract
To realize accurate environment perception, the technological key to enabling autonomous vehicles to interact with their external environments, the issues of object detection and tracking during vehicle movement must first be solved. Multi-sensor fusion has become an essential means of overcoming the shortcomings of individual sensor types and improving the efficiency and reliability of autonomous vehicles. This paper puts forward moving-object detection and tracking methods based on LiDAR-camera fusion. Building on the calibration of the camera and LiDAR, it uses the YOLO and PointPillars network models to perform object detection on image and point cloud data, respectively. A target-box intersection-over-union (IoU) matching strategy based on center-point distance probability, together with improved Dempster-Shafer (D-S) theory, is then used to fuse class confidences and obtain the final detection result. For moving-object tracking, the DeepSORT algorithm is improved to address the identity switching that occurs when dynamic objects re-emerge after occlusion: an unscented Kalman filter accurately predicts the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve matching accuracy during data association. Verification on self-collected data shows that the fusion detection and tracking performance is significantly better than that of a single sensor. The improved DeepSORT algorithm achieves 66% MOTA and 79% MOTP, which are 10% and 5% higher, respectively, than those of the original DeepSORT algorithm, effectively solving the tracking instability caused by the occlusion of moving objects.
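The class-confidence fusion step summarized in the abstract can be illustrated with Dempster's rule of combination. The sketch below is not the paper's implementation: it assumes a simplified setting where each sensor assigns mass only to singleton class hypotheses (leaving any remainder as ignorance on the full frame), and the class names and mass values are hypothetical.

```python
# Illustrative sketch of D-S class-confidence fusion for a matched
# camera (YOLO) / LiDAR (PointPillars) detection pair.

def dempster_fuse(m_cam, m_lidar):
    """Combine two basic probability assignments over the same class set.

    m_cam, m_lidar: dicts mapping class name -> mass, each summing to <= 1;
    any unassigned remainder is treated as mass on the full frame (ignorance).
    Returns the normalized fused singleton masses.
    """
    classes = set(m_cam) | set(m_lidar)
    theta_cam = 1.0 - sum(m_cam.values())      # camera's ignorance mass
    theta_lidar = 1.0 - sum(m_lidar.values())  # LiDAR's ignorance mass

    combined = {}
    for c in classes:
        a, b = m_cam.get(c, 0.0), m_lidar.get(c, 0.0)
        # Agreement on c, plus one sensor's belief meeting the other's ignorance.
        combined[c] = a * b + a * theta_lidar + b * theta_cam

    # Conflict K: mass assigned to incompatible singleton pairs.
    k = sum(m_cam.get(c1, 0.0) * m_lidar.get(c2, 0.0)
            for c1 in classes for c2 in classes if c1 != c2)
    if k >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    return {c: v / (1.0 - k) for c, v in combined.items()}

fused = dempster_fuse({"car": 0.7, "pedestrian": 0.1},
                      {"car": 0.8, "pedestrian": 0.05})
```

Note how agreement between the two sensors reinforces the fused confidence: here the fused "car" mass exceeds either sensor's individual confidence, which is the behavior the combination rule is chosen for.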
Pages: 24
Related Papers
50 in total
  • [41] Motion Guided LiDAR-Camera Self-calibration and Accelerated Depth Upsampling for Autonomous Vehicles
    Castorena, Juan
    Puskorius, Gintaras V.
    Pandey, Gaurav
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 100 (3-4) : 1129 - 1138
  • [43] An advanced object classification strategy using YOLO through camera and LiDAR sensor fusion
    Kim, Jinsoo
    Kim, Jongwon
    Cho, Jeongho
    2019 13TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), 2019,
  • [44] BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework
    Liang, Tingting
    Xie, Hongwei
    Yu, Kaicheng
    Xia, Zhongyu
    Lin, Zhiwei
    Wang, Yongtao
    Tang, Tao
    Wang, Bing
    Tang, Zhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [45] Camera/LiDAR Sensor Fusion-based Autonomous Navigation
    Yusefi, Abdullah
    Durdu, Akif
    Toy, Ibrahim
    2024 23RD INTERNATIONAL SYMPOSIUM INFOTEH-JAHORINA, INFOTEH, 2024,
  • [46] LIDAR-camera fusion for road detection using fully convolutional neural networks
    Caltagirone, Luca
    Bellone, Mauro
    Svensson, Lennart
Wahde, Mattias
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 111 : 125 - 131
  • [47] Multi-Stage Residual Fusion Network for LIDAR-Camera Road Detection
    Yu, Dameng
    Xiong, Hui
    Xu, Qing
    Wang, Jianqiang
    Li, Keqiang
    2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19), 2019, : 2323 - 2328
  • [48] Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving
    Huang, Kemiao
    Hao, Qi
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 6983 - 6989
  • [49] Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications
    Zhao, Xiangmo
    Sun, Pengpeng
    Xu, Zhigang
    Min, Haigen
    Yu, Hongkai
    IEEE SENSORS JOURNAL, 2020, 20 (09) : 4901 - 4913
  • [50] A Sensor Fusion System with Thermal Infrared Camera and LiDAR for Autonomous Vehicles: Its Calibration and Application
    Choi, Ji Dong
    Kim, Min Young
    12TH INTERNATIONAL CONFERENCE ON UBIQUITOUS AND FUTURE NETWORKS (ICUFN 2021), 2021, : 361 - 365