Regional feature fusion for on-road detection of objects using camera and 3D-LiDAR in high-speed autonomous vehicles

Cited by: 75
Authors
Wu, Qingyu [1 ]
Li, Xiaoxiao [1 ]
Wang, Kang [2 ]
Bilal, Hazrat [3 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[2] China Mobile Zhejiang Innovat Res Co Ltd, Hangzhou 310030, Zhejiang, Peoples R China
[3] Univ Sci & Technol China, Dept Automat, Hefei 2300271, Peoples R China
Keywords
Autonomous vehicle; Object detection; 3D LiDAR; CNN; Feature extraction; Regional features
DOI
10.1007/s00500-023-09278-3
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Autonomous vehicles require accurate and fast perception systems to make driving decisions and understand the driving environment. 2D object detection is critical for this, but it lacks depth information, which is crucial for understanding the driving scene. 3D object detection is therefore essential for predicting the location of objects, yet it faces its own challenges, such as scale changes and occlusions. In this study, a novel object detection method is presented that fuses the complementary information of 2D and 3D detection to accurately detect objects around autonomous vehicles. First, the 3D-LiDAR data are projected into image space. Second, a region proposal network (RPN) is used to generate regions of interest (ROIs), and an ROI pooling layer maps each ROI onto the ResNet50 feature extractor to obtain a fixed-size feature map. To accurately predict the dimensions of all objects, the 3D-LiDAR features are fused with the regional features obtained from the camera images, and the fused features are fed into a faster region-based convolutional neural network (Faster R-CNN) for object detection. Evaluation on the KITTI object detection dataset shows that the method detects cars, vans, trucks, pedestrians and cyclists with average precisions of 94.59%, 82.50%, 79.60%, 85.31% and 86.33%, respectively, which is better than most previous methods. Moreover, the average processing time of the proposed method is only 70 ms, which meets the real-time demand of autonomous vehicles. The proposed model runs at 15.8 frames per second (FPS), faster than state-of-the-art 3D-LiDAR and camera fusion methods.
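The abstract outlines the pipeline but gives no implementation detail. As an illustration only, the following minimal Python sketch shows the first step, projecting 3D-LiDAR points into the camera image plane; the calibration matrix names and shapes (Tr_velo_to_cam, R0_rect, P2) follow the public KITTI devkit convention and are an assumption, not code released by the authors.

```python
import numpy as np

def project_lidar_to_image(points_velo, Tr_velo_to_cam, R0_rect, P2):
    """Project LiDAR points (N x 3, Velodyne frame) onto the image plane.

    Tr_velo_to_cam: 3x4 rigid transform, LiDAR frame -> camera frame
    R0_rect:        3x3 rectifying rotation of the reference camera
    P2:             3x4 projection matrix of the left colour camera
    Returns pixel coordinates (M x 2) and depths (M,) for the points
    that lie in front of the camera.
    """
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])                 # N x 4 homogeneous
    pts_cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)                    # 3 x N, camera frame
    in_front = pts_cam[2, :] > 0.1                                    # keep points ahead of the camera
    pts_cam = pts_cam[:, in_front]
    pts_cam_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])  # 4 x N
    pix = P2 @ pts_cam_h                                              # 3 x N image coordinates
    pix = pix[:2, :] / pix[2:3, :]                                    # perspective divide
    return pix.T, pts_cam[2, :]
```

Likewise, a hypothetical fusion head is sketched below to illustrate the "fused ROI features into Faster R-CNN" step. The channel counts, the 7x7 ROI size, the plain channel-wise concatenation, and the class count (5 objects + background) are assumptions made for illustration; the paper's exact fusion scheme is not specified in the abstract.

```python
import torch
import torch.nn as nn

class RegionalFusionHead(nn.Module):
    """Hypothetical detection head: concatenates ROI-pooled camera features
    with ROI-pooled features from the projected LiDAR map, then applies
    Faster R-CNN style classification and box-regression layers."""

    def __init__(self, cam_channels=2048, lidar_channels=256,
                 roi_size=7, num_classes=6):  # 5 object classes + background (assumed)
        super().__init__()
        in_features = (cam_channels + lidar_channels) * roi_size * roi_size
        self.fc = nn.Sequential(
            nn.Linear(in_features, 1024),
            nn.ReLU(inplace=True),
        )
        self.cls_score = nn.Linear(1024, num_classes)
        self.bbox_pred = nn.Linear(1024, num_classes * 4)

    def forward(self, cam_roi_feat, lidar_roi_feat):
        # cam_roi_feat:   (N_roi, cam_channels, roi_size, roi_size) from ResNet50
        # lidar_roi_feat: (N_roi, lidar_channels, roi_size, roi_size) from the LiDAR branch
        fused = torch.cat([cam_roi_feat, lidar_roi_feat], dim=1)
        x = self.fc(fused.flatten(start_dim=1))
        return self.cls_score(x), self.bbox_pred(x)
```

In a real pipeline the two ROI tensors would typically be produced by applying an ROI pooling or ROI align operation to the camera and LiDAR feature maps with the same RPN proposals, so that each region is represented in both modalities before fusion.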
Pages: 18195-18213 (19 pages)