Efficient Occupancy Grid Mapping and Camera-LiDAR Fusion for Conditional Imitation Learning Driving

Cited by: 2
Authors
Eraqi, Hesham M. [1 ,2 ]
Moustafa, Mohamed N. [1 ]
Honer, Jens [2 ]
Affiliations
[1] Amer Univ Cairo, Comp Sci & Engn Dept, Cairo, Egypt
[2] Valeo, Driving Assistance Dept, Paris, France
Keywords
VISION;
DOI
10.1109/itsc45102.2020.9294222
Chinese Library Classification (CLC)
TM (Electrical Engineering); TN (Electronics & Communication Technology)
Discipline Codes
0808; 0809
Abstract
Deep neural networks trained end-to-end on demonstrations of human driving have learned to follow roads, avoid obstacles, and take specific turns at intersections to reach a destination. Such a conditional imitation learning approach has been shown to drive efficiently when deployed in the same environments it was trained in, but performance decreases dramatically in new environments and is inconsistent across varying weather conditions. In this work, the proposed model aims to cope with these two challenges by fusing laser scanner input with the camera. Additionally, a new efficient method of Occupancy Grid Mapping is introduced and used to rectify the model output to further improve performance. On the CARLA simulator urban driving benchmark, the proposed system improves the autonomous driving success rate and the average distance traveled towards the destination on all combinations of driving tasks and environments, while being trained on automatically recorded traces. The generalization of the autonomous driving success rate improves by 57%, and weather consistency improves by around a factor of four.
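For readers unfamiliar with the underlying technique, the sketch below illustrates classical log-odds Occupancy Grid Mapping from laser scans. It is a generic illustration only: the paper's specific efficient variant is not described in the abstract, and the grid size, resolution, log-odds increments, and Bresenham ray tracing here are all illustrative assumptions.

```python
import math

# Log-odds increments for cells a laser ray passed through (free)
# and the cell where the ray terminated (occupied). Values are
# illustrative, not taken from the paper.
L_FREE, L_OCC = -0.4, 0.85

def bresenham(x0, y0, x1, y1):
    """Integer line rasterisation between two cells, endpoints inclusive."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

class OccupancyGrid:
    def __init__(self, size=100, resolution=0.5):
        self.size = size      # grid is size x size cells
        self.res = resolution # metres per cell
        self.logodds = [[0.0] * size for _ in range(size)]

    def _to_cell(self, x, y):
        # World coordinates (metres) -> grid indices, origin at grid centre.
        return int(x / self.res) + self.size // 2, int(y / self.res) + self.size // 2

    def update(self, pose, scan):
        """pose = sensor (x, y); scan = list of laser hit points (x, y), world frame."""
        x0, y0 = self._to_cell(*pose)
        for hx, hy in scan:
            x1, y1 = self._to_cell(hx, hy)
            # Cells traversed by the ray become more likely free...
            for cx, cy in bresenham(x0, y0, x1, y1)[:-1]:
                self.logodds[cy][cx] += L_FREE
            # ...and the cell containing the return becomes more likely occupied.
            self.logodds[y1][x1] += L_OCC

    def probability(self, x, y):
        """Recover occupancy probability from accumulated log-odds."""
        cx, cy = self._to_cell(x, y)
        return 1.0 / (1.0 + math.exp(-self.logodds[cy][cx]))
```

After a single scan update, cells along each ray drop below 0.5 occupancy probability while hit cells rise above it; repeated scans sharpen the map, which is what makes the grid usable for rectifying a driving policy's output.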
Pages: 7