Removing Movable Objects from Grid Maps of Self-Driving Cars Using Deep Neural Networks

Cited by: 0
Authors:
Guidolini, Ranik [1 ]
Carneiro, Raphael V. [1 ]
Badue, Claudine [1 ]
Oliveira-Santos, Thiago [1 ]
De Souza, Alberto F. [1 ]
Affiliations:
[1] Univ Fed Espirito Santo, Dept Informat, Vitoria, ES, Brazil
Keywords:
Occupancy Grid Maps; Self-Driving Cars; Deep Neural Networks;
DOI
10.1109/ijcnn.2019.8851779
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a technique for removing traces of movable objects from occupancy grid maps (OGMs) based on deep neural networks, dubbed enhanced occupancy grid map generation (E-OGM-G). In E-OGM-G, we capture camera images synchronized and aligned with LiDAR rays, semantically segment these images, and compute which LiDAR rays hit pixels segmented as belonging to movable objects. By clustering laser rays that are close together in a 2D projection, we identify clusters that belong to movable objects and exclude them from the process of generating the OGMs, which yields OGMs free of movable objects. Clean OGMs are important for several aspects of self-driving car operation (e.g., localization). We tested E-OGM-G using data obtained in a real-world scenario: a 2.6 km stretch of a busy multi-lane urban road. Our results showed that E-OGM-G achieves a precision of 81.19% over the whole generated OGMs, of 89.76% over a 12 m-wide track within these OGMs, and of 100.00% over a 3.4 m-wide track. We then tested a self-driving car using the automatically cleaned OGMs; it was able to properly localize itself and to drive autonomously in the world using them. These successful results show that the proposed technique is effective in removing movable objects from static OGMs.
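The cluster-and-filter step the abstract describes can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the movable-class label set, the flat `(x, y, label)` data layout, and the greedy single-linkage clustering are all assumptions (the abstract does not name a specific clustering algorithm), and camera-to-LiDAR projection and segmentation are taken as already done.

```python
from math import hypot

# Assumed set of semantic classes treated as movable objects.
MOVABLE_CLASSES = {"car", "pedestrian", "bus", "bicycle"}

def cluster_2d(points, eps=0.5):
    """Greedy single-linkage clustering: points within eps of any
    cluster member join (and may merge) that cluster."""
    clusters = []
    for p in points:
        merged = None
        for c in clusters:
            if any(hypot(p[0] - q[0], p[1] - q[1]) <= eps for q in c):
                if merged is None:
                    c.append(p)      # join the first nearby cluster
                    merged = c
                else:
                    merged.extend(c) # bridge point merges two clusters
                    c.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([p])
    return clusters

def filter_movable_rays(rays, eps=0.5):
    """rays: list of (x, y, label) LiDAR hits projected onto the 2D
    ground plane, labeled by the semantic segmentation of the aligned
    camera image. Returns only the hits kept for static OGM generation:
    anything inside or adjacent to a movable-object cluster is dropped,
    even if its own pixel was not labeled movable."""
    movable = [(x, y) for x, y, lbl in rays if lbl in MOVABLE_CLASSES]
    clusters = cluster_2d(movable, eps)

    def near_movable_cluster(x, y):
        return any(hypot(x - qx, y - qy) <= eps
                   for c in clusters for qx, qy in c)

    return [(x, y) for x, y, lbl in rays if not near_movable_cluster(x, y)]
```

For example, a "road" return 0.2 m from a "car" return is discarded along with the car's cluster, while a distant "building" return survives; the surviving hits would then feed the usual OGM accumulation.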
Pages: 8
Related Papers (50 total)
  • [21] Deep Learning for Self-Driving Cars: Chances and Challenges Extended Abstract
    Rao, Qing
    Frtunikj, Jelena
    PROCEEDINGS 2018 IEEE/ACM 1ST INTERNATIONAL WORKSHOP ON SOFTWARE ENGINEERING FOR AI IN AUTONOMOUS SYSTEMS (SEFAIAS), 2018, : 35 - 38
  • [22] End-to-End Self-Driving Using Deep Neural Networks with Multi-auxiliary Tasks
    Wang, Dan
    Wen, Junjie
    Wang, Yuyong
    Huang, Xiangdong
    Pei, Feng
    AUTOMOTIVE INNOVATION, 2019, 2 (02) : 127 - 136
  • [23] Predicting Steering Actions for Self-Driving Cars Through Deep Learning
    Ou, Chaojie
    Bedawi, Safaa Mahmoud
    Koesdwiady, Arief B.
    Karray, Fakhri
    2018 IEEE 88TH VEHICULAR TECHNOLOGY CONFERENCE (VTC-FALL), 2018,
  • [24] An Improved Deep Learning Solution for Object Detection in Self-Driving Cars
    Mobahi, Mina
    Sadati, Seyed Hossein
    2020 28TH IRANIAN CONFERENCE ON ELECTRICAL ENGINEERING (ICEE), 2020, : 316 - 320
  • [25] DISTANCE MEASUREMENT FOR SELF-DRIVING CARS USING STEREO CAMERA
    Salman, Yasir Dawood
    Ku-Mahamud, Ku Ruhana
    Kamioka, Eiji
    PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON COMPUTING AND INFORMATICS: EMBRACING ECO-FRIENDLY COMPUTING, 2017, : 235 - 242
  • [26] Self-driving Cars Using CNN and Q-learning
    Chishti, Syed Owais Ali
    Riaz, Sana
    Zaib, Muhammad Bilal
    Nauman, Mohammad
    2018 IEEE 21ST INTERNATIONAL MULTI-TOPIC CONFERENCE (INMIC), 2018,
  • [28] Track Maneuvering using PID Control for Self-driving Cars
    Farag, Wael
    RECENT ADVANCES IN ELECTRICAL & ELECTRONIC ENGINEERING, 2020, 13 (01) : 91 - 100
  • [29] Using data from multiplex networks on vehicles in road tests, in intelligent transportation systems, and in self-driving cars
    Shadrin S.S.
    Ivanov A.M.
    Karpukhin K.E.
    Russian Engineering Research, 2016, 36 (10) : 811 - 814
  • [30] From Big Data to Better Behavior in Self-Driving Cars
    Fathi, F.
    Abghour, N.
    Ouzzif, M.
    PROCEEDINGS OF 2018 2ND INTERNATIONAL CONFERENCE ON CLOUD AND BIG DATA COMPUTING (ICCBDC 2018), 2018, : 42 - 46