End-to-End Self-Driving Using Deep Neural Networks with Multi-auxiliary Tasks

Cited by: 20
Authors
Wang, Dan [1 ]
Wen, Junjie [1 ]
Wang, Yuyong [1 ]
Huang, Xiangdong [1 ]
Pei, Feng [1 ]
Affiliations
[1] GAC R&D Ctr, 381 Wushan Rd, Guangzhou, Peoples R China
Keywords
Self-driving; Multi-auxiliary tasks; CNN-LSTM; Deep learning
DOI
10.1007/s42154-019-00057-1
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification
0808; 0809
Abstract
End-to-end self-driving is a method that directly maps raw visual images to vehicle control signals using a deep convolutional neural network (CNN). Although steering angle prediction has achieved good results as a single task, current approaches do not effectively predict the steering angle and the speed at the same time. In this paper, various end-to-end multi-task deep learning networks combining a deep convolutional neural network with a long short-term memory recurrent neural network (CNN-LSTM) are designed and compared; they capture not only the visual spatial information but also the dynamic temporal information of driving scenarios, and improve steering angle and speed predictions. Furthermore, two auxiliary tasks based on semantic segmentation and object detection are proposed to improve the understanding of driving scenarios. Experiments are conducted on the public Udacity dataset and a newly collected Guangzhou Automotive Corporation (GAC) dataset. The results show that the proposed network architecture can predict steering angles and vehicle speed accurately. In addition, the impact of the multi-auxiliary tasks on network performance is analyzed with a visualization method that shows the saliency maps of the network. Finally, the proposed network architecture is verified on the autonomous driving simulation platform Grand Theft Auto V (GTAV) and on an experimental road, with an average takeover rate of two times per 10 km.
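As a concrete illustration of the architecture described in the abstract, below is a minimal PyTorch sketch of a CNN-LSTM multi-task network with steering and speed heads plus auxiliary segmentation and detection heads. It is an assumption-based reconstruction, not the authors' code: the ResNet-18 backbone, layer sizes, the simplified per-frame auxiliary heads (stand-ins for full segmentation/detection decoders), and the loss weights are all illustrative choices.

```python
# Hedged sketch (not the paper's released implementation) of a CNN-LSTM
# multi-task driving network: image sequence in, steering angle and speed out,
# with auxiliary segmentation/detection heads shaping the shared features.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class MultiTaskCNNLSTM(nn.Module):
    def __init__(self, hidden_size=256, num_seg_classes=19, num_det_outputs=5):
        super().__init__()
        backbone = resnet18()  # backbone choice is an assumption
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # global-pooled 512-d features
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        # Main driving heads: steering angle and vehicle speed (regression).
        self.steer_head = nn.Linear(hidden_size, 1)
        self.speed_head = nn.Linear(hidden_size, 1)
        # Simplified auxiliary heads: per-frame class logits and a coarse
        # detection vector, standing in for the paper's full decoders.
        self.seg_head = nn.Linear(512, num_seg_classes)
        self.det_head = nn.Linear(512, num_det_outputs)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*t, 512) per-frame features
        seq_out, _ = self.lstm(feats.view(b, t, -1))         # (b, t, hidden) temporal features
        last = seq_out[:, -1]                                 # last time step drives the outputs
        return {
            "steer": self.steer_head(last).squeeze(-1),
            "speed": self.speed_head(last).squeeze(-1),
            "seg": self.seg_head(feats),                      # auxiliary outputs (per frame)
            "det": self.det_head(feats),
        }


# Joint loss: main regression terms plus weighted auxiliary terms
# (the weights 0.1 are illustrative assumptions).
def multi_task_loss(pred, target, w_seg=0.1, w_det=0.1):
    loss = nn.functional.mse_loss(pred["steer"], target["steer"])
    loss = loss + nn.functional.mse_loss(pred["speed"], target["speed"])
    loss = loss + w_seg * nn.functional.cross_entropy(pred["seg"], target["seg"])
    loss = loss + w_det * nn.functional.mse_loss(pred["det"], target["det"])
    return loss
```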
Pages: 127-136
Number of pages: 10
Related Papers
50 records in total
  • [21] End-to-End Premature Ventricular Contraction Detection Using Deep Neural Networks
    Kraft, Dimitri
    Bieber, Gerald
    Jokisch, Peter
    Rumm, Peter
    SENSORS, 2023, 23 (20)
  • [22] An End-To-End Flood Stage Prediction System Using Deep Neural Networks
    Windheuser, L.
    Karanjit, R.
    Pally, R.
    Samadi, S.
    Hubig, N. C.
    EARTH AND SPACE SCIENCE, 2023, 10 (01)
  • [23] Image Shadow Removal Using End-To-End Deep Convolutional Neural Networks
    Fan, Hui
    Han, Meng
    Li, Jinjiang
    APPLIED SCIENCES-BASEL, 2019, 9 (05):
  • [24] A study on tooth segmentation and numbering using end-to-end deep neural networks
    Silva, Bernardo
    Pinheiro, Lais
    Oliveira, Luciano
    Pithon, Matheus
    2020 33RD SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2020), 2020, : 164 - 171
  • [25] End-to-End Blind Image Quality Assessment Using Deep Neural Networks
    Ma, Kede
    Liu, Wentao
    Zhang, Kai
    Duanmu, Zhengfang
    Wang, Zhou
    Zuo, Wangmeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (03) : 1202 - 1213
  • [26] Separation of Nonlinearly Mixed Sources Using End-to-End Deep Neural Networks
    Zamani, Hojatollah
    Razavikia, Saeed
    Otroshi-Shahreza, Hatef
    Amini, Arash
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 101 - 105
  • [27] DeepLanes: End-To-End Lane Position Estimation using Deep Neural Networks
    Gurghian, Alexandru
    Koduri, Tejaswi
    Bailur, Smita V.
    Carey, Kyle J.
    Murali, Vidya N.
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), 2016, : 38 - 45
  • [28] The Effects of Speed and Delays on Test-Time Performance of End-to-End Self-Driving
    Tampuu, Ardi
    Roosild, Kristjan
    Uduste, Ilmar
    SENSORS, 2024, 24 (06)
  • [29] DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks
    Chen, Huili
    Fu, Cheng
    Rouhani, Bita Darvish
    Zhao, Jishen
    Koushanfar, Farinaz
    PROCEEDINGS OF THE 2019 46TH INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA '19), 2019, : 487 - 498
  • [30] End-to-End Optimized Speech Coding with Deep Neural Networks
    Kankanahalli, Srihari
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 2521 - 2525