Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving

Cited by: 10
Authors
Jia, Xiaosong [1 ,2 ]
Wu, Penghao [2 ,3 ]
Chen, Li [2 ]
Xie, Jiangwei [2 ]
He, Conghui [2 ]
Yan, Junchi [1 ,2 ]
Li, Hongyang [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai AI Lab, Shanghai, Peoples R China
[3] Univ Calif San Diego, San Diego, CA USA
DOI
10.1109/CVPR52729.2023.02105
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
End-to-end autonomous driving has made impressive progress in recent years. Existing methods usually adopt the decoupled encoder-decoder paradigm, where the encoder extracts hidden features from raw sensor data and the decoder outputs the ego vehicle's future trajectories or actions. Under such a paradigm, the encoder has no access to the intended behavior of the ego agent, leaving to the decoder the burden of identifying safety-critical regions within the massive receptive field and inferring future situations. Even worse, the decoder is usually composed of several simple multi-layer perceptrons (MLPs) or GRUs, while the encoder is delicately designed (e.g., a combination of heavy ResNets or Transformers). Such an imbalanced resource-task division hampers the learning process. In this work, we aim to alleviate this problem via two principles: (1) fully utilizing the capacity of the encoder; (2) increasing the capacity of the decoder. Concretely, we first predict a coarse-grained future position and action based on the encoder features. Then, conditioned on this position and action, the future scene is imagined to check the ramifications of driving accordingly. We also retrieve the encoder features around the predicted coordinate to obtain fine-grained information about the safety-critical region. Finally, based on the predicted future and the retrieved salient features, we refine the coarse-grained position and action by predicting their offsets from the ground truth. This refinement module can be stacked in a cascaded fashion, which extends the capacity of the decoder with spatial-temporal prior knowledge about the conditioned future. We conduct experiments on the CARLA simulator and achieve state-of-the-art performance in closed-loop benchmarks. Extensive ablation studies demonstrate the effectiveness of each proposed module.
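The decoding pipeline in the abstract (coarse prediction, imagined future, feature retrieval, cascaded offset refinement) can be sketched in minimal Python. All function roles, names, shapes, and the two-stage default below are illustrative assumptions about the control flow, not the paper's actual implementation.

```python
# Sketch of the "think twice" cascaded refinement decoder: predict a coarse
# plan, imagine its consequences, look closely at the predicted region, then
# correct the plan by a predicted offset. Repeat for several stages.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Plan:
    position: Tuple[float, float]  # coarse future (x, y) waypoint
    action: Tuple[float, float]    # e.g., (steer, throttle) -- assumed layout

def cascaded_decode(
    encoder_feats,            # encoder feature map (opaque here)
    coarse_head: Callable,    # encoder features -> initial Plan
    imagine: Callable,        # (features, Plan) -> imagined future scene
    retrieve: Callable,       # (features, position) -> salient local features
    refine: Callable,         # (future, salient, Plan) -> (d_pos, d_act) offsets
    num_stages: int = 2,
) -> List[Plan]:
    """Coarse prediction followed by `num_stages` cascaded refinements."""
    plan = coarse_head(encoder_feats)
    history = [plan]  # keep all intermediate plans for inspection
    for _ in range(num_stages):
        future = imagine(encoder_feats, plan)             # simulate the outcome
        salient = retrieve(encoder_feats, plan.position)  # safety-critical region
        d_pos, d_act = refine(future, salient, plan)      # offset vs. estimate
        plan = Plan(
            position=(plan.position[0] + d_pos[0], plan.position[1] + d_pos[1]),
            action=(plan.action[0] + d_act[0], plan.action[1] + d_act[1]),
        )
        history.append(plan)
    return history

# Toy usage with stand-in callables (the real modules would be learned networks):
coarse = lambda feats: Plan(position=(1.0, 0.0), action=(0.0, 0.5))
imagine = lambda feats, plan: None
retrieve = lambda feats, pos: None
refine = lambda future, salient, plan: ((0.5, 0.0), (0.0, 0.0))
plans = cascaded_decode({}, coarse, imagine, retrieve, refine, num_stages=2)
```

In this toy run each stage shifts the waypoint by the same offset, so `plans` holds three plans ending at position (2.0, 0.0); in the learned setting, each stage's offset would depend on the imagined future and the retrieved features.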
Pages: 21983-21994 (12 pages)
Related Papers
50 records total (entries [41]-[50] shown)
  • [41] Bai, Yunhao; Li, Li; Wang, Zejiang; Wang, Xiaorui; Wang, Junmin. Performance optimization of autonomous driving control under end-to-end deadlines. Real-Time Systems, 2022, 58: 509-547.
  • [42] Pan, Yunpeng; Cheng, Ching-An; Saigol, Kamil; Lee, Keuntaek; Yan, Xinyan; Theodorou, Evangelos A.; Boots, Byron. Agile Autonomous Driving using End-to-End Deep Imitation Learning. Robotics: Science and Systems XIV, 2018.
  • [43] Mehta, Ashish; Subramanian, Adithya; Subramanian, Anbumani. Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision. Eleventh Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2018), 2018.
  • [44] Ryan, Cian; Murphy, Finbarr; Mullins, Martin. End-to-End Autonomous Driving Risk Analysis: A Behavioural Anomaly Detection Approach. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(3): 1650-1662.
  • [45] Karl Couto, Gustavo Claudio; Antonelo, Eric Aislan. Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments. 2021 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), 2021.
  • [46] Wang, Hengli; Cai, Peide; Fan, Rui; Sun, Yuxiang; Liu, Ming. End-to-End Interactive Prediction and Planning with Optical Flow Distillation for Autonomous Driving. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021), 2021: 2229-2238.
  • [47] Asai, Haruna; Hashimoto, Yoshihiro; Lisi, Giuseppe. Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving. Intelligent Human Systems Integration 2020, 2020, 1131: 111-117.
  • [48] Jung, Chanyoung; Seong, Hyunki; Shim, David Hyunchul. Time-to-Line Crossing Enhanced End-to-End Autonomous Driving Framework. 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020.
  • [49] Chib, Pranav Singh; Singh, Pravendra. Recent Advancements in End-to-End Autonomous Driving Using Deep Learning: A Survey. IEEE Transactions on Intelligent Vehicles, 2024, 9(1): 103-118.
  • [50] Jiang, Wei; Wang, Lu; Zhang, Tianyuan; Chen, Yuwei; Dong, Jian; Bao, Wei; Zhang, Zichao; Fu, Qiang. RobustE2E: Exploring the Robustness of End-to-End Autonomous Driving. Electronics, 2024, 13(16).