Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving

Cited by: 10
Authors
Jia, Xiaosong [1 ,2 ]
Wu, Penghao [2 ,3 ]
Chen, Li [2 ]
Xie, Jiangwei [2 ]
He, Conghui [2 ]
Yan, Junchi [1 ,2 ]
Li, Hongyang [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai AI Lab, Shanghai, Peoples R China
[3] Univ Calif San Diego, San Diego, CA USA
Keywords
DOI
10.1109/CVPR52729.2023.02105
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
End-to-end autonomous driving has made impressive progress in recent years. Existing methods usually adopt a decoupled encoder-decoder paradigm, where the encoder extracts hidden features from raw sensor data and the decoder outputs the ego vehicle's future trajectories or actions. Under such a paradigm, the encoder has no access to the intended behavior of the ego agent, leaving the decoder with the burden of identifying safety-critical regions within a massive receptive field and inferring future situations. Even worse, the decoder is usually composed of several simple multi-layer perceptrons (MLPs) or GRUs, while the encoder is delicately designed (e.g., a combination of heavy ResNets or Transformers). Such an imbalanced resource-task division hampers the learning process. In this work, we aim to alleviate this problem via two principles: (1) fully utilizing the capacity of the encoder; (2) increasing the capacity of the decoder. Concretely, we first predict a coarse-grained future position and action based on the encoder features. Then, conditioned on this position and action, we imagine the future scene to check the ramifications of driving accordingly. We also retrieve the encoder features around the predicted coordinate to obtain fine-grained information about the safety-critical region. Finally, based on the predicted future and the retrieved salient features, we refine the coarse-grained position and action by predicting their offsets from the ground truth. This refinement module can be stacked in a cascaded fashion, which extends the capacity of the decoder with spatio-temporal prior knowledge about the conditioned future. We conduct experiments on the CARLA simulator and achieve state-of-the-art performance on closed-loop benchmarks. Extensive ablation studies demonstrate the effectiveness of each proposed module.
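The coarse-to-fine pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the function names, the scalar "position" abstraction, and the toy retrieval/refinement rules are all hypothetical stand-ins for the learned networks and spatial feature sampling used in the actual method; only the control flow (coarse prediction, feature retrieval around the predicted coordinate, stacked refinement) mirrors the description.

```python
def coarse_prediction(encoder_features):
    # Stage 0: predict a coarse future position from pooled encoder
    # features (a stand-in for the learned coarse decoder head).
    return sum(encoder_features) / len(encoder_features)

def retrieve_local_features(encoder_features, position):
    # Retrieve the encoder feature nearest the predicted coordinate,
    # mimicking spatial sampling around the safety-critical region.
    idx = min(int(position) % len(encoder_features), len(encoder_features) - 1)
    return encoder_features[idx]

def refine(position, local_feature):
    # Predict a small offset conditioned on the retrieved salient
    # feature (toy rule in place of a learned refinement module).
    return position + 0.1 * (local_feature - position)

def cascaded_decode(encoder_features, num_stages=3):
    # Refinement modules are stacked in a cascade: each stage re-reads
    # the encoder features around the current estimate and updates it.
    pos = coarse_prediction(encoder_features)
    for _ in range(num_stages):
        feat = retrieve_local_features(encoder_features, pos)
        pos = refine(pos, feat)
    return pos
```

Stacking more stages deepens the decoder without touching the encoder, which is the paper's stated way of rebalancing the encoder-decoder capacity split.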
Pages: 21983 - 21994
Page count: 12
Related Papers
50 records
  • [1] Adversarial Driving: Attacking End-to-End Autonomous Driving
    Wu, Han
    Yunas, Syed
    Rowlands, Sareh
    Ruan, Wenjie
    Wahlstrom, Johan
    [J]. 2023 IEEE INTELLIGENT VEHICLES SYMPOSIUM, IV, 2023,
  • [2] Multimodal End-to-End Autonomous Driving
    Xiao, Yi
    Codevilla, Felipe
    Gurram, Akhil
    Urfalioglu, Onay
    Lopez, Antonio M.
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (01) : 537 - 547
  • [3] End-to-end Autonomous Driving: Advancements and Challenges
    Chu, Duan-Feng
    Wang, Ru-Kang
    Wang, Jing-Yi
    Hua, Qiao-Zhi
    Lu, Li-Ping
    Wu, Chao-Zhong
    [J]. Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2024, 37 (10): : 209 - 232
  • [4] End-to-End Autonomous Driving: Challenges and Frontiers
    Chen, Li
    Wu, Penghao
    Chitta, Kashyap
    Jaeger, Bernhard
    Geiger, Andreas
    Li, Hongyang
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 10164 - 10183
  • [5] Towards End-to-End Escape in Urban Autonomous Driving Using Reinforcement Learning
    Sakhai, Mustafa
    Wielgosz, Maciej
    [J]. INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2023, 2024, 823 : 21 - 40
  • [6] Towards End-to-End Chase in Urban Autonomous Driving Using Reinforcement Learning
    Kolomanski, Michal
    Sakhai, Mustafa
    Nowak, Jakub
    Wielgosz, Maciej
    [J]. INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 3, 2023, 544 : 408 - 426
  • [7] A Review of End-to-End Autonomous Driving in Urban Environments
    Coelho, Daniel
    Oliveira, Miguel
    [J]. IEEE ACCESS, 2022, 10 : 75296 - 75311
  • [8] End-to-End Urban Autonomous Driving With Safety Constraints
    Hou, Changmeng
    Zhang, Wei
    [J]. IEEE ACCESS, 2024, 12 : 132198 - 132209
  • [9] End-to-End Federated Learning for Autonomous Driving Vehicles
    Zhang, Hongyi
    Bosch, Jan
    Olsson, Helena Holmstrom
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] End-to-end Spatiotemporal Attention Model for Autonomous Driving
    Zhao, Ruijie
    Zhang, Yanxin
    Huang, Zhiqing
    Yin, Chenkun
    [J]. PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 2649 - 2653