Guiding Attention in End-to-End Driving Models

Cited by: 0
Authors
Porres, Diego [1 ]
Xiao, Yi [1 ]
Villalonga, Gabriel [1 ]
Levy, Alexandre [1 ]
Lopez, Antonio M. [1 ,2 ]
Affiliations
[1] Univ Autonoma Barcelona UAB, Comp Vis Ctr CVC, Barcelona, Spain
[2] Univ Autonoma Barcelona UAB, Dept Ciencies Computac, Barcelona, Spain
Keywords
DOI
10.1109/IV55156.2024.10588598
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving. However, training well-performing models usually requires a huge amount of data, and the resulting models still lack explicit and intuitive activation maps that reveal their inner workings while driving. In this paper, we study how to guide the attention of these models to improve their driving quality and obtain more intuitive activation maps by adding a loss term during training that relies on salient semantic maps. In contrast to previous work, our method does not require these salient semantic maps to be available at testing time, nor does it require modifying the architecture of the model to which it is applied. We perform tests using both perfect and noisy salient semantic maps, the latter inspired by errors that can be encountered with real data, and obtain encouraging results in both cases. Using CIL++ as a representative state-of-the-art model and the CARLA simulator with its standard benchmarks, we conduct experiments that show the effectiveness of our method in training better autonomous driving models, especially when data and computational resources are scarce.
Pages: 2353-2360 (8 pages)
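
The abstract describes guiding the model's attention by adding a training-time loss term based on salient semantic maps, with nothing extra required at test time and no changes to the model's architecture. The record does not give the exact formulation, so what follows is only a minimal Python/PyTorch sketch of one plausible guidance term; every name here (attn_guidance_loss, training_step, salient_mask, guidance_weight, and the assumed model output signature) is illustrative, not the authors' code.

import torch
import torch.nn.functional as F

def attn_guidance_loss(attn_map: torch.Tensor, salient_mask: torch.Tensor) -> torch.Tensor:
    # Assumed guidance term: KL divergence between the model's spatial attention
    # (B, H, W) and a binary or soft salient semantic mask (B, H, W), both
    # normalized into probability maps over the image grid.
    b = attn_map.shape[0]
    attn_logp = F.log_softmax(attn_map.reshape(b, -1), dim=-1)  # attention as log-probabilities
    mask = salient_mask.reshape(b, -1).float() + 1e-8           # avoid all-zero masks
    mask_p = mask / mask.sum(dim=-1, keepdim=True)              # mask as a probability map
    return F.kl_div(attn_logp, mask_p, reduction="batchmean")

def training_step(model, batch, guidance_weight=0.1):
    # Hypothetical training step: imitation (action) loss plus the weighted
    # guidance term. The salient mask is consumed only here, during training.
    pred_actions, attn_map = model(batch["images"], batch["speed"], batch["command"])
    imitation = F.l1_loss(pred_actions, batch["expert_actions"])
    guidance = attn_guidance_loss(attn_map, batch["salient_mask"])
    return imitation + guidance_weight * guidance

Because the guidance term is added only to the training objective, the salient semantic maps and the extra loss disappear entirely at deployment, which matches the abstract's claim that no maps are needed at testing time.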