On the Role of Explainable Machine Learning for Secure Smart Vehicles

Cited by: 4
Authors
Scalas, Michele [1 ]
Giacinto, Giorgio [1 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, Cagliari, Italy
Keywords
Explainability; Cybersecurity; Machine Learning; Mobility; Smart Vehicles; Automotive; Connected Cars; Autonomous Driving
DOI
10.23919/aeitautomotive50086.2020.9307431
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification
0808; 0809
Abstract
The concept of mobility is undergoing a profound transformation due to the Mobility-as-a-Service paradigm. Accordingly, vehicles, usually referred to as smart, are seeing their architecture revamped to integrate connectivity with the outside environment (V2X) and autonomous driving. A significant part of these innovations is enabled by machine learning. However, deploying such systems raises some concerns. First, the complexity of the algorithms often prevents understanding what these models learn, which is particularly relevant in the safety-critical context of mobility. Second, several studies have demonstrated the vulnerability of machine learning-based algorithms to adversarial attacks. For these reasons, research on the explainability of machine learning is on the rise. In this paper, we explore the role of interpretable machine learning in the ecosystem of smart vehicles, with the goal of determining whether, and in what terms, explanations help to design secure vehicles. We provide an overview of the potential uses of explainable machine learning, along with recent work in the literature that has started to investigate the topic, including from the perspectives of human-agent systems and cyber-physical systems. Our analysis highlights both the benefits and the critical issues of employing explanations.
Pages: 6