Enhancing Explainability in Mobility Data Science Through a Combination of Methods

Cited by: 0
Authors:
Makridis, Georgios [1 ]
Koukos, Vasileios [1 ]
Fatouros, Georgios [1 ]
Separdani, Maria Margarita [2 ]
Kyriazis, Dimosthenis [1 ]
Affiliations:
[1] Univ Piraeus, Dept Digital Syst, Pireas Karaoli Ke Dimitriou 80, Piraeus 18534, Greece
[2] Univ Piraeus, Dept Maritime Studies, Pireas Karaoli Ke Dimitriou 80, Piraeus 18534, Greece
Keywords:
Mobility data; Vessel route forecasting; XAI; GeoXAI;
DOI:
10.1007/978-3-031-62269-4_4
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
In the domain of Mobility Data Science, interpreting models trained on trajectory data and elucidating the spatio-temporal movement of entities has persistently posed significant challenges. Conventional XAI techniques, though promising, frequently overlook the distinct structure and nuances inherent in trajectory data. To address this gap, we introduce a comprehensive framework that harmonizes pivotal XAI techniques: LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), saliency maps, attention mechanisms, direct trajectory visualization, and Permutation Feature Importance (PFI). Unlike conventional strategies that deploy these methods in isolation, our unified approach capitalizes on their collective efficacy, yielding deeper and more granular insights into models trained on trajectory data. This synthesis addresses the multifaceted nature of trajectories, achieving not only amplified interpretability but also a nuanced, contextually rich understanding of model decisions. To validate and refine the framework, we conducted a survey gauging preferences and reception across user demographics. The findings revealed a dichotomy: professionals with academic orientations, particularly those in roles such as Data Scientist, IT Expert, and ML Engineer, demonstrated a deep technical understanding and often preferred combined methods for interpretability. Conversely, end-users less acquainted with AI and Data Science favored simpler presentations, such as bar plots indicating timestep significance or visual depictions pinpointing pivotal segments of a vessel's trajectory. Notably, the survey highlighted a unanimous appreciation for juxtaposing predicted versus actual trajectories as a direct benchmark of model performance.
Furthermore, visualizations emphasizing critical past vessel positions were well received: respondents in technical roles found them especially informative, while end-users perceived them as intuitive. Our tabulated results provide a more detailed breakdown of XAI usability preferences in Vessel Route Forecasting (VRF), further enriching our contributions to the field.
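Of the techniques the abstract combines, Permutation Feature Importance is the simplest to illustrate in isolation: shuffle one input feature at a time and measure how much the model's error grows. The sketch below applies PFI to a toy linear stand-in for a vessel route forecaster; the feature names (lat, lon, speed, heading), the synthetic data, and the least-squares model are illustrative assumptions, not the paper's actual VRF model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a route forecaster: predict the next latitude from
# [lat, lon, speed, heading]. "heading" is made intentionally irrelevant.
n = 500
X = rng.normal(size=(n, 4))
true_w = np.array([0.9, 0.1, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.05, size=n)

# Fit an ordinary least-squares "model".
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(M):
    return np.mean((M @ w - y) ** 2)

def permutation_importance(X, baseline, n_repeats=10):
    """PFI: average increase in MSE when one feature column is shuffled."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            scores[j] += mse(Xp) - baseline
    return scores / n_repeats

imp = permutation_importance(X, mse(X))
for name, s in zip(["lat", "lon", "speed", "heading"], imp):
    print(f"{name:8s} {s:.4f}")
```

Because the model leans most heavily on the first feature, shuffling it degrades the error the most, while the irrelevant heading column scores near zero; this per-feature bar-plot-ready output is the kind of simple summary the survey's end-users favored.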
Pages: 45-60 (16 pages)