What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting

Cited by: 0
Authors
Barredo-Arrieta, Alejandro [1 ]
Lana, Ibai [1 ]
Del Ser, Javier [1 ,2 ]
Affiliations
[1] TECNALIA, Derio 48160, Bizkaia, Spain
[2] Univ Basque Country, UPV EHU, Bilbao 48013, Bizkaia, Spain
Keywords
FLOW PREDICTION;
DOI
Not available
Chinese Library Classification
U [Transportation]
Subject Classification Code
08; 0823
Abstract
Traffic flow forecasting is widely regarded as an essential gear in the complex machinery underneath Intelligent Transport Systems, being a critical component of avant-garde Automated Traffic Management Systems. Research in this area has stimulated vibrant activity, yielding a plethora of new forecasting methods contributed to the community on a yearly basis. Efforts in this domain are mainly oriented towards the development of prediction models featuring ever-growing levels of performance and/or computational efficiency. After the swerve towards Artificial Intelligence that gradually took place in the modeling sphere of traffic forecasting, predictive schemes have ever since reaped the benefits of applied machine learning, but have also incurred some caveats. The adoption of highly complex, black-box models has subtracted comprehensibility from forecasts: even though such models perform better, they are more obscure to ITS practitioners, which hinders their practicality. In this paper we propose the adoption of explainable Artificial Intelligence (xAI) tools that are currently being used in other domains, in order to extract further knowledge from black-box traffic forecasting models. In particular, we showcase the utility of xAI to unveil the knowledge extracted by Random Forests and Recurrent Neural Networks when predicting real traffic. The obtained results are insightful and suggest that traffic forecasting models should be analyzed from more points of view than prediction accuracy or any other similar regression score, owing to the different treatment each algorithm gives to its input variables: even with the same nominal score, some methods can exploit inner knowledge that others disregard.
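The abstract above does not name the specific xAI tooling the authors apply to their Random Forest and Recurrent Neural Network forecasters. As a hedged illustration only (not the paper's actual pipeline or data), the sketch below shows one common post-hoc xAI technique, permutation feature importance, applied to a Random Forest trained on synthetic traffic-like data; the feature names `lagged_flow` and `random_noise` are invented for the example:

```python
# Minimal sketch of post-hoc explainability for a black-box forecaster:
# permutation importance on a Random Forest regressor, using synthetic
# data in which only one input is informative by construction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
lagged_flow = rng.uniform(0, 100, n)    # informative: flow in previous interval
random_noise = rng.uniform(0, 1, n)     # uninformative by construction
X = np.column_stack([lagged_flow, random_noise])
y = 0.9 * lagged_flow + rng.normal(0, 5, n)  # next-interval flow

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in the model's score;
# a large drop means the model genuinely relies on that input variable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["lagged_flow", "random_noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An analysis of this kind makes the abstract's closing point concrete: two models with similar regression scores can assign very different importances to the same inputs, which is only visible once the forecasts are inspected beyond their accuracy.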
Pages: 2232-2237
Page count: 6