Predicting travel mode choice with a robust neural network and Shapley additive explanations analysis

Cited: 1
Authors
Tang, Li [1 ,2 ]
Tang, Chuanli [1 ]
Fu, Qi [1 ]
Ma, Changxi [3 ]
Affiliations
[1] Xihua Univ, Sch Automobile & Transportat, Chengdu 610039, Peoples R China
[2] Xihua Univ, Vehicle Measurement Control & Safety Key Lab Sichuan Prov, Chengdu, Peoples R China
[3] Lanzhou Jiaotong Univ, Sch Traff & Transportat, Lanzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
behavioural sciences computing; demand forecasting; feature selection; neural network interpretability;
DOI
10.1049/itr2.12514
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809
Abstract
Predicting and understanding travellers' mode choices is crucial to developing urban transportation systems and formulating traffic demand management strategies. Machine learning (ML) methods have been widely used as promising alternatives to traditional discrete choice models owing to their high prediction accuracy. However, a significant body of ML methods, especially neural networks, is constrained by overfitting and a lack of model interpretability. This study employs a neural network with feature selection to predict travel mode choices and Shapley additive explanations (SHAP) analysis to interpret the model. A dataset collected in Chengdu, China, was used for experimentation. The results reveal that the neural network achieves commendable prediction performance, with a 12% improvement over the traditional multinomial logit model. In addition, feature selection based on the combined results of two embedded methods alleviates the overfitting tendency of the neural network and yields a model that is more robust to redundant or unnecessary variables. The SHAP analysis identifies travel expenditure, age, driving experience, number of cars owned, individual monthly income, and trip purpose as significant features in the dataset, and mode choice behaviour is markedly heterogeneous across demographic groups, including different age, car ownership, and income levels. In summary, this study couples a feature-selection-based neural network for travel mode choice prediction with SHAP analysis for model interpretation: combining the rankings of two embedded feature-selection methods produces a more robust neural network that mitigates overfitting in mode choice prediction, while the SHAP analysis addresses the common criticism that neural network models are not interpretable.
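As an illustration of the pipeline the abstract describes, the sketch below combines the importance scores of two embedded feature-selection methods, trains a neural network on the selected features, and interprets it with SHAP. It is a minimal sketch under assumed choices: an L1-regularised multinomial logit and a random forest stand in for the paper's embedded methods, synthetic data stands in for the Chengdu survey, and the feature names and top-k threshold are hypothetical.

```python
# Minimal, hypothetical sketch of the pipeline summarised in the abstract:
# combine two embedded feature-selection rankings, train a neural network on
# the selected features, and interpret it with SHAP. Synthetic data stands in
# for the Chengdu survey; names and thresholds are illustrative assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical trip/traveller attributes and four travel modes (e.g. walk, bus, metro, car).
feature_names = ["travel_expenditure", "age", "driving_experience", "cars_owned",
                 "monthly_income", "trip_purpose", "travel_time", "trip_distance"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=6, n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Embedded method 1: L1-regularised multinomial logit -> mean absolute coefficients.
l1_logit = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
l1_logit.fit(X_train, y_train)
imp_l1 = np.abs(l1_logit.coef_).mean(axis=0)

# Embedded method 2: random forest -> impurity-based importances.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
imp_rf = rf.feature_importances_

# Combine the two normalised scores and keep the top-k features (k = 6 is arbitrary here).
combined = imp_l1 / imp_l1.sum() + imp_rf / imp_rf.sum()
top_k = np.argsort(combined)[::-1][:6]
print("Selected features:", [feature_names[i] for i in top_k])

# Neural network trained on the reduced feature set.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
mlp.fit(X_train[:, top_k], y_train)
print("MLP test accuracy:", mlp.score(X_test[:, top_k], y_test))

# Model-agnostic SHAP: KernelExplainer over a small background sample.
background = shap.sample(X_train[:, top_k], 100)
explainer = shap.KernelExplainer(mlp.predict_proba, background)
shap_values = explainer.shap_values(X_test[:50, top_k])
# shap.summary_plot(shap_values, X_test[:50, top_k],
#                   feature_names=[feature_names[i] for i in top_k])
```

The size of the SHAP background sample and the number of explained instances trade computation time against the stability of the attribution estimates; KernelExplainer is used here because it is model-agnostic and works with any predict_proba-style function.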
Pages: 1339-1354
Page count: 16
Related papers
50 records in total
  • [1] SHapley Additive exPlanations for Explaining Artificial Neural Network Based Mode Choice Models
    Koushik, Anil
    Manoj, M.
    Nezamuddin, N.
    TRANSPORTATION IN DEVELOPING ECONOMIES, 2024, 10 (01)
  • [2] Elucidating microbubble structure behavior with a Shapley Additive Explanations neural network algorithm
    Zhuo, Qingxia
    Zhang, Linfei
    Wang, Lei
    Liu, Qinkai
    Zhang, Sen
    Wang, Guanjun
    Xue, Chenyang
    OPTICAL FIBER TECHNOLOGY, 2024, 88
  • [3] Neural network models and Shapley additive explanations for a beam-ring structure
    Sun, Ying
    Zhang, Luying
    Yao, Minghui
    Zhang, Junhua
    CHAOS SOLITONS & FRACTALS, 2024, 185
  • [4] An artificial neural network-pharmacokinetic model and its interpretation using Shapley additive explanations
    Ogami, Chika
    Tsuji, Yasuhiro
    Seki, Hiroto
    Kawano, Hideaki
    To, Hideto
    Matsumoto, Yoshiaki
    Hosono, Hiroyuki
    CPT-PHARMACOMETRICS & SYSTEMS PHARMACOLOGY, 2021, 10 (07): 760 - 768
  • [5] Illuminating the Neural Landscape of Pilot Mental States: A Convolutional Neural Network Approach with Shapley Additive Explanations Interpretability
    Alreshidi, Ibrahim
    Bisandu, Desmond
    Moulitsas, Irene
    SENSORS, 2023, 23 (22)
  • [6] Predicting and Interpreting Student Performance Using Ensemble Models and Shapley Additive Explanations
    Sahlaoui, Hayat
    Alaoui, El Arbi Abdellaoui
    Nayyar, Anand
    Agoujil, Said
    Jaber, Mustafa Musa
    IEEE ACCESS, 2021, 9 : 152688 - 152703
  • [7] Robust spatiotemporal crash risk prediction with gated recurrent convolution network and interpretable insights from SHapley additive explanations
    Kashifi, Mohammad Tamim
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [8] Shapley Additive Explanations for Text Classification and Sentiment Analysis of Internet Movie Database
    Dewi, Christine
    Tsai, Bing-Jun
    Chen, Rung-Ching
    RECENT CHALLENGES IN INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2022, 2022, 1716 : 69 - 80
  • [9] Understanding Travel Behavior: A Deep Neural Network and SHAP Approach to Mode Choice Determinants
    Cevik, H.
    Pribyl, O.
    Samandar, S.
    NEURAL NETWORK WORLD, 2024, 34 (04) : 219 - 241