Towards explainable traffic flow prediction with large language models

Cited by: 1
Authors
Guo, Xusen [1 ]
Zhang, Qiming [1 ]
Jiang, Junyue [2 ]
Peng, Mingxing [1 ]
Zhu, Meixin [1 ,3 ]
Yang, Hao Frank [2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol Guangzhou, Intelligent Transportat Thrust, Syst Hub, Guangzhou 511400, Peoples R China
[2] Johns Hopkins Univ, Dept Civil & Syst Engn, Baltimore, MD 21218 USA
[3] Guangdong Prov Key Lab Integrated Commun Sensing &, Guangzhou 511400, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Traffic flow prediction; Large language models; Spatial-temporal prediction; Explainability; NETWORKS;
DOI
10.1016/j.commtr.2024.100150
CLC number
U [Transportation];
Discipline codes
08; 0823;
Abstract
Traffic forecasting is crucial for intelligent transportation systems and has advanced significantly thanks to deep learning's ability to capture latent patterns in traffic data. However, recent deep-learning architectures require intricate model designs and offer little intuitive insight into how input data map to predicted results. Achieving both accuracy and explainability in traffic prediction remains challenging due to the complexity of traffic data and the inherent opacity of deep learning models. To tackle these challenges, we propose xTP-LLM, a traffic flow prediction model based on large language models (LLMs) that generates explainable traffic predictions. By converting multi-modal traffic data into natural language descriptions, xTP-LLM captures complex time-series patterns and external factors from comprehensive traffic data. The LLM framework is fine-tuned with language-based instructions to align it with spatial-temporal traffic flow data. Empirically, xTP-LLM achieves accuracy competitive with deep-learning baselines while providing intuitive and reliable explanations for its predictions. This study contributes to advancing explainable traffic prediction models and lays a foundation for future exploration of LLM applications in transportation.
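To make the pipeline described in the abstract concrete, the following is a minimal Python sketch (not from the paper) of the prompt-construction step: historical flow readings and external factors are rendered as a natural-language instruction that a fine-tuned LLM could answer with a prediction and an explanation. The data class, field names, and template wording are all illustrative assumptions; the paper's actual prompt template and fine-tuning setup are described in the full text.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TrafficSample:
    """One spatial-temporal observation window (all field names hypothetical)."""
    region: str                     # spatial identifier of the road segment or region
    flows: List[int]                # hourly flow counts over the lookback window
    externals: Dict[str, str] = field(default_factory=dict)  # e.g. weather, weekday, holiday

def build_instruction(sample: TrafficSample, horizon: int = 1) -> str:
    """Render one sample as a language-based instruction for an LLM."""
    history = ", ".join(str(v) for v in sample.flows)
    context = "; ".join(f"{k}: {v}" for k, v in sample.externals.items())
    return (
        f"Region {sample.region} recorded hourly traffic flows of [{history}]. "
        f"External factors: {context}. "
        f"Predict the flow for the next {horizon} hour(s) and explain "
        f"which factors drive the prediction."
    )

if __name__ == "__main__":
    sample = TrafficSample(
        region="R-17",
        flows=[420, 455, 510, 630, 720],
        externals={"weather": "light rain", "weekday": "Monday", "holiday": "no"},
    )
    print(build_instruction(sample, horizon=2))

Under this sketch, pairing each instruction with the observed future flow as its target response would yield the kind of language-based supervision the abstract describes for aligning the LLM with spatial-temporal traffic flow data.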
Pages: 17
Related papers
50 records in total
  • [41] A Comparison of Detrending Models and Multi-Regime Models for Traffic Flow Prediction
    Li, Zhiheng
    Li, Yuebiao
    Li, Li
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2014, 6 (04) : 34 - 44
  • [42] Wikipedia traffic data and electoral prediction: towards theoretically informed models
    Yasseri, Taha
    Bright, Jonathan
    EPJ DATA SCIENCE, 2016, 5
  • [44] Towards Transparent Traffic Solutions: Reinforcement Learning and Explainable AI for Traffic Congestion
    Khan, Shan
    Ghazal, Taher M.
    Alyas, Tahir
    Waqas, M.
    Raza, Muhammad Ahsan
    Ali, Oualid
    Khan, Muhammad Adnan
    Abbas, Sagheer
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2025, 16 (01) : 503 - 511
  • [45] Towards Interpretable Mental Health Analysis with Large Language Models
    Yang, Kailai
    Ji, Shaoxiong
    Zhang, Tianlin
    Xie, Qianqian
    Kuang, Ziyan
    Ananiadou, Sophia
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 6056 - 6077
  • [46] Towards an understanding of large language models in software engineering tasks
    Zheng, Zibin
    Ning, Kaiwen
    Zhong, Qingyuan
    Chen, Jiachi
    Chen, Wenqing
    Guo, Lianghong
    Wang, Weicheng
    Wang, Yanlin
    EMPIRICAL SOFTWARE ENGINEERING, 2025, 30 (02)
  • [47] WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models
    Tu, Shangqing
    Sun, Yuliang
    Bai, Yushi
    Yu, Jifan
    Hou, Lei
    Li, Juanzi
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 1517 - 1542
  • [48] Towards Analysis and Interpretation of Large Language Models for Arithmetic Reasoning
    Akter, Mst Shapna
    Shahriar, Hossain
    Cuzzocrea, Alfredo
    2024 11TH IEEE SWISS CONFERENCE ON DATA SCIENCE, SDS 2024, 2024, : 267 - 270
  • [49] TITANIC: Towards Production Federated Learning with Large Language Models
    Su, Ningxin
    Hu, Chenghao
    Li, Baochun
    Li, Bo
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2024, : 611 - 620
  • [50] Towards efficient and effective unlearning of large language models for recommendation
    Wang, Hangyu
    Lin, Jianghao
    Chen, Bo
    Yang, Yang
    Tang, Ruiming
    Zhang, Weinan
    Yu, Yong
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (03)