Toward Interactive Next Location Prediction Driven by Large Language Models

Cited by: 0
Authors
Chen, Yong [1 ,2 ]
Chi, Ben [1 ,3 ]
Li, Chuanjia [1 ,3 ]
Zhang, Yuliang [1 ,2 ,4 ]
Liao, Chenlei [1 ,2 ]
Chen, Xiqun [1 ,2 ]
Xie, Na [5 ]
Affiliations
[1] Zhejiang Univ, Inst Intelligent Transportat Syst, Hangzhou 310058, Peoples R China
[2] Zhejiang Univ, Coll Civil Engn & Architecture, Hangzhou 310058, Peoples R China
[3] Zhejiang Univ, Polytech Inst, Hangzhou 310058, Peoples R China
[4] Hangzhou City Univ, Intelligent Transportat Syst Res Ctr, Hangzhou 310015, Peoples R China
[5] Cent Univ Finance & Econ, Sch Management Sci & Engn, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Predictive models; Accuracy; Spatiotemporal phenomena; Cognition; Natural languages; Feature extraction; Data models; Computational modeling; Deep learning; Large language models; Human mobility; large language model (LLM); location prediction; multiround continuous dialogue; Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS);
DOI
10.1109/TCSS.2024.3522965
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
Individual next location prediction plays a crucial role in location-based applications such as route navigation and service recommendation. Although existing deep learning approaches effectively capture users' spatiotemporal travel preferences, they offer limited interpretability and rely heavily on large-scale historical travel data for model training. Drawing inspiration from the powerful reasoning capabilities of large language models (LLMs), this study proposes a novel multiround continuous dialogue mechanism and a candidate set enhancement method that leverage LLMs for next location prediction through step-by-step reasoning. In the first round of dialogue, we introduce activity prediction as an auxiliary task to narrow down the candidate locations. Subsequently, we construct an activity-aware prompt that enables the LLM to make accurate location predictions and provide the corresponding reasoning. Finally, we add a third round of dialogue that prompts the LLM to make necessary corrections by integrating the prediction results of deep learning models. To address the sensitivity of LLMs to the ordering of elements within the candidate set, we propose a new candidate set enhancement method based on the entropy-weighted Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). Our model understands user travel preferences by fusing location, activity, and time information through natural language. Extensive experiments on two public user check-in datasets show that our model achieves prediction performance comparable to deep learning models in full-sample prediction and outperforms them in few-shot settings. Our model provides logical and explainable reasoning, offering insightful guidance for downstream application tasks.
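The abstract's candidate set enhancement rests on standard entropy-weighted TOPSIS: criteria that discriminate more between candidates (lower entropy) receive larger weights, and candidates are ranked by closeness to the ideal solution. The sketch below is a minimal, generic implementation of that textbook procedure, not the paper's code; the candidate criteria (visit frequency, recency, time-of-day match) are hypothetical stand-ins, and all criteria are assumed to be benefit-type.

```python
import math

def entropy_weights(X):
    """Entropy weights for an m x n decision matrix of benefit criteria."""
    m, n = len(X), len(X[0])
    col_sums = [sum(row[j] for row in X) for j in range(n)]
    # Column-normalized proportions p_ij.
    P = [[row[j] / col_sums[j] for j in range(n)] for row in X]
    k = 1.0 / math.log(m)
    # Shannon entropy per criterion; zero proportions contribute nothing.
    e = [-k * sum(P[i][j] * math.log(P[i][j]) for i in range(m) if P[i][j] > 0)
         for j in range(n)]
    d = [1.0 - ej for ej in e]          # degree of divergence
    return [dj / sum(d) for dj in d]    # weights sum to 1

def topsis_rank(X):
    """Return candidate indices ranked by closeness to the ideal solution."""
    m, n = len(X), len(X[0])
    w = entropy_weights(X)
    # Vector-normalize each column, then apply entropy weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(n)]
    V = [[w[j] * row[j] / norms[j] for j in range(n)] for row in X]
    best = [max(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) for j in range(n)]
    dist = lambda v, ref: math.sqrt(sum((v[j] - ref[j]) ** 2 for j in range(n)))
    # Relative closeness C_i = d_i^- / (d_i^+ + d_i^-).
    closeness = [dist(V[i], worst) / (dist(V[i], best) + dist(V[i], worst) + 1e-12)
                 for i in range(m)]
    return sorted(range(m), key=lambda i: closeness[i], reverse=True)

# Hypothetical candidate locations scored on (visit frequency, recency, time match).
scores = [
    [12, 0.9, 0.8],  # location 0
    [3, 0.2, 0.4],   # location 1
    [8, 0.7, 0.9],   # location 2
]
print(topsis_rank(scores))  # → [0, 2, 1]
```

In the paper's setting, a ranking like this would fix a principled order for the candidate locations before they are serialized into the prompt, mitigating the LLM's sensitivity to arbitrary element ordering.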
Pages: 17