Spatial-Temporal Cross-View Contrastive Pre-Training for Check-in Sequence Representation Learning

Cited: 0
Authors
Gong, Letian [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Li, Xiucheng [3 ]
Lin, Yan [1 ,2 ]
Zheng, Erwen [1 ,2 ]
Wang, Tianyi [1 ,2 ]
Zhou, Zeyu [1 ,2 ]
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong University, Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, Beijing 100044, China
[2] CAAC Key Laboratory of Intelligent Passenger Service of Civil Aviation, Beijing 101318, China
[3] Harbin Institute of Technology, School of Computer Science and Technology, Shenzhen 518055, China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Semantics; Trajectory; Predictive models; Uncertainty; Task analysis; Noise; Data mining; Check-in sequence; contrastive cluster; representation learning; spatial-temporal cross-view;
DOI
10.1109/TKDE.2024.3434565
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The rapid growth of location-based services (LBS) has yielded massive amounts of data on human mobility. Effectively extracting meaningful representations for user-generated check-in sequences is pivotal for facilitating various downstream services. However, the user-generated check-in data are simultaneously influenced by the surrounding objective circumstances and the user's subjective intention. Specifically, the temporal uncertainty and spatial diversity exhibited in check-in data make it difficult to capture the macroscopic spatial-temporal patterns of users and to understand the semantics of user mobility activities. Furthermore, the distinct characteristics of the temporal and spatial information in check-in sequences call for an effective fusion method to incorporate these two types of information. In this paper, we propose a novel Spatial-Temporal Cross-view Contrastive Representation (STCCR) framework for check-in sequence representation learning. Specifically, STCCR addresses the above challenges by employing self-supervision from "spatial topic" and "temporal intention" views, facilitating effective fusion of spatial and temporal information at the semantic level. Besides, STCCR leverages contrastive clustering to uncover users' shared spatial topics from diverse mobility activities, while employing angular momentum contrast to mitigate the impact of temporal uncertainty and noise. We extensively evaluate STCCR on three real-world datasets and demonstrate its superior performance across three downstream tasks.
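The cross-view contrastive pre-training described above can be illustrated with a minimal sketch: an InfoNCE-style objective that pulls together the spatial-view and temporal-view embeddings of the same check-in sequence and pushes apart those of different sequences in a batch. This is an illustrative assumption in PyTorch (the function name cross_view_infonce and the random encoder outputs are made up for the example), not the authors' STCCR implementation, which additionally employs contrastive clustering and angular momentum contrast.

```python
# Minimal sketch (not the paper's code): a symmetric cross-view InfoNCE loss between
# a "spatial" and a "temporal" encoding of the same check-in sequence. Positives are
# the two views of the same sequence; negatives are the other sequences in the batch.
import torch
import torch.nn.functional as F


def cross_view_infonce(z_spatial: torch.Tensor,
                       z_temporal: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired view embeddings of shape (batch, dim)."""
    z_s = F.normalize(z_spatial, dim=-1)    # unit-normalize spatial-view embeddings
    z_t = F.normalize(z_temporal, dim=-1)   # unit-normalize temporal-view embeddings
    logits = z_s @ z_t.t() / temperature    # pairwise cosine similarities, (batch, batch)
    targets = torch.arange(z_s.size(0), device=z_s.device)  # matching index = positive pair
    # Average the spatial->temporal and temporal->spatial directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy usage: random tensors stand in for the outputs of the two view encoders.
    z_s = torch.randn(32, 128)
    z_t = torch.randn(32, 128)
    print(cross_view_infonce(z_s, z_t).item())
```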
Pages: 9308-9321
Page count: 14
Related Papers (50 records in total)
  • [1] Gong, Letian; Lin, Youfang; Guo, Shengnan; Lin, Yan; Wang, Tianyi; Zheng, Erwen; Zhou, Zeyu; Wan, Huaiyu. Contrastive Pre-training with Adversarial Perturbations for Check-in Sequence Representation Learning. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol. 37, No. 4, 2023: 4276-4283.
  • [2] Wang, Lulu; Xu, Zengmin; Zhang, Xuelian; Meng, Ruxing; Lu, Tao. Cross-View Temporal Contrastive Learning for Self-Supervised Video Representation. Computer Engineering and Applications, 2024, 60(18): 158-166.
  • [3] Huang, Tianhuan; Ben, Xianye; Gong, Chen; Zhang, Baochang; Yan, Rui; Wu, Qiang. Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6967-6980.
  • [4] Guo, Zhihui; Sharma, Pramod; Martinez, Andy; Du, Liang; Abraham, Robin. Multilingual Molecular Representation Learning via Contrastive Pre-training. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers), 2022: 3441-3453.
  • [5] Chen, Qibin; Lacomis, Jeremy; Schwartz, Edward J.; Neubig, Graham; Vasilescu, Bogdan; Le Goues, Claire. VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning. 2022 ACM/IEEE 44th International Conference on Software Engineering (ICSE 2022), 2022: 2327-2339.
  • [6] Gao, Zan; Nie, Weizhi; Liu, Anan; Zhang, Hua. Evaluation of Local Spatial-Temporal Features for Cross-View Action Recognition. Neurocomputing, 2016, 173: 110-117.
  • [7] Liu, Yunwu; Zhang, Ruisheng; Yuan, Yongna; Ma, Jun; Li, Tongfeng; Yu, Zhixuan. A Multi-view Molecular Pre-training with Generative Contrastive Learning. Interdisciplinary Sciences-Computational Life Sciences, 2024, 16(3): 741-754.
  • [8] Shou, Yuntao; Lan, Haozhi; Cao, Xiangyong. Contrastive Graph Representation Learning with Adversarial Cross-View Reconstruction and Information Bottleneck. Neural Networks, 2025, 184.
  • [9] Meng, Xiangzhu; Wei, Wei; Liu, Qiang; Wang, Yu; Li, Min; Wang, Liang. CvFormer: Cross-view transFormers with pre-training for fMRI analysis of human brain. Pattern Recognition Letters, 2024, 186: 85-90.
  • [10] Wang, Haosen; Yan, Surong; Wu, Chunqi; Han, Long; Zhou, Linghong. Cross-view temporal graph contrastive learning for session-based recommendation. Knowledge-Based Systems, 2023, 264.