Spatial-Temporal Cross-View Contrastive Pre-Training for Check-in Sequence Representation Learning

Cited by: 0
Authors
Gong, Letian [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Li, Xiucheng [3 ]
Lin, Yan [1 ,2 ]
Zheng, Erwen [1 ,2 ]
Wang, Tianyi [1 ,2 ]
Zhou, Zeyu [1 ,2 ]
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Key Lab Big Data & Artificial Intelligence Transpo, Minist Educ, Beijing 100044, Peoples R China
[2] CAAC, Key Lab Intelligent Passenger Serv Civil Aviat, Beijing 101318, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Semantics; Trajectory; Predictive models; Uncertainty; Task analysis; Noise; Data mining; Check-in sequence; contrastive cluster; representation learning; spatial-temporal cross-view;
DOI
10.1109/TKDE.2024.3434565
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The rapid growth of location-based services (LBS) has yielded massive amounts of data on human mobility. Effectively extracting meaningful representations for user-generated check-in sequences is pivotal for facilitating various downstream services. However, the user-generated check-in data are simultaneously influenced by the surrounding objective circumstances and the user's subjective intention. Specifically, the temporal uncertainty and spatial diversity exhibited in check-in data make it difficult to capture the macroscopic spatial-temporal patterns of users and to understand the semantics of user mobility activities. Furthermore, the distinct characteristics of the temporal and spatial information in check-in sequences call for an effective fusion method to incorporate these two types of information. In this paper, we propose a novel Spatial-Temporal Cross-view Contrastive Representation (STCCR) framework for check-in sequence representation learning. Specifically, STCCR addresses the above challenges by employing self-supervision from "spatial topic" and "temporal intention" views, facilitating effective fusion of spatial and temporal information at the semantic level. Besides, STCCR leverages contrastive clustering to uncover users' shared spatial topics from diverse mobility activities, while employing angular momentum contrast to mitigate the impact of temporal uncertainty and noise. We extensively evaluate STCCR on three real-world datasets and demonstrate its superior performance across three downstream tasks.
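The abstract's mention of "angular momentum contrast" points to an InfoNCE-style contrastive objective, in which a query embedding is pulled toward a positive key (another view of the same check-in sequence) and pushed away from negative keys (e.g., a momentum-updated queue). As an illustrative sketch only, not the authors' STCCR implementation, the hypothetical helper `info_nce` below computes that loss for a single query:

```python
import numpy as np

def info_nce(query, pos_key, neg_keys, tau=0.07):
    """InfoNCE loss for one query (illustrative sketch, not STCCR itself).

    query:    (d,)   embedding of the anchor view
    pos_key:  (d,)   embedding of the positive view of the same sequence
    neg_keys: (n, d) negative embeddings, e.g. a momentum-encoder queue
    tau:      temperature controlling the sharpness of the softmax
    """
    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q = l2_normalize(query)
    k_pos = l2_normalize(pos_key)
    k_neg = l2_normalize(neg_keys)

    # Cosine similarities; the positive pair occupies index 0.
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / tau
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as target
```

The loss is near zero when the query matches its positive and is dissimilar from all negatives, and grows as the positive becomes indistinguishable from the negatives; a low temperature such as 0.07 (the common MoCo default) sharpens this contrast.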
Pages: 9308-9321
Page count: 14
Related papers
50 records total
  • [31] STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training
    Zhong, Weihong
    Zheng, Mao
    Tang, Duyu
    Luo, Xuan
    Gong, Heng
    Feng, Xiaocheng
    Qin, Bing
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 3715 - 3723
  • [32] A Hierarchical Spatial-Temporal Cross-Attention Scheme for Video Summarization Using Contrastive Learning
    Teng, Xiaoyu
    Gui, Xiaolin
    Xu, Pan
    Tong, Jianglei
    An, Jian
    Liu, Yang
    Jiang, Huilan
    SENSORS, 2022, 22 (21)
  • [33] Learning Depth Representation From RGB-D Videos by Time-Aware Contrastive Pre-Training
    He, Zongtao
    Wang, Liuyi
    Dang, Ronghao
    Li, Shu
    Yan, Qingqing
    Liu, Chengju
    Chen, Qijun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4143 - 4158
  • [34] BRep-BERT: Pre-training Boundary Representation BERT with Sub-graph Node Contrastive Learning
    Lou, Yunzhong
    Li, Xueyang
    Chen, Haotian
    Zhou, Xiangdong
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1657 - 1666
  • [35] MMCPP: A MULTI-MODAL CONTRASTIVE PRE-TRAINING MODEL FOR PLACE REPRESENTATION BASED ON THE SPATIO-TEMPORAL FRAMEWORK
    Chen, Y.
    Yu, X. S.
    Qin, K.
    GEOSPATIAL WEEK 2023, VOL. 10-1, 2023, : 303 - 310
  • [36] CASTLE: A CONTEXT-AWARE SPATIAL-TEMPORAL LOCATION EMBEDDING PRE-TRAINING MODEL FOR NEXT LOCATION PREDICTION
    Cheng, Junyi
    Huang, Jie
    Zhang, Xianfeng
    ISPRS GEOSPATIAL CONFERENCE 2022, JOINT 6TH SENSORS AND MODELS IN PHOTOGRAMMETRY AND REMOTE SENSING, SMPR/ 4TH GEOSPATIAL INFORMATION RESEARCH, GIRESEARCH CONFERENCES, VOL. 48-4, 2023, : 15 - 21
  • [38] JointGraph: joint pre-training framework for traffic forecasting with spatial-temporal gating diffusion graph attention network
    Kong, Xiangyuan
    Wei, Xiang
    Zhang, Jian
    Xing, Weiwei
    Lu, Wei
    APPLIED INTELLIGENCE, 2023, 53 (11) : 13723 - 13740
  • [39] A Two-View EEG Representation for Brain Cognition by Composite Temporal-Spatial Contrastive Learning
    Chen, Zheng
    Zhu, Lingwei
    Jia, Haohui
    Matsubara, Takashi
    PROCEEDINGS OF THE 2023 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2023, : 334 - 342
  • [40] Cross-view identification based on gait bioinformation using a dynamic densely connected spatial-temporal feature decoupling network
    Qiao, Shuo
    Tang, Chao
    Hu, Huosheng
    Wang, Wenjian
    Tong, Anyang
    Ren, Fang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 104