CSTRM: Contrastive Self-Supervised Trajectory Representation Model for trajectory similarity computation

Cited by: 13
Authors
Liu, Xiang [1 ]
Tan, Xiaoying [2 ]
Guo, Yuchun [1 ]
Chen, Yishuai [1 ]
Zhang, Zhe [3 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing, Peoples R China
[2] China Justice Big Data Inst CO Ltd, Beijing, Peoples R China
[3] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Trajectory representation; Trajectory similarity; Contrastive learning; Self-supervised learning; BERT; Distance
DOI
10.1016/j.comcom.2022.01.001
CLC number
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
The trajectory representation model has become a common method for computing trajectory similarity. Existing works use an encoder-decoder model trained to reconstruct the original trajectory from a noisy one. However, this reconstructive model ignores the point-level differences between the two trajectories and captures only trajectory-level features; as a result, it achieves low accuracy on ranking tasks. To solve this problem, we propose a novel contrastive model that learns trajectory representations by distinguishing both trajectory-level and point-level differences between trajectories. Furthermore, to address the lack of training data, we propose a self-supervised approach that augments training pairs of trajectories. Compared with existing models, our model achieves a significant performance improvement on various trajectory similarity tasks.
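The core idea of the abstract, creating positive pairs via self-supervised augmentation and training with a contrastive objective, can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: the mean/span "encoder" is a toy stand-in for CSTRM's BERT-style encoder, and the function names and augmentation parameters are hypothetical.

```python
import math
import random

random.seed(0)

def augment(traj, drop_prob=0.2, noise=0.001):
    """Hypothetical self-supervised augmentation: randomly drop points and
    jitter coordinates, yielding a positive pair for the original trajectory."""
    out = [(x + random.gauss(0, noise), y + random.gauss(0, noise))
           for (x, y) in traj if random.random() > drop_prob]
    return out if out else traj[:1]

def encode(traj):
    """Toy trajectory encoder (mean and span of coordinates), standing in
    for the paper's learned BERT-based encoder."""
    xs = [p[0] for p in traj]
    ys = [p[1] for p in traj]
    n = len(traj)
    return [sum(xs) / n, sum(ys) / n, max(xs) - min(xs), max(ys) - min(ys)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the augmented view toward its anchor,
    push embeddings of other trajectories away."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# Two distinct synthetic trajectories; each anchor is paired with its own
# augmented view (positive) while the other trajectory acts as a negative.
t1 = [(116.30 + 0.01 * i, 39.90 + 0.02 * i) for i in range(20)]
t2 = [(121.40 - 0.01 * i, 31.20 + 0.005 * i) for i in range(20)]
z1, z1_aug = encode(t1), encode(augment(t1))
z2 = encode(t2)
loss = contrastive_loss(z1, z1_aug, [z2])
print(f"contrastive loss: {loss:.4f}")
```

In a real implementation the loss would be backpropagated through the encoder so that augmented views of the same trajectory embed close together, which is what makes the learned representations usable for similarity ranking.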
Pages: 159-167
Page count: 9
Related papers
50 records
  • [41] Pose-Aware Self-supervised Learning with Viewpoint Trajectory Regularization
    Wang, Jiayun
    Chen, Yubei
    Yu, Stella X.
    COMPUTER VISION - ECCV 2024, PT XXI, 2025, 15079 : 19 - 37
  • [42] Adversarial Self-Supervised Contrastive Learning
    Kim, Minseon
    Tack, Jihoon
    Hwang, Sung Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [43] A Survey on Contrastive Self-Supervised Learning
    Jaiswal, Ashish
    Babu, Ashwin Ramesh
    Zadeh, Mohammad Zaki
    Banerjee, Debapriya
    Makedon, Fillia
    TECHNOLOGIES, 2021, 9 (01)
  • [44] Self-Supervised Learning: Generative or Contrastive
    Liu, Xiao
    Zhang, Fanjin
    Hou, Zhenyu
    Mian, Li
    Wang, Zhaoyu
    Zhang, Jing
    Tang, Jie
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 857 - 876
  • [45] Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning
    Sun, Jinghan
    Wei, Dong
    Ma, Kai
    Wang, Liansheng
    Zheng, Yefeng
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2307 - 2315
  • [46] IPCL: ITERATIVE PSEUDO-SUPERVISED CONTRASTIVE LEARNING TO IMPROVE SELF-SUPERVISED FEATURE REPRESENTATION
    Kumar, Sonal
    Phukan, Anirudh
    Sur, Arijit
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 6270 - 6274
  • [47] Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification
    Eldele, Emadeldeen
    Ragab, Mohamed
    Chen, Zhenghua
    Wu, Min
    Kwoh, Chee-Keong
    Li, Xiaoli
    Guan, Cuntai
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12) : 15604 - 15618
  • [48] Cut-in maneuver detection with self-supervised contrastive video representation learning
    Nalcakan, Yagiz
    Bastanlar, Yalin
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (06) : 2915 - 2923
  • [49] SELF-SUPERVISED CONTRASTIVE LEARNING FOR CROSS-DOMAIN HYPERSPECTRAL IMAGE REPRESENTATION
    Lee, Hyungtae
    Kwon, Heesung
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3239 - 3243
  • [50] Cross-View Temporal Contrastive Learning for Self-Supervised Video Representation
    Wang, Lulu
    Xu, Zengmin
    Zhang, Xuelian
    Meng, Ruxing
    Lu, Tao
    Computer Engineering and Applications, 2024, 60 (18) : 158 - 166