Temporal Contrastive Pre-Training for Sequential Recommendation

Cited by: 9
Authors
Tian, Changxin [1 ]
Lin, Zihan [1 ]
Bian, Shuqing [1 ]
Wang, Jinpeng [2 ]
Zhao, Wayne Xin [3 ]
Affiliations
[1] Renmin Univ China, Sch Informat, Beijing, Peoples R China
[2] Meituan Grp, Beijing, Peoples R China
[3] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Sequential Recommendation; Pre-training; Contrastive Learning;
DOI
10.1145/3511808.3557468
CLC Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Recently, pre-training based approaches have been proposed to leverage self-supervised signals for improving the performance of sequential recommendation. However, most existing pre-training recommender systems simply model a user's historical behavior as a sequence, without sufficient consideration of the temporal interaction patterns that are useful for modeling user behavior. To better model the temporal characteristics of user behavior sequences, we propose a Temporal Contrastive Pre-training method for Sequential Recommendation (TCPSRec for short). Based on temporal intervals, we divide the interaction sequence into more coherent subsequences and design temporal pre-training objectives accordingly. Specifically, TCPSRec models two important temporal properties of user behavior, i.e., invariance and periodicity. For invariance, we consider both global invariance and local invariance to capture long-term preference and short-term intention, respectively. For periodicity, TCPSRec models coarse-grained and fine-grained periodicity at the subsequence level, which is more stable than modeling periodicity at the item level. Integrating these strategies, we develop a unified contrastive learning framework with four specially designed pre-training objectives that fuse temporal information into sequential representations. We conduct extensive experiments on six real-world datasets, and the results demonstrate the effectiveness and generalization of our proposed method.
Pages: 1925-1934
Page count: 10
Related Papers
50 total
  • [21] Multilingual Molecular Representation Learning via Contrastive Pre-training
    Guo, Zhihui
    Sharma, Pramod
    Martinez, Andy
    Du, Liang
    Abraham, Robin
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 3441 - 3453
  • [22] Leveraging Time Irreversibility with Order-Contrastive Pre-training
    Agrawal, Monica
    Lang, Hunter
    Offin, Michael
    Gazit, Lior
    Sontag, David
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [23] Graph Neural Pre-training for Recommendation with Side Information
    Liu, Siwei
    Meng, Zaiqiao
    Macdonald, Craig
    Ounis, Iadh
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (03)
  • [24] Active Learning with Contrastive Pre-training for Facial Expression Recognition
    Roy, Shuvendu
    Etemad, Ali
    [J]. 2023 11TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, ACII, 2023,
  • [25] Contrastive Representations Pre-Training for Enhanced Discharge Summary BERT
    Won, DaeYeon
    Lee, YoungJun
    Choi, Ho-Jin
    Jung, YuChae
    [J]. 2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021), 2021, : 507 - 508
  • [26] Supervised Contrastive Pre-training for Mammographic Triage Screening Models
    Cao, Zhenjie
    Yang, Zhicheng
    Tang, Yuxing
    Zhang, Yanbo
    Han, Mei
    Xiao, Jing
    Ma, Jie
    Chang, Peng
    [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VII, 2021, 12907 : 129 - 139
  • [27] Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision
    Wan, Zhen
    Cheng, Fei
    Liu, Qianying
    Mao, Zhuoyuan
    Song, Haiyue
    Kurohashi, Sadao
    [J]. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2580 - 2585
  • [28] Vision-Language Pre-Training with Triple Contrastive Learning
    Yang, Jinyu
    Duan, Jiali
    Tran, Son
    Xu, Yi
    Chanda, Sampath
    Chen, Liqun
    Zeng, Belinda
    Chilimbi, Trishul
    Huang, Junzhou
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15650 - 15659
  • [29] Contrastive Language-Image Pre-Training with Knowledge Graphs
    Pan, Xuran
    Ye, Tianzhu
    Han, Dongchen
    Song, Shiji
    Huang, Gao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [30] MimCo: Masked Image Modeling Pre-training with Contrastive Teacher
    Zhou, Qiang
    Yu, Chaohui
    Luo, Hao
    Wang, Zhibin
    Li, Hao
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4487 - 4495