Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training

Cited by: 16
Authors
Cheng, Mingyue [1 ]
Yuan, Fajie [2 ]
Liu, Qi [1 ]
Xin, Xin [3 ]
Chen, Enhong [1 ]
Affiliations
[1] Univ Sci & Technol China, Anhui Prov Key Lab Big Data Anal & Applicat, Sch Data Sci, Hefei, Anhui, Peoples R China
[2] Westlake Univ, Hangzhou, Peoples R China
[3] Shandong Univ, Tai An, Shandong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Recommender systems; User representation; Contrastive learning; Sequential behaviors;
DOI
10.1109/ICDM51629.2021.00015
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning effective user representations from sequential user-item interactions is a fundamental problem for recommender systems (RS). Recently, several unsupervised methods focusing on user representation pre-training have been explored. In general, these methods follow a similar learning paradigm: first corrupting the behavior sequence, and then restoring the original input with some item-level prediction loss function. Despite their effectiveness, we argue that there exists an important gap between such item-level optimization objectives and user-level representations, and as a result, the learned user representations may only lead to sub-optimal generalization performance. In this paper, we propose a novel self-supervised pre-training framework, called CLUE, which stands for employing Contrastive Learning for modeling sequence-level User representation. The core idea of CLUE is to regard each user behavior sequence as a whole and then construct self-supervision signals by transforming the original user behaviors with data augmentations (DA). Specifically, we employ two Siamese (weight-sharing) networks to learn user-oriented representations, where the optimization goal is to maximize the similarity between the representations of the same user produced by the two encoders. More importantly, we carefully investigate the impact of view-generating strategies for user behavior inputs from a more comprehensive perspective, including processing sequential behaviors with explicit DA strategies and employing dropout as implicit DA. To verify the effectiveness of CLUE, we perform extensive experiments on several user-related tasks with different scales and characteristics. Our experimental results show that the user representations learned by CLUE surpass existing item-level baselines under several evaluation protocols.
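The sketch below illustrates the sequence-level contrastive idea described in the abstract: two augmented views of one user's behavior sequence are fed through the same (weight-sharing) encoder, and the loss pulls their sequence-level representations together. It is not the authors' implementation; the crop augmentation, GRU encoder, hyperparameters, and function names are illustrative assumptions, and explicit augmentation is combined with dropout acting as implicit augmentation.

# Minimal sketch (assumptions noted above), PyTorch.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def crop_augment(seq, keep_ratio=0.8):
    # Explicit DA: keep a random contiguous sub-sequence of the behavior sequence.
    keep = max(1, int(len(seq) * keep_ratio))
    start = random.randint(0, len(seq) - keep)
    return seq[start:start + keep]

class SeqEncoder(nn.Module):
    # Toy user encoder: item embedding + GRU; dropout doubles as implicit DA.
    def __init__(self, num_items, dim=64, dropout=0.2):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.drop = nn.Dropout(dropout)
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, item_ids):
        x = self.drop(self.emb(item_ids))   # (batch, length, dim)
        _, h = self.gru(x)                  # h: (1, batch, dim)
        return h.squeeze(0)                 # sequence-level user vector

def contrastive_step(encoder, batch_seqs, pad_to=50, device="cpu"):
    # One training step: encode two views of each user sequence and align them.
    def to_tensor(seqs):
        padded = [s[:pad_to] + [0] * (pad_to - len(s[:pad_to])) for s in seqs]
        return torch.tensor(padded, device=device)

    view1 = to_tensor([crop_augment(s) for s in batch_seqs])
    view2 = to_tensor([crop_augment(s) for s in batch_seqs])
    z1, z2 = encoder(view1), encoder(view2)  # same encoder twice = weight sharing
    # Maximize cosine similarity between the two views of the same user.
    # Note: a plain similarity loss like this can collapse; in practice an
    # asymmetric predictor with stop-gradient or in-batch negatives is typically added.
    return -F.cosine_similarity(z1, z2, dim=-1).mean()

# Usage with fake data (item ids start at 1; 0 is padding).
encoder = SeqEncoder(num_items=1000)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
batch = [[random.randint(1, 999) for _ in range(30)] for _ in range(8)]
loss = contrastive_step(encoder, batch)
opt.zero_grad(); loss.backward(); opt.step()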
Pages: 51-60
Page count: 10
Related papers
50 records in total
  • [1] Temporal Contrastive Pre-Training for Sequential Recommendation
    Tian, Changxin
    Lin, Zihan
    Bian, Shuqing
    Wang, Jinpeng
    Zhao, Wayne Xin
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1925 - 1934
  • [2] Multilingual Molecular Representation Learning via Contrastive Pre-training
    Guo, Zhihui
    Sharma, Pramod
    Martinez, Andy
    Du, Liang
    Abraham, Robin
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 3441 - 3453
  • [3] VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
    Chen, Qibin
    Lacomis, Jeremy
    Schwartz, Edward J.
    Neubig, Graham
    Vasilescu, Bogdan
    Le Goues, Claire
    [J]. 2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2022), 2022, : 2327 - 2339
  • [4] Robust Pre-Training by Adversarial Contrastive Learning
    Jiang, Ziyu
    Chen, Tianlong
    Chen, Ting
    Wang, Zhangyang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [5] Contrastive Representations Pre-Training for Enhanced Discharge Summary BERT
    Won, DaeYeon
    Lee, YoungJun
    Choi, Ho-Jin
    Jung, YuChae
    [J]. 2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021), 2021, : 507 - 508
  • [6] New Intent Discovery with Pre-training and Contrastive Learning
    Zhang, Yuwei
    Zhang, Haode
    Zhan, Li-Ming
    Wu, Xiao-Ming
    Lam, Albert Y. S.
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 256 - 269
  • [7] Image Difference Captioning with Pre-training and Contrastive Learning
    Yao, Linli
    Wang, Weiying
    Jin, Qin
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3108 - 3116
  • [8] UserBERT: Pre-training User Model with Contrastive Self-supervision
    Wu, Chuhan
    Wu, Fangzhao
    Qi, Tao
    Huang, Yongfeng
    [J]. PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 2087 - 2092
  • [9] GBERT: Pre-training User Representations for Ephemeral Group Recommendation
    Zhang, Song
    Zheng, Nan
    Wang, Danli
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2631 - 2639
  • [10] PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision
    Wu, Chuhan
    Wu, Fangzhao
    Qi, Tao
    Lian, Jianxun
    Huang, Yongfeng
    Xie, Xing
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1939 - 1944