Towards more effective encoders in pre-training for sequential recommendation

Cited by: 1
Authors
Sun, Ke [1 ]
Qian, Tieyun [1 ]
Zhong, Ming [1 ]
Li, Xuhui [2 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[2] Wuhan Univ, Sch Informat Management, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sequential recommendation; Self-supervised learning; Pre-training; Encoder; CONTEXT;
DOI
10.1007/s11280-023-01163-1
CLC number
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Pre-training has emerged as a new learning paradigm in natural language processing and computer vision. It has also been introduced into sequential recommendation in several seminal studies to alleviate the data sparsity issue. However, existing methods adopt the bidirectional transformer as the encoder, which suffers from two drawbacks. One is insufficient intention modeling: the transformer architecture is well suited to extracting distributed consumption intentions but cannot adequately capture users' concentrated and occasion consumption intentions. The other is information leakage caused by foreseeing future items in advance during the bidirectional encoding process. To address these problems, we propose to construct more effective encoders in pre-training for sequential recommendation. Specifically, we first decouple the original bidirectional process in the transformer structure into two unidirectional processes, which avoids the information leakage problem while still capturing the distributed consumption intention. We then employ locality-aware convolutional neural networks (CNNs) with a narrow receptive field to model concentrated consumption. We also introduce a random shuffle strategy that empowers the CNN to model occasion consumption. Experiments on five datasets demonstrate that our method substantially improves the performance of various types of downstream sequential recommendation models, and it also achieves overall better performance than state-of-the-art self-supervised pre-training methods.
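The abstract only describes the encoder design at a high level. The following Python (PyTorch) snippet is a minimal, hypothetical sketch of the three ingredients it mentions: two unidirectional attention passes in place of one bidirectional pass, a narrow-receptive-field CNN branch for concentrated consumption, and a random shuffle for occasion consumption. All module names, dimensions, and the additive fusion of the branches are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Hypothetical sketch: two unidirectional transformer passes plus a local CNN branch."""

    def __init__(self, num_items=1000, dim=64, heads=2, kernel_size=3):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)

        def make_layer():
            return nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                              dim_feedforward=4 * dim, batch_first=True)

        self.fwd = nn.TransformerEncoder(make_layer(), num_layers=1)  # left-to-right pass
        self.bwd = nn.TransformerEncoder(make_layer(), num_layers=1)  # right-to-left pass
        # Narrow receptive field: a small kernel only mixes a few neighbouring items,
        # giving the "concentrated consumption" view mentioned in the abstract.
        self.cnn = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)

    def forward(self, items, shuffle=False):
        # items: (batch, seq_len) item ids
        if shuffle:
            # Random shuffle (same permutation for the whole batch in this sketch)
            # exposes order-insensitive "occasion consumption" patterns to the CNN branch.
            items = items[:, torch.randperm(items.size(1))]
        x = self.emb(items)                                   # (B, L, D)
        L = x.size(1)
        # Standard causal mask: position i may only attend to positions <= i.
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h_fwd = self.fwd(x, mask=causal)                      # attends only to the past
        h_bwd = self.bwd(x.flip(1), mask=causal).flip(1)      # attends only to the future
        h_cnn = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # local (concentrated) view
        return h_fwd + h_bwd + h_cnn                          # naive additive fusion

if __name__ == "__main__":
    enc = SketchEncoder()
    seq = torch.randint(0, 1000, (4, 20))
    print(enc(seq).shape)                 # torch.Size([4, 20, 64])
    print(enc(seq, shuffle=True).shape)   # same shape, shuffled input order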
Pages: 2801 - 2832
Number of pages: 32
Related papers
50 records in total
  • [1] Towards more effective encoders in pre-training for sequential recommendation
    Sun, Ke
    Qian, Tieyun
    Zhong, Ming
    Li, Xuhui
    [J]. World Wide Web, 2023, 26 : 2801 - 2832
  • [2] Temporal Contrastive Pre-Training for Sequential Recommendation
    Tian, Changxin
    Lin, Zihan
    Bian, Shuqing
    Wang, Jinpeng
    Zhao, Wayne Xin
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1925 - 1934
  • [3] UPRec: User-aware Pre-training for sequential Recommendation
    Xiao, Chaojun
    Xie, Ruobing
    Yao, Yuan
    Liu, Zhiyuan
    Sun, Maosong
    Zhang, Xu
    Lin, Leyu
    [J]. AI OPEN, 2023, 4 : 137 - 144
  • [4] Beyond the Sequence: Statistics-Driven Pre-training for Stabilizing Sequential Recommendation Model
    Wang, Sirui
    Li, Peiguang
    Xian, Yunsen
    Zhang, Hongzhi
    [J]. PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 723 - 729
  • [5] Image classification with quantum pre-training and auto-encoders
    Piat, Sebastien
    Usher, Nairi
    Severini, Simone
    Herbster, Mark
    Mansi, Tommaso
    Mountney, Peter
    [J]. INTERNATIONAL JOURNAL OF QUANTUM INFORMATION, 2018, 16 (08)
  • [6] Augmenting Sequential Recommendation with Pseudo-Prior Items via Reversely Pre-training Transformer
    Liu, Zhiwei
    Fan, Ziwei
    Wang, Yu
    Yu, Philip S.
    [J]. SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1608 - 1612
  • [7] Pre-training of Graph Augmented Transformers for Medication Recommendation
    Shang, Junyuan
    Ma, Tengfei
    Xiao, Cao
    Sun, Jimeng
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5953 - 5959
  • [8] Multi-Modal Contrastive Pre-training for Recommendation
    Liu, Zhuang
    Ma, Yunpu
    Schubert, Matthias
    Ouyang, Yuanxin
    Xiong, Zhang
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022, : 99 - 108
  • [9] UNSUPERVISED PRE-TRAINING OF BIDIRECTIONAL SPEECH ENCODERS VIA MASKED RECONSTRUCTION
    Wang, Weiran
    Tang, Qingming
    Livescu, Karen
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6889 - 6893
  • [10] ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS
    Clark, Kevin
    Luong, Minh-Thang
    Le, Quoc V.
    Manning, Christopher D.
    [J]. INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS, ICLR 2020, 2020