Towards more effective encoders in pre-training for sequential recommendation

Cited by: 1
Authors
Sun, Ke [1 ]
Qian, Tieyun [1 ]
Zhong, Ming [1 ]
Li, Xuhui [2 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[2] Wuhan Univ, Sch Informat Management, Wuhan, Peoples R China
Source
WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS | 2023, Vol. 26, No. 5
Funding
National Natural Science Foundation of China;
Keywords
Sequential recommendation; Self-supervised learning; Pre-training; Encoder; CONTEXT;
DOI
10.1007/s11280-023-01163-1
CLC classification
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Pre-training has emerged as a new learning paradigm in natural language processing and computer vision. It has also been introduced into sequential recommendation in several seminal studies to alleviate the data sparsity issue. However, existing methods adopt the bidirectional transformer as the encoder, which suffers from two drawbacks. One is insufficient intention modeling: the transformer architecture is suitable for extracting distributed consumption intention but cannot well capture users' concentrated and occasion consumption intentions. The other is information leakage, caused by foreseeing future items in advance during the bidirectional encoding process. To address these problems, we propose to construct more effective encoders in pre-training for sequential recommendation. Specifically, we first decouple the original bidirectional process in the transformer structure into two unidirectional processes, which avoids the information leakage problem while still capturing the distributed consumption intention. We then employ locality-aware convolutional neural networks (CNNs) with a narrow receptive field to model concentrated consumption. We also introduce a random shuffle strategy to empower the CNN with the ability to model occasion consumption. Experiments on five datasets demonstrate that our method improves the performance of various types of downstream sequential recommendation models to a large extent, and that it also achieves overall better performance than state-of-the-art self-supervised pre-training methods.
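To make the abstract's architecture concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation; the class name DecoupledEncoder, the additive fusion of the three branches, and all hyperparameters such as kernel_size are assumptions) of the three ingredients it describes: two weight-independent unidirectional attention passes in place of one bidirectional pass, a narrow-kernel CNN for concentrated consumption, and a random shuffle feeding that CNN for occasion consumption.

```python
import torch
import torch.nn as nn


def causal_mask(seq_len: int) -> torch.Tensor:
    # Boolean attention mask: True entries are blocked, so position i
    # may only attend to positions j <= i (no foreseeing future items).
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)


class DecoupledEncoder(nn.Module):
    """Hypothetical sketch: two unidirectional attention passes plus a
    narrow-kernel CNN over a randomly shuffled copy of the sequence."""

    def __init__(self, d_model: int = 64, n_heads: int = 2, kernel_size: int = 3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # nn.TransformerEncoder deep-copies `layer`, so the two
        # directions do not share weights.
        self.left_to_right = nn.TransformerEncoder(layer, num_layers=1)
        self.right_to_left = nn.TransformerEncoder(layer, num_layers=1)
        # Narrow receptive field (kernel_size is an assumed hyperparameter)
        # for modeling concentrated consumption within a small local window.
        self.local_cnn = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2)

    def forward(self, x: torch.Tensor, shuffle: bool = True) -> torch.Tensor:
        # x: (batch, seq_len, d_model) sequence of item embeddings.
        mask = causal_mask(x.size(1)).to(x.device)
        fwd = self.left_to_right(x, mask=mask)                   # past -> future
        bwd = self.right_to_left(x.flip(1), mask=mask).flip(1)   # future -> past
        # Random shuffle exposes the narrow CNN to non-adjacent items that
        # co-occur in one sequence, a stand-in for occasion consumption.
        h = x[:, torch.randperm(x.size(1), device=x.device)] if shuffle else x
        local = self.local_cnn(h.transpose(1, 2)).transpose(1, 2)
        return fwd + bwd + local  # additive fusion is an assumption
```

On a toy input such as `torch.randn(8, 20, 64)` this returns a tensor of the same shape; the causal mask applied to the reversed sequence yields the right-to-left pass, so neither direction leaks future items the way a single bidirectional pass would.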
Pages: 2801-2832
Page count: 32
Related papers
50 records in total
  • [31] GENET: Unleashing the Power of Side Information for Recommendation via Hypergraph Pre-training
    Li, Yang
    Zhao, Qi'ao
    Lin, Chen
    Zhang, Zhenjie
    Zhu, Xiaomin
    Su, Jinsong
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852 : 343 - 352
  • [32] Curriculum Pre-training Heterogeneous Subgraph Transformer for Top-N Recommendation
    Wang, Hui
    Zhou, Kun
    Zhao, Xin
    Wang, Jingyuan
    Wen, Ji-Rong
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (01)
  • [33] Heterogeneous graph convolutional network pre-training as side information for improving recommendation
Do, Phuc
Pham, Phu
    Neural Computing and Applications, 2022, 34 : 15945 - 15961
  • [34] Domain Specific Pre-training Methods for Traditional Chinese Medicine Prescription Recommendation
    Li, Wei
    Yang, Zheng
    Shao, Yanqiu
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 125 - 135
  • [35] Improving News Recommendation via Bottlenecked Multi-task Pre-training
    Xiao, Xiongfeng
    Li, Qing
    Liu, Songlin
    Zhou, Kun
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 2082 - 2086
  • [36] Using contrastive language-image pre-training for Thai recipe recommendation
    Chuenbanluesuk, Thanatkorn
    Plodprong, Voramate
    Karoon, Weerasak
    Rueangsri, Kotchakorn
    Pojam, Suthasinee
    Siriborvornratanakul, Thitirat
    LANGUAGE RESOURCES AND EVALUATION, 2025,
  • [37] Effective and Efficient Training for Sequential Recommendation using Recency Sampling
    Petrov, Aleksandr
    Macdonald, Craig
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 81 - 91
  • [38] Pre-training Intent-Aware Encoders for Zero- and Few-Shot Intent Classification
    Sung, Mujeen
Gung, James
    Mansimov, Elman
    Pappas, Nikolaos
    Shu, Raphael
    Romeo, Salvatore
    Zhang, Yi
    Castelli, Vittorio
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 10433 - 10442
  • [39] Effective Pre-Training Method and Its Compositional Intelligence for Image Captioning
    Choi, Won-Hyuk
    Choi, Yong-Suk
    SENSORS, 2022, 22 (09)
  • [40] A Simple Yet Effective Layered Loss for Pre-Training of Network Embedding
    Chen, Junyang
    Li, Xueliang
    Li, Yuanman
    Li, Paul
    Wang, Mengzhu
    Zhang, Xiang
    Gong, Zhiguo
    Wu, Kaishun
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (03): : 1827 - 1837