Denoising Self-Attentive Sequential Recommendation

Cited by: 15
Authors
Chen, Huiyuan [1]
Lin, Yusan [1]
Pan, Menghai [1]
Wang, Lan [1]
Yeh, Chin-Chia Michael [1]
Li, Xiaoting [1]
Zheng, Yan [1]
Wang, Fei [1]
Yang, Hao [1]
Affiliations
[1] Visa Research, Foster City, CA 94404, USA
Keywords
Sequential Recommendation; Sparse Transformer; Noise Analysis; Differentiable Mask;
DOI
10.1145/3523227.3546788
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Transformer-based sequential recommenders are very powerful for capturing both short-term and long-term sequential item dependencies. This is mainly attributed to their self-attention networks, which exploit pairwise item-item interactions within the sequence. However, real-world item sequences are often noisy, which is particularly true for implicit feedback. For example, a large portion of clicks does not align well with user preferences, and many products end up with negative reviews or being returned. As such, the current user action depends only on a subset of items, not on the entire sequence. Many existing Transformer-based models use full attention distributions, which inevitably assign some credit to irrelevant items. This may lead to sub-optimal performance if the Transformers are not regularized properly. Here we propose the Rec-denoiser model for better training of self-attentive recommender systems. In Rec-denoiser, we aim to adaptively prune noisy items that are unrelated to the next-item prediction. To achieve this, we simply attach a trainable binary mask to each self-attention layer to prune noisy attentions, resulting in sparse and clean attention distributions. This largely purifies item-item dependencies and provides better model interpretability. In addition, the self-attention network is typically not Lipschitz continuous and is vulnerable to small perturbations. Jacobian regularization is further applied to the Transformer blocks to improve the robustness of Transformers against noisy sequences. Our Rec-denoiser is a general plugin that is compatible with many Transformers. Quantitative results on real-world datasets show that our Rec-denoiser outperforms state-of-the-art baselines.
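The abstract names two concrete mechanisms: a trainable binary mask attached to each self-attention layer to prune noisy item-item attentions, and Jacobian regularization of the Transformer blocks. Below is a minimal PyTorch sketch of both ideas, assuming a Binary-Concrete (Gumbel-sigmoid) relaxation with straight-through rounding for the differentiable mask and a Hutchinson-style estimator for the Jacobian norm; the class and function names are illustrative assumptions, and the paper's exact parameterization, multi-head and causal-masking details, and regularization weights are not reproduced here.

```python
# Illustrative sketch only; not the authors' reference implementation.
import torch
import torch.nn as nn


class MaskedSelfAttention(nn.Module):
    """Single-head self-attention whose attention matrix is pruned by a
    learnable, approximately binary mask over item-item pairs."""

    def __init__(self, d_model: int, max_len: int, tau: float = 1.0):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One trainable logit per (query position, key position) pair.
        self.mask_logits = nn.Parameter(torch.zeros(max_len, max_len))
        self.tau = tau
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model)
        n = x.size(1)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) * self.scale

        # Binary-Concrete (Gumbel-sigmoid) relaxation with straight-through
        # rounding: hard {0, 1} values forward, gradients through the soft sample.
        logits = self.mask_logits[:n, :n]
        u = torch.rand_like(logits).clamp_(1e-6, 1 - 1e-6)
        soft = torch.sigmoid((logits + torch.log(u) - torch.log1p(-u)) / self.tau)
        z = (soft > 0.5).float() + soft - soft.detach()

        # Drop pruned item-item interactions and renormalize each row.
        attn = torch.softmax(scores, dim=-1) * z
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        # soft.mean() can act as an L0-style sparsity penalty on the mask.
        return attn @ self.v(x), soft.mean()


def jacobian_penalty(inputs: torch.Tensor, outputs: torch.Tensor) -> torch.Tensor:
    """One-sample Hutchinson estimate of the squared Frobenius norm of the
    input-output Jacobian, a common way to implement Jacobian regularization."""
    v = torch.randn_like(outputs)
    v = v / v.norm(dim=-1, keepdim=True).clamp_min(1e-9)
    (jv,) = torch.autograd.grad(outputs, inputs, grad_outputs=v, create_graph=True)
    return jv.pow(2).sum(dim=-1).mean()
```

During training, the returned sparsity term and jacobian_penalty(x, out), computed with x.requires_grad_(True), would be added to the next-item prediction loss with small weights.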
Pages: 92-101
Page count: 10
Related Papers
50 records in total
  • [1] Self-Attentive Sequential Recommendation
    Kang, Wang-Cheng
    McAuley, Julian
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 197 - 206
  • [2] Lightweight Self-Attentive Sequential Recommendation
    Li, Yang
    Chen, Tong
    Zhang, Peng-Fei
    Yin, Hongzhi
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 967 - 977
  • [3] Graph convolutional network and self-attentive for sequential recommendation
    Guo, Kaifeng
    Zeng, Guolei
    [J]. PEERJ COMPUTER SCIENCE, 2023, 9
  • [4] Locker: Locally Constrained Self-Attentive Sequential Recommendation
    He, Zhankui
    Zhao, Handong
    Wang, Zhaowen
    Lin, Zhe
    Kale, Ajinkya
    McAuley, Julian
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3088 - 3092
  • [5] Sequential Recommendation with Self-Attentive Multi-Adversarial Network
    Ren, Ruiyang
    Liu, Zhaoyang
    Li, Yaliang
    Zhao, Wayne Xin
    Wang, Hui
    Ding, Bolin
    Wen, Ji-Rong
    [J]. PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 89 - 98
  • [6] SAIN: Self-Attentive Integration Network for Recommendation
    Yun, Seoungjun
    Kim, Raehyun
    Ko, Miyoung
    Kang, Jaewoo
    [J]. PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 1205 - 1208
  • [7] Sequential Self-Attentive Model for Knowledge Tracing
    Zhang, Xuelong
    Zhang, Juntao
    Lin, Nanzhou
    Yang, Xiandi
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 318 - 330
  • [8] Self-Attentive Recommendation for Multi-Source Review Package
    Chen, Pin-Yu
    Chen, Yu-Hsiu
    Shuai, Hong-Han
    Chang, Yung-Ju
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [9] Self-Attentive Associative Memory
    Le, Hung
    Tran, Truyen
    Venkatesh, Svetha
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [10] Self-Attentive Subset Learning over a Set-Based Preference in Recommendation
    Liu, Kunjia
    Chen, Yifan
    Tang, Jiuyang
    Huang, Hongbin
    Liu, Lihua
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (03):