Lightweight Self-Attentive Sequential Recommendation

Cited by: 49
Authors
Li, Yang [1 ]
Chen, Tong [1 ]
Zhang, Peng-Fei [1 ]
Yin, Hongzhi [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
Funding
Australian Research Council;
Keywords
sequential recommendation; lightweight recommender systems; self-attention mechanism;
DOI
10.1145/3459637.3482448
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Modern deep neural networks (DNNs) have greatly facilitated the development of sequential recommender systems by achieving state-of-the-art recommendation performance on various sequential recommendation tasks. Given a sequence of interacted items, existing DNN-based sequential recommenders commonly embed each item into a unique vector to support subsequent computations of the user interest. However, due to the potentially large number of items, the over-parameterised item embedding matrix of a sequential recommender has become a memory bottleneck for efficient deployment in resource-constrained environments, e.g., smartphones and other edge devices. Furthermore, we observe that the widely-used multi-head self-attention, though being effective in modelling sequential dependencies among items, heavily relies on redundant attention units to fully capture both global and local item-item transition patterns within a sequence. In this paper, we introduce a novel lightweight self-attentive network (LSAN) for sequential recommendation. To aggressively compress the original embedding matrix, LSAN leverages the notion of compositional embeddings, where each item embedding is composed by merging a group of selected base embedding vectors derived from substantially smaller embedding matrices. Meanwhile, to account for the intrinsic dynamics of each item, we further propose a temporal context-aware embedding composition scheme. Besides, we develop an innovative twin-attention network that alleviates the redundancy of the traditional multi-head self-attention while retaining full capacity for capturing long- and short-term (i.e., global and local) item dependencies. Comprehensive experiments demonstrate that LSAN significantly advances the accuracy and memory efficiency of existing sequential recommenders.
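The abstract's core compression idea, composing each item embedding from a few shared base vectors drawn from much smaller tables, can be illustrated with a minimal quotient-remainder sketch. All names, the two-table decomposition, and summation as the merge operator are illustrative assumptions for exposition, not the exact temporal context-aware composition scheme LSAN proposes.

```python
import numpy as np

# Hypothetical parameters: 10,000 items covered by two base tables of 100
# rows each (100 * 100 >= 10,000), so every item gets a unique composition.
num_items = 10_000
num_base = 100
dim = 8

rng = np.random.default_rng(0)
quotient_table = rng.standard_normal((num_base, dim))
remainder_table = rng.standard_normal((num_base, dim))

def compose_embedding(item_id: int) -> np.ndarray:
    """Compose an item embedding by merging two shared base vectors.

    Each item index is split into (quotient, remainder) parts; the pair
    selects one row from each small table, and the rows are merged
    (here by summation) into the item's embedding.
    """
    q, r = divmod(item_id, num_base)
    return quotient_table[q] + remainder_table[r]

# Memory footprint: 2 * 100 * dim parameters instead of 10,000 * dim,
# a 50x reduction, while still giving every item a distinct embedding.
emb = compose_embedding(1234)
```

The compression ratio grows with vocabulary size: the base tables scale roughly with the square root of the number of items, which is why the paper can "aggressively compress" the embedding matrix for edge deployment.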
Pages: 967-977
Page count: 11
Related Papers
50 records in total
  • [1] Self-Attentive Sequential Recommendation
    Kang, Wang-Cheng
    McAuley, Julian
    2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 197 - 206
  • [2] Denoising Self-Attentive Sequential Recommendation
    Chen, Huiyuan
    Lin, Yusan
    Pan, Menghai
    Wang, Lan
    Yeh, Chin-Chia Michael
    Li, Xiaoting
    Zheng, Yan
    Wang, Fei
    Yang, Hao
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 92 - 101
  • [3] Graph convolutional network and self-attentive for sequential recommendation
    Guo, Kaifeng
    Zeng, Guolei
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [4] Locker: Locally Constrained Self-Attentive Sequential Recommendation
    He, Zhankui
    Zhao, Handong
    Wang, Zhaowen
    Lin, Zhe
    Kale, Ajinkya
    McAuley, Julian
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3088 - 3092
  • [5] Sequential Recommendation with Self-Attentive Multi-Adversarial Network
    Ren, Ruiyang
    Liu, Zhaoyang
    Li, Yaliang
    Zhao, Wayne Xin
    Wang, Hui
    Ding, Bolin
    Wen, Ji-Rong
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 89 - 98
  • [6] SAIN: Self-Attentive Integration Network for Recommendation
    Yun, Seoungjun
    Kim, Raehyun
    Ko, Miyoung
    Kang, Jaewoo
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 1205 - 1208
  • [7] Sequential Self-Attentive Model for Knowledge Tracing
    Zhang, Xuelong
    Zhang, Juntao
    Lin, Nanzhou
    Yang, Xiandi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 318 - 330
  • [8] Self-Attentive Sequential Recommendations with Hyperbolic Representations
    Frolov, Evgeny
    Matveeva, Tatyana
    Mirvakhabova, Leyla
    Oseledets, Ivan
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 981 - 986
  • [9] Modeling Sequences as Star Graphs to Address Over-Smoothing in Self-Attentive Sequential Recommendation
    Peng, Bo
    Chen, Ziqi
    Parthasarathy, Srinivasan
    Ning, Xia
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (08)
  • [10] Self-Attentive Recommendation for Multi-Source Review Package
    Chen, Pin-Yu
    Chen, Yu-Hsiu
    Shuai, Hong-Han
    Chang, Yung-Ju
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,