Lightweight Self-Attentive Sequential Recommendation

Cited by: 49
Authors
Li, Yang [1 ]
Chen, Tong [1 ]
Zhang, Peng-Fei [1 ]
Yin, Hongzhi [1 ]
Institutions
[1] Univ Queensland, Brisbane, Qld, Australia
Funding
Australian Research Council;
Keywords
sequential recommendation; lightweight recommender systems; self-attention mechanism;
DOI
10.1145/3459637.3482448
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Modern deep neural networks (DNNs) have greatly facilitated the development of sequential recommender systems by achieving state-of-the-art recommendation performance on various sequential recommendation tasks. Given a sequence of interacted items, existing DNN-based sequential recommenders commonly embed each item into a unique vector to support subsequent computations of the user interest. However, due to the potentially large number of items, the over-parameterised item embedding matrix of a sequential recommender has become a memory bottleneck for efficient deployment in resource-constrained environments, e.g., smartphones and other edge devices. Furthermore, we observe that the widely-used multi-head self-attention, though being effective in modelling sequential dependencies among items, heavily relies on redundant attention units to fully capture both global and local item-item transition patterns within a sequence. In this paper, we introduce a novel lightweight self-attentive network (LSAN) for sequential recommendation. To aggressively compress the original embedding matrix, LSAN leverages the notion of compositional embeddings, where each item embedding is composed by merging a group of selected base embedding vectors derived from substantially smaller embedding matrices. Meanwhile, to account for the intrinsic dynamics of each item, we further propose a temporal context-aware embedding composition scheme. Besides, we develop an innovative twin-attention network that alleviates the redundancy of the traditional multi-head self-attention while retaining full capacity for capturing long- and short-term (i.e., global and local) item dependencies. Comprehensive experiments demonstrate that LSAN significantly advances the accuracy and memory efficiency of existing sequential recommenders.
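To make the abstract's memory argument concrete, below is a minimal PyTorch sketch of the general compositional-embedding idea, using the well-known quotient-remainder trick as a stand-in: two small base tables replace the full item-embedding matrix, and each item's vector is merged (here, by element-wise product) from one row of each table. All class and variable names are illustrative assumptions; LSAN's actual composition scheme, including its temporal context-aware variant and the twin-attention network, is specified in the paper and not reproduced here.

```python
import torch
import torch.nn as nn

class CompositionalEmbedding(nn.Module):
    # Illustrative quotient-remainder compositional embedding (NOT LSAN's
    # exact scheme): instead of one (num_items x dim) table, keep two small
    # base tables and compose each item vector from one row of each.
    def __init__(self, num_items: int, dim: int, num_buckets: int):
        super().__init__()
        self.num_buckets = num_buckets
        num_quotients = (num_items + num_buckets - 1) // num_buckets
        self.quotient = nn.Embedding(num_quotients, dim)
        self.remainder = nn.Embedding(num_buckets, dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # Each item id maps to a unique (quotient, remainder) pair of base
        # vectors, merged here by element-wise product.
        q = self.quotient(item_ids // self.num_buckets)
        r = self.remainder(item_ids % self.num_buckets)
        return q * r

# 1M items at dim=64: a full table holds 64M parameters, while the two
# base tables hold (1000 + 1000) * 64 = 128K, a ~500x reduction.
emb = CompositionalEmbedding(num_items=1_000_000, dim=64, num_buckets=1000)
item_seq = torch.tensor([[3, 17, 42, 999_999]])   # (batch, seq_len)
print(emb(item_seq).shape)                        # torch.Size([1, 4, 64])
```

With roughly sqrt(num_items) rows per base table, the embedding layer shrinks from O(|V|·d) to O(sqrt(|V|)·d) parameters while still giving every item a distinct composed vector, which is the kind of aggressive compression the abstract refers to.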
Pages: 967 - 977
Number of pages: 11
Related Papers
50 records in total
  • [21] Fast Self-Attentive Multimodal Retrieval
    Wehrmann, Jonatas
    Lopes, Mauricio A.
    More, Martin D.
    Barros, Rodrigo C.
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 1871 - 1878
  • [22] Self-attentive Biaffine Dependency Parsing
    Li, Ying
    Li, Zhenghua
    Zhang, Min
    Wang, Rui
    Li, Sheng
    Si, Luo
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5067 - 5073
  • [23] Point-of-Interest Recommendation: Exploiting Self-Attentive Autoencoders with Neighbor-Aware Influence
    Ma, Chen
    Zhang, Yingxue
    Wang, Qinglong
    Liu, Xue
    CIKM'18: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2018, : 697 - 706
  • [24] Self-Attentive Graph Convolution Network With Latent Group Mining and Collaborative Filtering for Personalized Recommendation
    Liu, Shenghao
    Wang, Bang
    Deng, Xianjun
    Yang, Laurence T.
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (05): 3212 - 3221
  • [25] Adversarial Learning to Compare: Self-Attentive Prospective Customer Recommendation in Location based Social Networks
    Li, Ruirui
    Wu, Xian
    Wang, Wei
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 349 - 357
  • [26] Self-Attentive Pooling for Efficient Deep Learning
    Chen, Fang
    Datta, Gourav
    Kundu, Souvik
    Beerel, Peter A.
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 3963 - 3972
  • [27] Slot Self-Attentive Dialogue State Tracking
    Ye, Fanghua
    Manotumruksa, Jarana
    Zhang, Qiang
    Li, Shenghui
    Yilmaz, Emine
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 1598 - 1608
  • [28] MSAM: Cross-Domain Recommendation Based on Multi-Layer Self-Attentive Mechanism
    Song, XiaoBing
    Bao, JiaYu
    Di, Yicheng
    Li, Yuan
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT IV, 2023, 14089 : 319 - 332
  • [29] TimeSAN: A Time-Modulated Self-Attentive Network for Next Point-of-Interest Recommendation
    He, Jiayuan
    Qi, Jianzhong
    Ramamohanarao, Kotagiri
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [30] An Attentive Inductive Bias for Sequential Recommendation beyond the Self-Attention
    Shin, Yehjin
    Choi, Jeongwhan
    Wi, Hyowon
    Park, Noseong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8984 - 8992