Lightweight Self-Attentive Sequential Recommendation

Cited by: 49
Authors:
Li, Yang [1 ]
Chen, Tong [1 ]
Zhang, Peng-Fei [1 ]
Yin, Hongzhi [1 ]
Affiliations:
[1] Univ Queensland, Brisbane, Qld, Australia
Funding:
Australian Research Council
Keywords:
sequential recommendation; lightweight recommender systems; self-attention mechanism
DOI:
10.1145/3459637.3482448
CLC number:
TP [Automation technology, computer technology]
Discipline code:
0812
Abstract:
Modern deep neural networks (DNNs) have greatly facilitated the development of sequential recommender systems by achieving state-of-the-art recommendation performance on various sequential recommendation tasks. Given a sequence of interacted items, existing DNN-based sequential recommenders commonly embed each item into a unique vector to support subsequent computation of user interest. However, due to the potentially large number of items, the over-parameterised item embedding matrix of a sequential recommender has become a memory bottleneck for efficient deployment in resource-constrained environments, e.g., smartphones and other edge devices. Furthermore, we observe that the widely-used multi-head self-attention, though being effective in modelling sequential dependencies among items, heavily relies on redundant attention units to fully capture both global and local item-item transition patterns within a sequence. In this paper, we introduce a novel lightweight self-attentive network (LSAN) for sequential recommendation. To aggressively compress the original embedding matrix, LSAN leverages the notion of compositional embeddings, where each item embedding is composed by merging a group of selected base embedding vectors derived from substantially smaller embedding matrices. Meanwhile, to account for the intrinsic dynamics of each item, we further propose a temporal context-aware embedding composition scheme. In addition, we develop an innovative twin-attention network that alleviates the redundancy of the traditional multi-head self-attention while retaining full capacity for capturing long- and short-term (i.e., global and local) item dependencies. Comprehensive experiments demonstrate that LSAN significantly advances the accuracy and memory efficiency of existing sequential recommenders.
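To make the compression idea concrete, below is a minimal PyTorch sketch of a quotient-remainder style compositional embedding, where each item vector is merged from two small base tables instead of being looked up in one full-size matrix. The class name, bucket count, and element-wise-product merge are illustrative assumptions for exposition, not LSAN's exact design; the paper additionally proposes a temporal context-aware composition scheme that this sketch does not model.

```python
# Minimal sketch of compositional (quotient-remainder style) item embeddings.
# Names and sizes are illustrative assumptions, not the paper's exact scheme.
import torch
import torch.nn as nn

class CompositionalEmbedding(nn.Module):
    """Compose each item embedding from two much smaller base tables.

    With num_items = N and num_buckets = B ~ sqrt(N), the two base tables
    hold roughly 2*B*d parameters instead of N*d for a full embedding matrix.
    """
    def __init__(self, num_items: int, dim: int, num_buckets: int):
        super().__init__()
        self.num_buckets = num_buckets
        # Base tables: indexed by the quotient and remainder of the item id.
        self.quotient = nn.Embedding(num_items // num_buckets + 1, dim)
        self.remainder = nn.Embedding(num_buckets, dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        q = self.quotient(item_ids // self.num_buckets)
        r = self.remainder(item_ids % self.num_buckets)
        # Merge the selected base vectors; the element-wise product is one
        # common choice, and the paper's merge may differ.
        return q * r

# Usage: a 1M-item catalogue with d=64 shrinks from ~64M embedding
# parameters to roughly (1001 + 1000) * 64 ≈ 128K in the base tables.
emb = CompositionalEmbedding(num_items=1_000_000, dim=64, num_buckets=1000)
vectors = emb(torch.tensor([[3, 17, 42]]))  # shape: (batch=1, seq_len=3, dim=64)
```

The element-wise product is only one possible merge operator; summation or concatenation followed by a projection are common alternatives, and LSAN's temporal context-aware scheme further adapts the composition to each item's intrinsic dynamics.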
Pages: 967-977
Page count: 11
Related papers
50 items in total
  • [41] Gated Self-attentive Encoder for Neural Machine Translation
    Wei, Xiangpeng
    Hu, Yue
    Xing, Luxi
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2019, PT I, 2019, 11775 : 655 - 666
  • [42] MeSHProbeNet: a self-attentive probe net for MeSH indexing
    Xun, Guangxu
    Jha, Kishlay
    Yuan, Ye
    Wang, Yaqing
    Zhang, Aidong
    BIOINFORMATICS, 2019, 35 (19) : 3794 - 3802
  • [43] SELF-ATTENTIVE SENTIMENTAL SENTENCE EMBEDDING FOR SENTIMENT ANALYSIS
    Lin, Sheng-Chieh
    Su, Wen-Yuh
    Chien, Po-Chuan
    Tsai, Ming-Feng
    Wang, Chuan-Ju
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1678 - 1682
  • [44] SAFE: Self-Attentive Function Embeddings for Binary Similarity
    Massarelli, Luca
    Di Luna, Giuseppe Antonio
    Petroni, Fabio
    Baldoni, Roberto
    Querzoni, Leonardo
    DETECTION OF INTRUSIONS AND MALWARE, AND VULNERABILITY ASSESSMENT (DIMVA 2019), 2019, 11543 : 309 - 329
  • [45] Swarm Enhanced Attentive Mechanism for Sequential Recommendation
    Geng, Shuang
    Liang, Gemin
    He, Yuqin
    Duan, Liezhen
    Xie, Haoran
    Song, Xi
    ADVANCES IN SWARM INTELLIGENCE, ICSI 2022, PT I, 2022, : 442 - 453
  • [46] Self-Attentive Models for Real-Time Malware Classification
    Lu, Qikai
    Zhang, Hongwen
    Kinawi, Husam
    Niu, Di
    IEEE ACCESS, 2022, 10 : 95970 - 95985
  • [47] Global-Locally Self-Attentive Dialogue State Tracker
    Zhong, Victor
    Xiong, Caiming
    Socher, Richard
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 1458 - 1467
  • [48] A Self-Attentive Model with Gate Mechanism for Spoken Language Understanding
    Li, Changliang
    Li, Liang
    Qi, Ji
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 3824 - 3833
  • [49] Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics
    Ng, Tiong-Sik
    Chai, Jacky Chen Long
    Low, Cheng-Yaw
    Beng Jin Teoh, Andrew
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3251 - 3264
  • [50] Self-Attentive Attributed Network Embedding Through Adversarial Learning
    Yu, Wenchao
    Cheng, Wei
    Aggarwal, Charu
    Zong, Bo
    Chen, Haifeng
    Wang, Wei
    2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, : 758 - 767