Disentangled Self-Supervision in Sequential Recommenders

Cited by: 129
Authors
Ma, Jianxin [1 ,2 ]
Zhou, Chang [2 ]
Yang, Hongxia [2 ]
Cui, Peng [1 ]
Wang, Xin [1 ,3 ]
Zhu, Wenwu [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
[3] Minist Educ, Key Lab Pervas Comp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
recommender systems; sequence model; disentangled representation learning; self-supervised learning; contrastive learning;
DOI
10.1145/3394486.3403091
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
To learn a sequential recommender, the existing methods typically adopt the sequence-to-item (seq2item) training strategy, which supervises a sequence model with a user's next behavior as the label and the user's past behaviors as the input. The seq2item strategy, however, is myopic and usually produces non-diverse recommendation lists. In this paper, we study the problem of mining extra signals for supervision by looking at the longer-term future. There exist two challenges: i) reconstructing a future sequence containing many behaviors is exponentially harder than reconstructing a single next behavior, which can lead to difficulty in convergence, and ii) the sequence of all future behaviors can involve many intentions, not all of which may be predictable from the sequence of earlier behaviors. To address these challenges, we propose a sequence-to-sequence (seq2seq) training strategy based on latent self-supervision and disentanglement. Specifically, we perform self-supervision in the latent space, i.e., reconstructing the representation of the future sequence as a whole, instead of reconstructing the items in the future sequence individually. We also disentangle the intentions behind any given sequence of behaviors and construct seq2seq training samples using only pairs of sub-sequences that involve a shared intention. Results on real-world benchmarks and synthetic data demonstrate the improvement brought by seq2seq training.
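The seq2seq training idea described in the abstract can be illustrated with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation: a shared encoder soft-assigns each behavior in a sequence to one of K latent intentions, and the loss contrasts the intention representations of a user's past sub-sequence against those of the same user's future sub-sequence, so that reconstruction happens in the latent space and only intention-aligned pairs act as positives. All names and hyperparameters (IntentionEncoder, num_intentions, seq2seq_contrastive_loss, the temperature) are assumptions made for illustration.

# Minimal sketch of latent, intention-disentangled seq2seq self-supervision.
# Hypothetical names and settings; a toy stand-in for the method in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentionEncoder(nn.Module):
    """Encodes an item sequence into K intention-specific latent vectors."""
    def __init__(self, num_items, dim, num_intentions):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.intention_emb = nn.Parameter(torch.randn(num_intentions, dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, seq):                       # seq: (batch, seq_len)
        h = self.proj(self.item_emb(seq))         # (batch, seq_len, dim)
        # Soft-assign each behavior to an intention prototype.
        att = torch.einsum('bld,kd->blk', h, self.intention_emb)
        att = att.masked_fill((seq == 0).unsqueeze(-1), -1e9).softmax(dim=1)
        # One latent vector per intention: attention-weighted behavior average.
        z = torch.einsum('blk,bld->bkd', att, h)  # (batch, K, dim)
        return F.normalize(z, dim=-1)

def seq2seq_contrastive_loss(z_past, z_future, temperature=0.1):
    """Latent self-supervision: the k-th intention of the past sequence should
    reconstruct the k-th intention of the same user's future sequence; all
    other (user, intention) pairs in the batch serve as negatives."""
    b, k, d = z_past.shape
    anchors = z_past.reshape(b * k, d)
    targets = z_future.reshape(b * k, d)
    logits = anchors @ targets.t() / temperature   # (b*k, b*k) similarity
    labels = torch.arange(b * k, device=logits.device)
    return F.cross_entropy(logits, labels)

# Toy usage: random past / future behavior sequences for 4 users.
if __name__ == "__main__":
    enc = IntentionEncoder(num_items=1000, dim=32, num_intentions=4)
    past = torch.randint(1, 1000, (4, 20))
    future = torch.randint(1, 1000, (4, 20))
    loss = seq2seq_contrastive_loss(enc(past), enc(future))
    loss.backward()
    print(float(loss))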
Pages: 483-491
Page count: 9
Related Papers
(50 in total)
  • [1] Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision
    Zhang, Zeyang
    Wang, Xin
    Zhang, Ziwei
    Shen, Guangyao
    Shen, Shiqi
    Zhu, Wenwu
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Disentangled Speech Embeddings Using Cross-Modal Self-Supervision
    Nagrani, Arsha
    Chung, Joon Son
    Albanie, Samuel
    Zisserman, Andrew
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6829 - 6833
  • [3] Learning dual disentangled representation with self-supervision for temporal knowledge graph reasoning
    Xiao, Yao
    Zhou, Guangyou
    Xie, Zhiwen
    Liu, Jin
    Huang, Jimmy Xiangji
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (03)
  • [4] The Feasibility of Self-Supervision
    Hudelson, Earl
    [J]. JOURNAL OF EDUCATIONAL RESEARCH, 1952, 45 (05): : 335 - 347
  • [5] Self-supervision, surveillance and transgression
    Simon, Gail
    [J]. JOURNAL OF FAMILY THERAPY, 2010, 32 (03) : 308 - 325
  • [6] Anomalies, representations, and self-supervision
    Dillon, Barry M.
    Favaro, Luigi
    Feiden, Friedrich
    Modak, Tanmoy
    Plehn, Tilman
    [J]. SCIPOST PHYSICS CORE, 2024, 7 (03):
  • [7] Symmetries, safety, and self-supervision
    Dillon, Barry M.
    Kasieczka, Gregor
    Olischlaeger, Hans
    Plehn, Tilman
    Sorrenson, Peter
    Vogel, Lorenz
    [J]. SCIPOST PHYSICS, 2022, 12 (06):
  • [8] Self-Supervision: Psychodynamic Strategies
    Brenner, Ira
    [J]. JOURNAL OF THE AMERICAN PSYCHOANALYTIC ASSOCIATION, 2024, 72 (02)
  • [9] Leaf vein segmentation with self-supervision
    Li, Lei
    Hu, Wenzheng
    Lu, Jiang
    Zhang, Changshui
    [J]. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2022, 203
  • [10] Link Prediction with Contextualized Self-Supervision
    Zhang, Daokun
    Yin, Jie
    Yu, Philip S.
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 7138 - 7151