Disentangled Self-Supervision in Sequential Recommenders

Cited by: 129

Authors
Ma, Jianxin [1 ,2 ]
Zhou, Chang [2 ]
Yang, Hongxia [2 ]
Cui, Peng [1 ]
Wang, Xin [1 ,3 ]
Zhu, Wenwu [1 ,3 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
[3] Minist Educ, Key Lab Pervas Comp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
recommender systems; sequence model; disentangled representation learning; self-supervised learning; contrastive learning;
DOI
10.1145/3394486.3403091
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
To learn a sequential recommender, the existing methods typically adopt the sequence-to-item (seq2item) training strategy, which supervises a sequence model with a user's next behavior as the label and the user's past behaviors as the input. The seq2item strategy, however, is myopic and usually produces non-diverse recommendation lists. In this paper, we study the problem of mining extra signals for supervision by looking at the longer-term future. There exist two challenges: i) reconstructing a future sequence containing many behaviors is exponentially harder than reconstructing a single next behavior, which can lead to difficulty in convergence, and ii) the sequence of all future behaviors can involve many intentions, not all of which may be predictable from the sequence of earlier behaviors. To address these challenges, we propose a sequence-to-sequence (seq2seq) training strategy based on latent self-supervision and disentanglement. Specifically, we perform self-supervision in the latent space, i.e., reconstructing the representation of the future sequence as a whole, instead of reconstructing the items in the future sequence individually. We also disentangle the intentions behind any given sequence of behaviors and construct seq2seq training samples using only pairs of sub-sequences that involve a shared intention. Results on real-world benchmarks and synthetic data demonstrate the improvement brought by seq2seq training.
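For concreteness, here is a minimal PyTorch sketch of the two ideas the abstract describes: latent-space self-supervision (contrasting whole-sequence representations rather than reconstructing future items one by one) and intention disentanglement (one representation per latent intention, with training pairs matched on a shared intention). The GRU backbone, the per-intention linear heads, the agreement-based choice of shared intention, and all names below are illustrative assumptions of this sketch, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Encode a behavior sequence into K intention-specific vectors."""
    def __init__(self, num_items, dim=64, num_intentions=4):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        # One projection head per latent intention (an assumed design).
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_intentions))

    def forward(self, seq):                            # seq: (B, T) item ids
        h, _ = self.gru(self.emb(seq))                 # (B, T, dim)
        pooled = h.mean(dim=1)                         # (B, dim) sequence summary
        # Return (B, K, dim): one unit-norm vector per intention.
        return torch.stack(
            [F.normalize(head(pooled), dim=-1) for head in self.heads], dim=1)

def seq2seq_contrastive_loss(past_z, future_z, tau=0.1):
    """Latent self-supervision: align the past half of a user's sequence
    with the future half on one shared intention, treating other users
    in the batch as negatives (no item-level reconstruction)."""
    B = past_z.size(0)
    # Proxy for "shared intention": the intention on which the two halves
    # agree most (an assumption of this sketch, not the paper's rule).
    k = (past_z * future_z).sum(-1).argmax(dim=1)      # (B,)
    p = past_z[torch.arange(B), k]                     # (B, dim)
    f = future_z[torch.arange(B), k]                   # (B, dim)
    logits = p @ f.t() / tau                           # (B, B) similarity matrix
    return F.cross_entropy(logits, torch.arange(B, device=logits.device))

# Toy usage: the earlier half of each sequence predicts the representation
# of the later half as a whole.
enc = DisentangledEncoder(num_items=1000)
seqs = torch.randint(1, 1000, (8, 20))                 # 8 users, 20 behaviors each
loss = seq2seq_contrastive_loss(enc(seqs[:, :10]), enc(seqs[:, 10:]))
loss.backward()

Contrasting whole-sequence representations in the latent space sidesteps the convergence difficulty the abstract notes for item-by-item reconstruction of a long future, and selecting a single shared intention per pair keeps unpredictable intentions out of the training signal.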
Pages: 483-491
Page count: 9
Related papers
50 items in total
  • [31] InsCLR: Improving Instance Retrieval with Self-Supervision
    Deng, Zelu
    Zhong, Yujie
    Guo, Sheng
    Huang, Weilin
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 516-524
  • [32] Effects of a method of self-supervision for counselor trainees
    Dennin, MK
    Ellis, MV
    JOURNAL OF COUNSELING PSYCHOLOGY, 2003, 50(1): 69-83
  • [33] Sense and Learn: Self-supervision for omnipresent sensors
    Saeed, Aaqib
    Ungureanu, Victor
    Gfeller, Beat
    MACHINE LEARNING WITH APPLICATIONS, 2021, 6
  • [34] Self-supervision, normativity and the free energy principle
    Hohwy, Jakob
    SYNTHESE, 2021, 199(1-2): 29-53
  • [35] Prototype Augmentation and Self-Supervision for Incremental Learning
    Zhu, Fei
    Zhang, Xu-Yao
    Wang, Chuang
    Yin, Fei
    Liu, Cheng-Lin
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2021): 5867-5876
  • [36] Self-Supervision Model for Maintenance of Helping Skills
    Meyer, RJ
    PROFESSIONAL PSYCHOLOGY, 1978, 9(1): 32-37
  • [37] Noise2Self: Blind Denoising by Self-Supervision
    Batson, Joshua
    Royer, Loic
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019
  • [38] Self-distillation and self-supervision for partial label learning
    Yu, Xiaotong
    Sun, Shiding
    Tian, Yingjie
    PATTERN RECOGNITION, 2024, 146
  • [39] Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling
    Jwalapuram, Prathyusha
    Joty, Shafiq
    Lin, Xiang
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 6044-6059
  • [40] Time Is MattEr: Temporal Self-supervision for Video Transformers
    Yun, Sukmin
    Kim, Jaehyung
    Han, Dongyoon
    Song, Hwanjun
    Ha, Jung-Woo
    Shin, Jinwoo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022