Improving Transformer-based Sequential Recommenders through Preference Editing

Cited by: 14
|
Authors
Ma, Muyang [1 ]
Ren, Pengjie [1 ]
Chen, Zhumin [1 ]
Ren, Zhaochun [1 ]
Liang, Huasheng [2 ]
Ma, Jun [1 ]
De Rijke, Maarten [3 ]
Affiliations
[1] Shandong Univ, Shandong, Peoples R China
[2] Tencent, WeChat, Beijing, Peoples R China
[3] Univ Amsterdam, Amsterdam, Netherlands
Keywords
Transformer-based sequential recommendation; self-supervised learning; user preference extraction and representation;
DOI
10.1145/3564282
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
One of the key challenges in sequential recommendation is how to extract and represent user preferences. Traditional methods rely solely on predicting the next item, but user behavior may be driven by complex preferences, so these methods cannot make accurate recommendations when the available information about user behavior is limited. To explore multiple user preferences, we propose a transformer-based sequential recommendation model, named MrTransformer (Multi-preference Transformer). For training MrTransformer, we devise a preference-editing-based self-supervised learning (SSL) mechanism that explores extra supervision signals based on relations with other sequences. The idea is to force the sequential recommendation model to discriminate between common and unique preferences in different sequences of interactions. By doing so, the sequential recommendation model is able to disentangle user preferences into multiple independent preference representations, thereby improving user preference extraction and representation. We carry out extensive experiments on five benchmark datasets. MrTransformer with preference editing significantly outperforms state-of-the-art sequential recommendation methods in terms of Recall, MRR, and NDCG. We find that long sequences of interactions, from which user preferences are harder to extract and represent, benefit most from preference editing.
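The core "preference editing" operation the abstract describes can be sketched in miniature: given two sequences' preference representations, identify which preference vectors are common to both and which are unique to each, then exchange the unique ones while keeping the common ones fixed. The sketch below is a highly simplified illustration of that idea, not the paper's implementation; the function names (`split_preferences`, `swap_unique`), the cosine-similarity test, and the `SIM_THRESHOLD` cutoff are all invented here for exposition.

```python
import math

# Assumed cutoff for treating two preference vectors as "common" (illustrative only).
SIM_THRESHOLD = 0.8

def cosine(a, b):
    """Cosine similarity between two preference vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def split_preferences(prefs_a, prefs_b):
    """Index each vector in prefs_a as common (shared with prefs_b) or unique."""
    common = [i for i, p in enumerate(prefs_a)
              if any(cosine(p, q) > SIM_THRESHOLD for q in prefs_b)]
    unique = [i for i in range(len(prefs_a)) if i not in common]
    return common, unique

def swap_unique(prefs_a, prefs_b):
    """Exchange the unique preference vectors of two sequences while keeping
    the common ones in place -- the 'editing' step the SSL signal builds on."""
    _, ua = split_preferences(prefs_a, prefs_b)
    _, ub = split_preferences(prefs_b, prefs_a)
    edited_a, edited_b = list(prefs_a), list(prefs_b)
    for i, j in zip(ua, ub):  # pair unique slots positionally (illustration only)
        edited_a[i], edited_b[j] = prefs_b[j], prefs_a[i]
    return edited_a, edited_b

# Two toy sequences, each with two 2-d preference vectors; the first vector of
# each points in nearly the same direction (common), the second does not (unique).
a = [(1.0, 0.0), (0.0, 1.0)]
b = [(1.0, 0.1), (-1.0, 0.5)]
edited_a, edited_b = swap_unique(a, b)
```

In the paper's framing, the model is then asked to reconstruct the original and edited sequences from these representations, which is the extra supervision signal that forces the disentanglement of common and unique preferences.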
Pages: 24
Related Papers
50 records
  • [41] Transformer-based Planning for Symbolic Regression
    Shojaee, Parshin
    Meidani, Kazem
    Farimani, Amir Barati
    Reddy, Chandan K.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [42] Ferrofluid transformer-based tilt sensor
    DeGraff, Allison
    Rashidi, Reza
    MICROSYSTEM TECHNOLOGIES, 2020, 26 : 2499 - 2506
  • [43] Transformer-Based Visual Segmentation: A Survey
    Li, Xiangtai
    Ding, Henghui
    Yuan, Haobo
    Zhang, Wenwei
    Pang, Jiangmiao
    Cheng, Guangliang
    Chen, Kai
    Liu, Ziwei
    Loy, Chen Change
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 10138 - 10163
  • [44] A Transformer-Based GAN for Anomaly Detection
    Yang, Caiyin
    Lan, Shiyong
    Huang, Weikang
    Wang, Wenwu
    Liu, Guoliang
    Yang, Hongyu
    Ma, Wei
    Li, Piaoyang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT II, 2022, 13530 : 345 - 357
  • [45] Transformer-based Arabic Dialect Identification
    Lin, Wanqiu
    Madhavi, Maulik
    Das, Rohan Kumar
    Li, Haizhou
    2020 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP 2020), 2020, : 192 - 196
  • [46] Transformer-based ripeness segmentation for tomatoes
    Shinoda, Risa
    Kataoka, Hirokatsu
    Hara, Kensho
    Noguchi, Ryozo
    SMART AGRICULTURAL TECHNOLOGY, 2023, 4
  • [47] A transformer-based network for speech recognition
    Tang, L.
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2023, 26 (02) : 531 - 539
  • [48] Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction
    Andoorveedu, Muralidhar
    Zhu, Zhanda
    Zheng, Bojian
    Pekhimenko, Gennady
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [49] EEG Classification with Transformer-Based Models
    Sun, Jiayao
    Xie, Jin
    Zhou, Huihui
    2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 92 - 93
  • [50] Transformer-based LLMs for Sensor Data
    Okita, Tsuyoshi
    Ukita, Kosuke
    Matsuishi, Koki
    Kagiyama, Masaharu
    Hirata, Kodai
    Miyazaki, Asahi
    ADJUNCT PROCEEDINGS OF THE 2023 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING & THE 2023 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTING, UBICOMP/ISWC 2023 ADJUNCT, 2023, : 499 - 504