GBERT: Pre-training User representations for Ephemeral Group Recommendation

Cited by: 5
|
Authors
Zhang, Song [1 ,2 ]
Zheng, Nan [1 ,2 ]
Wang, Danli [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, State Key Lab Management & Control Complex Syst, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Group Recommendation; Neural Networks; Pre-training; Representation Learning; User Preference;
DOI
10.1145/3511808.3557330
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Due to the prevalence of group activities on social networks, group recommendation has received increasing attention. Most group recommendation methods concentrate on persistent groups, while little attention has been paid to ephemeral groups. Ephemeral groups are formed ad hoc for one-time activities and therefore suffer severely from data sparsity and cold-start problems. To address these problems, we propose GBERT, a pre-training and fine-tuning method for improved group recommendation that employs BERT to enhance expressivity and capture the group-specific preferences of members. In the pre-training stage, GBERT employs three pre-training tasks to alleviate the data sparsity and cold-start problems and to learn better user representations. In the fine-tuning stage, an influence-based regulation objective is designed to regulate user and group representations by allocating weights according to each member's influence. Extensive experiments on three public datasets demonstrate GBERT's superiority over state-of-the-art methods for ephemeral group recommendation.
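The abstract's influence-based regulation idea, weighting each member's contribution to the group representation by an influence score, can be sketched roughly as follows. This is a minimal illustration only: the function name, the softmax form of the weighting, and the inputs are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def group_representation(member_embs, influence_scores):
    """Aggregate member embeddings into one group embedding.

    Each member is weighted by a softmax over its influence score,
    so more influential members contribute more to the group vector.
    (Illustrative sketch; GBERT's actual objective may differ.)
    """
    scores = np.asarray(influence_scores, dtype=float)
    # Numerically stable softmax over influence scores.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum of member embeddings -> group embedding.
    return weights @ np.asarray(member_embs, dtype=float)

# Example: three members with 4-dimensional embeddings,
# the first member being the most influential.
embs = np.eye(3, 4)
g = group_representation(embs, [2.0, 1.0, 0.5])
```

Because the example embeddings are one-hot, each coordinate of `g` directly exposes the softmax weight of the corresponding member, making the influence ordering easy to inspect.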
Pages: 2631-2639
Page count: 9
Related Papers (50 records)
  • [1] U-BERT: Pre-training User Representations for Improved Recommendation
    Qiu, Zhaopeng
    Wu, Xian
    Gao, Jingyue
    Fan, Wei
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 4320 - 4327
  • [2] UPRec: User-aware Pre-training for sequential Recommendation
    Xiao, Chaojun
    Xie, Ruobing
    Yao, Yuan
    Liu, Zhiyuan
    Sun, Maosong
    Zhang, Xu
    Lin, Leyu
    [J]. AI OPEN, 2023, 4 : 137 - 144
  • [3] Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training
    Cheng, Mingyue
    Yuan, Fajie
    Liu, Qi
    Xin, Xin
    Chen, Enhong
    [J]. 2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, : 51 - 60
  • [4] Temporal Contrastive Pre-Training for Sequential Recommendation
    Tian, Changxin
    Lin, Zihan
    Bian, Shuqing
    Wang, Jinpeng
    Zhao, Wayne Xin
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1925 - 1934
  • [5] Advances in Pre-Training Distributed Word Representations
    Mikolov, Tomas
    Grave, Edouard
    Bojanowski, Piotr
    Puhrsch, Christian
    Joulin, Armand
    [J]. PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 52 - 55
  • [6] Pre-training Mention Representations in Coreference Models
    Varkel, Yuval
    Globerson, Amir
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 8534 - 8540
  • [7] Multi-Modal Contrastive Pre-training for Recommendation
    Liu, Zhuang
    Ma, Yunpu
    Schubert, Matthias
    Ouyang, Yuanxin
    Xiong, Zhang
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022, : 99 - 108
  • [8] Pre-training of Graph Augmented Transformers for Medication Recommendation
    Shang, Junyuan
    Ma, Tengfei
    Xiao, Cao
    Sun, Jimeng
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5953 - 5959
  • [9] Graph Neural Pre-training for Recommendation with Side Information
    Liu, Siwei
    Meng, Zaiqiao
    Macdonald, Craig
    Ounis, Iadh
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (03)
  • [10] Pre-Training Audio Representations With Self-Supervision
    Tagliasacchi, Marco
    Gfeller, Beat
    Quitry, Felix de Chaumont
    Roblek, Dominik
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 600 - 604