PEPT: Expert Finding Meets Personalized Pre-Training

Cited by: 0
Authors
Peng, Qiyao [1 ]
Xu, Hongyan [2 ]
Wang, Yinghui [3 ]
Liu, Hongtao [4 ]
Huo, Cuiying [5 ]
Wang, Wenjun [5 ,6 ]
Affiliations
[1] Tianjin Univ, Sch New Media & Commun, Tianjin, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China
[3] Beijing Inst Control & Elect Technol, Key Lab Informat Syst & Technol, Beijing, Peoples R China
[4] Du Xiaoman Technol, Beijing, Peoples R China
[5] Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China
[6] Hainan Trop Ocean Univ, Yazhou Bay Innovat Inst, Hainan, Peoples R China
Keywords
Contrastive Learning;
DOI
10.1145/3690380
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Finding experts is essential in Community Question Answering (CQA) platforms, as it enables the effective routing of questions to users who can provide relevant answers. The key is to learn personalized expert representations from their historically answered questions and to match them accurately with target questions. Recently, Pre-Trained Language Models (PLMs) have attracted significant attention for their impressive ability to comprehend textual data and have been widely applied across various domains. Some preliminary works have explored the usability of PLMs in expert finding, such as pre-training expert or question representations. However, these models usually learn purely textual representations of experts from their histories, disregarding personalized and fine-grained expert modeling. To alleviate this, we present a personalized pre-training and fine-tuning paradigm that can effectively learn expert interest and expertise simultaneously. Specifically, in our pre-training framework, we integrate the historical answered questions of an expert with one target question and regard them as a candidate-aware, expert-level input unit. We then fuse expert IDs into the pre-training to guide the model toward personalized expert representations, which helps capture the unique characteristics and expertise of each individual expert. Additionally, in our pre-training tasks, we design (1) a question-level masked language model task to learn the relatedness between historical questions, enabling the modeling of question-level expert interest; and (2) a vote-oriented task to capture question-level expert expertise by predicting the vote score the expert would receive. Through this pre-training framework and these tasks, our approach holistically learns expert representations that cover both interests and expertise. Our method has been extensively evaluated on six real-world CQA datasets, and the experimental results consistently demonstrate its superiority over competitive baseline methods.
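The abstract above outlines the pre-training design but gives no implementation details, so the following is only a minimal sketch of how its pieces could fit together, assuming a standard Transformer encoder in PyTorch: a candidate-aware input unit (target question concatenated with the expert's historical questions), an expert-ID embedding fused into the encoding, and two heads for the question-level masked language model task and the vote-oriented task. All class names, parameters, and design choices here (PEPTSketch, d_model, fusion by addition, the toy loss) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch (not the authors' code): every name, dimension, and
# design choice here is an illustrative assumption based only on the abstract.
import torch
import torch.nn as nn

class PEPTSketch(nn.Module):
    """Encode a candidate-aware input unit (target question + the expert's
    historical questions), fuse an expert-ID embedding for personalization,
    and train two heads: question-level MLM and vote-score prediction."""

    def __init__(self, vocab_size=30522, num_experts=10000, d_model=256, n_layers=4):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.expert_embed = nn.Embedding(num_experts, d_model)  # personalization signal
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)  # recover masked question tokens
        self.vote_head = nn.Linear(d_model, 1)           # regress the expected vote score

    def forward(self, input_ids, expert_id):
        # input_ids: [batch, seq] = target question ++ historical answered questions,
        # with some question spans masked out for the question-level MLM task.
        h = self.tok_embed(input_ids) + self.expert_embed(expert_id).unsqueeze(1)
        h = self.encoder(h)
        return self.mlm_head(h), self.vote_head(h[:, 0])  # token logits, vote score


# Toy usage: a joint pre-training loss over the two tasks.
model = PEPTSketch()
input_ids = torch.randint(0, 30522, (2, 64))   # two fake candidate-aware input units
expert_id = torch.tensor([3, 7])                # IDs of the two experts
mlm_logits, vote_pred = model(input_ids, expert_id)
# In practice the MLM loss would cover only masked positions; this toy version
# scores all tokens, and the vote targets below are made up.
mlm_loss = nn.CrossEntropyLoss()(mlm_logits.view(-1, 30522), input_ids.view(-1))
vote_loss = nn.MSELoss()(vote_pred.squeeze(-1), torch.tensor([12.0, 3.0]))
loss = mlm_loss + vote_loss

The toy loss simply sums the two objectives; how the paper actually constructs masks, fuses the expert IDs, and weights the tasks is not specified in this record.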
Pages: 26
Related Papers
50 records in total
  • [1] Contrastive Pre-training for Personalized Expert Finding
    Peng, Qiyao
    Liu, Hongtao
    Lv, Zhepeng
Yang, Qing
    Wang, Wenjun
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 15797 - 15806
  • [2] FEDBERT: When Federated Learning Meets Pre-training
    Tian, Yuanyishu
    Wan, Yao
    Lyu, Lingjuan
    Yao, Dezhong
    Jin, Hai
    Sun, Lichao
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [3] General then Personal: Decoupling and Pre-training for Personalized Headline Generation
    Song, Yun-Zhu
    Chen, Yi-Syuan
    Wang, Lu
    Shuai, Hong-Han
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2023, 11 : 1588 - 1607
  • [4] SLIP: Self-supervision Meets Language-Image Pre-training
    Mu, Norman
    Kirillov, Alexander
    Wagner, David
    Xie, Saining
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 529 - 544
  • [5] Non-Contrastive Learning Meets Language-Image Pre-Training
    Zhou, Jinghao
    Dong, Li
    Gan, Zhe
    Wang, Lijuan
    Wei, Furu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 11028 - 11038
  • [6] Conditional autoencoder pre-training and optimization algorithms for personalized care of hemophiliac patients
    Buche, Cedric
    Lasson, Francois
    Kerdelo, Sebastien
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [7] Pre-training with non-expert human demonstration for deep reinforcement learning
    De La Cruz, Gabriel V., Jr.
    Du, Yunshu
    Taylor, Matthew E.
    KNOWLEDGE ENGINEERING REVIEW, 2019, 34
  • [8] Finding a good initial configuration of parameters for restricted Boltzmann machine pre-training
    Xie, Chunzhi
    Lv, Jiancheng
    Li, Xiaojie
    SOFT COMPUTING, 2017, 21 (21) : 6471 - 6479
  • [10] Multi-stage Pre-training over Simplified Multimodal Pre-training Models
    Liu, Tongtong
    Feng, Fangxiang
    Wang, Xiaojie
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2556 - 2565