CollRec: Pre-Trained Language Models and Knowledge Graphs Collaborate to Enhance Conversational Recommendation System

Cited by: 0
Authors
Liu, Shuang [1 ]
Ao, Zhizhuo [1 ]
Chen, Peng [2 ]
Kolmanic, Simon [3 ]
Affiliations
[1] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116600, Peoples R China
[2] Dalian Neusoft Univ Informat, Sch Comp & Software, Dalian 116023, Peoples R China
[3] Univ Maribor, Fac Elect Engn & Comp Sci, Maribor 2000, Slovenia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Knowledge graphs; Oral communication; Task analysis; Recommender systems; Motion pictures; Costs; Accuracy; Large language models; Conversational recommendation system; knowledge graph; large language model; end-to-end generation; fine-tuning; ReDial; WebNLG 2020 Challenge
DOI
10.1109/ACCESS.2024.3434720
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Existing conversational recommender systems (CRSs) incorporate external information from knowledge graphs in ways that lack generality. Their recommendation and generation modules are only loosely connected during training and shallowly integrated during inference, with a simple switching or copying mechanism used to merge recommended items into generated responses. These problems significantly degrade recommendation performance. To alleviate them, we propose CollRec, a novel unified framework in which pre-trained language models and knowledge graphs collaborate to enhance conversational recommendation. We fine-tune a pre-trained language model to efficiently extract knowledge graphs from conversational text descriptions, perform entity-based recommendation over the generated graph nodes and edges, and fine-tune a large-scale pre-trained language model to generate fluent and diverse responses. Experimental results on the WebNLG 2020 Challenge, ReDial, and Reddit-Movie datasets show that CollRec significantly outperforms state-of-the-art methods.
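The abstract outlines a three-stage pipeline: (1) a fine-tuned language model extracts a knowledge graph from the dialogue, (2) recommendations are made from the entities of the generated graph, and (3) a large pre-trained language model writes the response. The following is a minimal illustrative sketch of that data flow only, assuming Hugging Face transformers pipelines; the "t5-small" checkpoints stand in for the paper's fine-tuned models, and the triple serialization, prompt format, and overlap-counting recommender are simplified placeholders, not the authors' implementation.

```python
# Illustrative sketch of the CollRec pipeline described in the abstract
# (not the authors' code). "t5-small" is a stand-in for fine-tuned checkpoints.
from transformers import pipeline

# Stage 1: a fine-tuned seq2seq model turns dialogue text into knowledge-graph
# triples, serialized here as "head | relation | tail" lines (assumed format).
kg_extractor = pipeline("text2text-generation", model="t5-small")

# Stage 3: a pre-trained language model generates the final response.
responder = pipeline("text2text-generation", model="t5-small")


def recommend_from_triples(triples, item_catalog):
    """Stage 2 (toy version): rank catalog items by how many extracted graph
    entities they share. The paper's entity-based recommender is richer; this
    overlap count only illustrates how graph nodes feed the recommendation."""
    entities = {node for head, _, tail in triples for node in (head, tail)}
    scored = [(len(entities & set(attrs)), item) for item, attrs in item_catalog.items()]
    return [item for score, item in sorted(scored, reverse=True) if score > 0]


def collrec_turn(dialogue_history, item_catalog):
    # 1) Extract a knowledge graph from the conversational text.
    raw = kg_extractor(dialogue_history, max_new_tokens=128)[0]["generated_text"]
    triples = [tuple(part.strip() for part in line.split("|"))
               for line in raw.splitlines() if line.count("|") == 2]

    # 2) Recommend items based on the generated graph nodes and edges.
    recommendations = recommend_from_triples(triples, item_catalog)

    # 3) Generate a fluent response conditioned on the dialogue and the top items.
    prompt = (f"Dialogue: {dialogue_history}\n"
              f"Recommend: {', '.join(recommendations[:3])}\nResponse:")
    response = responder(prompt, max_new_tokens=64)[0]["generated_text"]
    return response, recommendations
```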
Pages: 104663-104675
Number of pages: 13
Related Papers
50 items in total
  • [41] Zhu, Meilu; Yang, Qiushi; Gao, Zhifan; Liu, Jun; Yuan, Yixuan. Stealing Knowledge from Pre-trained Language Models for Federated Classifier Debiasing. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT X, 2024, 15010: 685-695.
  • [42] Kazemnejad, Amirhossein; Rezagholizadeh, Mehdi; Parthasarathi, Prasanna; Chandar, Sarath. Measuring the Knowledge Acquisition-Utilization Gap in Pre-trained Language Models. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023: 4305-4319.
  • [43] Duan, Jiajia; Zhao, Hui; Zhou, Qian; Qiu, Meikang; Liu, Meiqin. A Study of Pre-trained Language Models in Natural Language Processing. 2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020: 116-121.
  • [44] Xu, Weiwen; Li, Xin; Zhang, Wenxuan; Zhou, Meng; Lam, Wai; Si, Luo; Bing, Lidong. From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [45] Qiu, XiPeng; Sun, TianXiang; Xu, YiGe; Shao, YunFan; Dai, Ning; Huang, XuanJing. Pre-trained models for natural language processing: A survey. SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2020, 63(10): 1872-1897.
  • [46] Durrani, Nadir; Sajjad, Hassan; Dalvi, Fahim; Belinkov, Yonatan. Analyzing Individual Neurons in Pre-trained Language Models. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020: 4865-4880.
  • [47] Casas, Jacky; Torche, Samuel; Daher, Karl; Mugellini, Elena; Abou Khaled, Omar. Emotional Paraphrasing Using Pre-trained Language Models. 2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS (ACIIW), 2021.
  • [48] Westhelle, Matheus; Bencke, Luciana; Moreira, Viviane P. Impact of Morphological Segmentation on Pre-trained Language Models. INTELLIGENT SYSTEMS, PT II, 2022, 13654: 402-416.
  • [49] Yao, Yuan; Dong, Bowen; Zhang, Ao; Zhang, Zhengyan; Xie, Ruobing; Liu, Zhiyuan; Lin, Leyu; Sun, Maosong; Wang, Jianyong. Prompt Tuning for Discriminative Pre-trained Language Models. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022: 3468-3473.
  • [50] Chen, Yangyi; Yuan, Lifan; Cui, Ganqu; Liu, Zhiyuan; Ji, Heng. A Close Look into the Calibration of Pre-trained Language Models. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023: 1343-1367.