Unified Knowledge Prompt Pre-training for Customer Service Dialogues

Cited by: 1
Authors
He, Keqing [1]
Wang, Jingang [1]
Sun, Chaobo [1]
Wu, Wei [1]
Affiliations
[1] Meituan Group, Beijing, People's Republic of China
Keywords
dialogue pre-training; knowledge; prompt
DOI
10.1145/3511808.3557718
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Dialogue bots have been widely applied in customer service scenarios to provide a timely and user-friendly experience. These bots must classify the appropriate domain of a dialogue, understand users' intents, and generate proper responses. Existing dialogue pre-training models are designed for only a few dialogue tasks and ignore the weakly-supervised expert knowledge available in customer service dialogues. In this paper, we propose a novel unified knowledge prompt pre-training framework, UFA (Unified Model For All Tasks), for customer service dialogues. We formulate all customer service dialogue tasks as a unified text-to-text generation task and introduce a knowledge-driven prompt strategy to jointly learn from a mixture of distinct dialogue tasks. We pre-train UFA on a large-scale Chinese customer service corpus collected from practical scenarios and obtain significant improvements on both natural language understanding (NLU) and natural language generation (NLG) benchmarks.
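The abstract casts domain classification, intent detection, and response generation as a single text-to-text generation problem conditioned on knowledge-driven prompts. The minimal sketch below only illustrates that general formulation: the mT5 checkpoint, the prompt templates, and the run_task helper are stand-in assumptions for illustration, not the paper's released UFA model or its actual prompts.

```python
# A minimal sketch (not the authors' released code) of a unified text-to-text
# formulation with task- and knowledge-specific prompts, as described in the abstract.
# Assumptions: a public mT5 checkpoint stands in for the pre-trained model, and the
# prompt templates are illustrative placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/mt5-small"  # placeholder checkpoint; UFA itself is not assumed public
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Each task is rendered as "knowledge prompt + task prompt + dialogue text", so
# NLU tasks (domain, intent) and NLG (response generation) share one
# encoder-decoder model and one generation objective.
PROMPTS = {
    "domain": "[knowledge: {knowledge}] classify domain: {dialogue}",
    "intent": "[knowledge: {knowledge}] detect intent: {dialogue}",
    "response": "[knowledge: {knowledge}] generate response: {dialogue}",
}

def run_task(task: str, dialogue: str, knowledge: str = "") -> str:
    """Format the dialogue with a task-specific prompt and decode the generated text."""
    text = PROMPTS[task].format(knowledge=knowledge, dialogue=dialogue)
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The same interface serves both NLU and NLG, e.g.:
print(run_task("intent", "User: my food order has not arrived yet."))
```

Under this kind of formulation, adding a new dialogue task only requires a new prompt template rather than a new task-specific head, which is what allows joint learning over a mixture of distinct tasks.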
Pages: 4009-4013
Number of pages: 5