Neural response generation for task completion using conversational knowledge graph

Cited by: 1
Authors
Ahmad, Zishan [1 ]
Ekbal, Asif [1 ]
Sengupta, Shubhashis [2 ]
Bhattacharyya, Pushpak [3 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, AI NLP ML Lab, Patna, Bihar, India
[2] Accenture, Accenture Technol Labs, Bangalore, Karnataka, India
[3] Indian Inst Technol, Dept Comp Sci & Technol, Mumbai, Maharashtra, India
Source
PLOS ONE | 2023, Vol. 18, No. 02
DOI
10.1371/journal.pone.0269856
CLC classification: O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject classification: 07; 0710; 09
Abstract
Building an effective dialogue generation system for task completion is challenging. The task requires the response generation system to produce responses that are consistent with the intent and slot values, are diverse, and can handle multiple domains. The responses must also be contextually relevant to the previous utterances in the conversation. In this paper, we build six different models with Bi-directional Long Short-Term Memory (Bi-LSTM) and Bidirectional Encoder Representations from Transformers (BERT) based encoders. To generate the correct slot values, we implement a copy mechanism on the decoder side. To capture the conversation context and the current state of the conversation, we introduce a simple heuristic for building a conversational knowledge graph. Using this novel algorithm, we are able to capture the important aspects of a conversation. This conversational knowledge graph is then used by our response generation model to generate more relevant and consistent responses. With this knowledge graph, we do not need the entire utterance history; only the last utterance is needed to capture the conversational context. We conduct experiments showing the effectiveness of the knowledge graph in capturing the context and generating good responses. We compare these results against hierarchical encoder-decoder models and show that using triples from the conversational knowledge graph is an effective way to capture the context and the user's requirements. With this knowledge graph, we show an average gain of 0.75 BLEU score across the different models. Similar results also hold across the different manual evaluation metrics.
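The abstract describes maintaining conversation state as triples in a knowledge graph so that only the last utterance, plus the graph, is needed as context. A minimal sketch of that idea is shown below; the function name, relation labels, and overwrite-on-update behavior are illustrative assumptions, not the authors' actual heuristic.

```python
def update_knowledge_graph(kg, speaker, intent, slots):
    """Record one dialogue turn as (head, relation, tail) triples.

    kg      -- set of (head, relation, tail) triples
    speaker -- e.g. "user" or "system"
    intent  -- dialogue-act label for the turn (e.g. "inform")
    slots   -- dict mapping slot names to their values
    """
    kg.add((speaker, "intent", intent))
    for slot, value in slots.items():
        # A later turn overwrites the earlier value for the same slot,
        # so the graph always reflects the current conversation state.
        stale = {t for t in kg if t[0] == speaker and t[1] == slot}
        kg.difference_update(stale)
        kg.add((speaker, slot, value))
    return kg


kg = set()
update_knowledge_graph(kg, "user", "inform", {"cuisine": "italian"})
update_knowledge_graph(kg, "user", "inform", {"cuisine": "thai", "area": "centre"})
print(sorted(kg))
# The user changed their mind: "cuisine" now maps to "thai", not "italian".
```

A response generator can then condition on these triples instead of the full utterance history, which is the context-compression effect the paper reports.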
Pages: 18
Related papers
50 items in total
  • [41] A comprehensive overview of knowledge graph completion
    Shen, Tong
    Zhang, Fu
    Cheng, Jingwei
    KNOWLEDGE-BASED SYSTEMS, 2022, 255
  • [42] A survey of inductive knowledge graph completion
    Liang, Xinyu
    Si, Guannan
    Li, Jianxin
    Tian, Pengxin
    An, Zhaoliang
    Zhou, Fengyu
NEURAL COMPUTING & APPLICATIONS, 2024, 36 (08) : 3837 - 3858
  • [43] Overview of Knowledge Graph Completion Methods
    Zhang, Wenhao
    Xu, Zhenshun
    Liu, Na
    Wang, Zhenbiao
    Tang, Zengjin
    Wang, Zheng'an
COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (12) : 61 - 73
  • [45] Temporal Knowledge Graph Completion: A Survey
    Cai, Borui
    Xiang, Yong
    Gao, Longxiang
    Zhang, He
    Li, Yunfeng
    Li, Jianxin
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 6545 - 6553
  • [46] Knowledge Graph Completion Technology Research
    Shi, Yongqi
    Lu, Yiwei
    Yang, Ruopeng
    Yang, Yuantao
    2022 INTERNATIONAL CONFERENCE ON COMPUTING, ROBOTICS AND SYSTEM SCIENCES, ICRSS, 2022, : 68 - 73
  • [47] Hyperbolic Knowledge Graph Embeddings for Knowledge Base Completion
    Kolyvakis, Prodromos
    Kalousis, Alexandros
    Kiritsis, Dimitris
    SEMANTIC WEB (ESWC 2020), 2020, 12123 : 199 - 214
  • [48] CAFE: Knowledge graph completion using neighborhood-aware features
    Borrego, Agustin
    Ayala, Daniel
    Hernandez, Inma
    Rivero, Carlos R.
    Ruiz, David
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2021, 103
  • [49] Improving Knowledge Graph Completion Using Soft Rules and Adversarial Learning
    Tang, Caifang
    Rao, Yuan
    Yu, Hualei
    Sun, Ling
    Cheng, Jiamin
    Wang, Yutian
    CHINESE JOURNAL OF ELECTRONICS, 2021, 30 (04) : 623 - 633
  • [50] GCATRL: Using deep reinforcement learning to optimize knowledge graph completion
    Zhang, Liping
    Xu, Minming
    Li, Song
     KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2025, 19 (03) : 790 - 810