Knowledge-Grounded Response Generation with Deep Attentional Latent-Variable Model

Cited by: 15
Authors
Ye, Hao-Tong [1 ]
Lo, Kai-Ling [1 ]
Su, Shang-Yu [1 ]
Chen, Yun-Nung [1 ]
Affiliations
[1] Natl Taiwan Univ, 1,Sec 4,Roosevelt Rd, Taipei 10617, Taiwan
Source
COMPUTER SPEECH AND LANGUAGE | 2020, Vol. 63
Keywords
Knowledge-grounded; Response generation; Variational model;
DOI
10.1016/j.csl.2020.101069
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
End-to-end dialogue generation has achieved promising results without using handcrafted features or attributes specific to each task and corpus. However, a major drawback of such approaches is that they are unable to generate informative utterances, which limits their use in real-world conversational applications. To tackle this issue, this paper generates diverse and informative responses with a variational generation model that contains a joint attention mechanism conditioned on information from both dialogue contexts and external knowledge. Experiments on the benchmark DSTC7 data show that the proposed method generates responses with more grounded knowledge and improves the diversity of the generated language. (c) 2020 Elsevier Ltd. All rights reserved.
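The sketch below illustrates the general shape of such a model: a conditional variational response generator whose decoder attends jointly over dialogue-context states and external-knowledge states, with the latent variable regularized by a KL term. This is a minimal illustrative sketch only; the class name, layer sizes, dot-product attention, and GRU components are assumptions for exposition and are not taken from the paper.

```python
# Minimal sketch of a knowledge-grounded variational generator with joint attention.
# Assumption: all hyperparameters and the exact attention/fusion details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttentionCVAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.ctx_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # dialogue context encoder
        self.kb_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)    # external knowledge encoder
        self.post_net = nn.Linear(3 * hid_dim, 2 * z_dim)   # posterior q(z | ctx, kb, response)
        self.prior_net = nn.Linear(2 * hid_dim, 2 * z_dim)  # prior p(z | ctx, kb)
        self.dec_init = nn.Linear(2 * hid_dim + z_dim, hid_dim)
        self.dec_cell = nn.GRUCell(emb_dim + 2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def _attend(self, query, keys):
        # Dot-product attention: query (B, H), keys (B, T, H) -> attended vector (B, H).
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)

    def forward(self, context, knowledge, response):
        ctx_states, ctx_last = self.ctx_enc(self.embed(context))
        kb_states, kb_last = self.kb_enc(self.embed(knowledge))
        _, resp_last = self.ctx_enc(self.embed(response))
        cond = torch.cat([ctx_last[-1], kb_last[-1]], dim=-1)

        # Sample z from the posterior during training (reparameterization trick).
        post_mu, post_logvar = self.post_net(
            torch.cat([cond, resp_last[-1]], dim=-1)).chunk(2, dim=-1)
        prior_mu, prior_logvar = self.prior_net(cond).chunk(2, dim=-1)
        z = post_mu + torch.randn_like(post_mu) * (0.5 * post_logvar).exp()

        # Decode with teacher forcing; at each step attend jointly over both sources.
        h = torch.tanh(self.dec_init(torch.cat([cond, z], dim=-1)))
        logits = []
        for t in range(response.size(1)):
            attn = torch.cat([self._attend(h, ctx_states),
                              self._attend(h, kb_states)], dim=-1)
            h = self.dec_cell(torch.cat([self.embed(response[:, t]), attn], dim=-1), h)
            logits.append(self.out(h))
        logits = torch.stack(logits, dim=1)

        # KL(q || p) between diagonal Gaussians; added to the reconstruction loss (ELBO).
        kl = 0.5 * (prior_logvar - post_logvar
                    + (post_logvar.exp() + (post_mu - prior_mu) ** 2) / prior_logvar.exp()
                    - 1).sum(-1).mean()
        return logits, kl
```

In this kind of setup, generation at test time draws z from the prior network instead of the posterior, so diversity comes from the latent variable while informativeness comes from the knowledge-side attention.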
Pages: 9
Related Papers
(50 records in total; 10 shown)
  • [1] CoLV: A Collaborative Latent Variable Model for Knowledge-Grounded Dialogue Generation
    Zhan, Haolan
    Shen, Lei
    Chen, Hongshen
    Zhang, Hainan
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 2250 - 2261
  • [2] Approximation of Response Knowledge Retrieval in Knowledge-grounded Dialogue Generation
    Zheng, Wen
    Milic-Frayling, Natasa
    Zhou, Ke
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020,
  • [3] Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation
    Li, Yu
    Peng, Baolin
    Shen, Yelong
    Mao, Yi
    Liden, Lars
    Yu, Zhou
    Gao, Jianfeng
    [J]. NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 206 - 218
  • [4] Deep Latent-Variable Kernel Learning
    Liu, Haitao
    Ong, Yew-Soon
    Jiang, Xiaomo
    Wang, Xiaofang
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (10) : 10276 - 10289
  • [5] Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters
    Xu, Yan
    Ishii, Etsuko
    Cahyawijaya, Samuel
    Liu, Zihan
    Winata, Genta Indra
    Madotto, Andrea
    Su, Dan
    Fung, Pascale
    [J]. PROCEEDINGS OF THE SECOND DIALDOC WORKSHOP ON DOCUMENT-GROUNDED DIALOGUE AND CONVERSATIONAL QUESTION ANSWERING (DIALDOC 2022), 2022, : 93 - 107
  • [6] Retrieval-Augmented Response Generation for Knowledge-Grounded Conversation in the Wild
    Ahn, Yeonchan
    Lee, Sang-Goo
    Shim, Junho
    Park, Jaehui
    [J]. IEEE ACCESS, 2022, 10 : 131374 - 131385
  • [7] Graph-Structured Context Understanding for Knowledge-grounded Response Generation
    Li, Yanran
    Li, Wenjie
    Wang, Zhitao
    [J]. SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1930 - 1934
  • [8] Deliberation Selector for Knowledge-Grounded Conversation Generation
    Zhao, Huan
    Wang, Yiqing
    Li, Bo
    Wang, Song
    Zhang, Zixing
    Zha, Xupeng
    [J]. PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2022, 13631 : 226 - 239
  • [9] A Knowledge-Grounded Neural Conversation Model
    Ghazvininejad, Marjan
    Brockett, Chris
    Chang, Ming-Wei
    Dolan, Bill
    Gao, Jianfeng
    Yih, Wen-tau
    Galley, Michel
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 5110 - 5117
  • [10] A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems
    Kim, San
    Jang, Jin Yea
    Jung, Minyoung
    Shin, Saim
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 352 - 365