A Global–Local Attentive Relation Detection Model for Knowledge-Based Question Answering

Cited by: 10
Authors
Qiu C. [1 ]
Zhou G. [2 ]
Cai Z. [3 ]
Søgaard A. [4 ,5 ]
Affiliations
[1] School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan
[2] School of Computer Science, Central China Normal University, Wuhan
[3] School of Computer Science, China University of Geosciences, Wuhan
[4] Department of Computer Science, University of Copenhagen, Copenhagen
[5] Google Research, Copenhagen
Funding
National Natural Science Foundation of China
Keywords
Knowledge base (KB); natural language processing; question answering; text mining
DOI
10.1109/TAI.2021.3068697
Abstract
Knowledge-based question answering (KBQA) is an essential but challenging task in artificial intelligence and natural language processing. A key challenge lies in designing effective algorithms for relation detection. Conventional methods model questions and candidate relations from the knowledge bases (KBs) separately, without considering the rich word-level interactions between them, which can lead to locally optimal results. This article presents a global–local attentive relation detection model (GLAR) that uses a local module to learn features of word-level interactions and a global module to capture nonlinear relationships between questions and their candidate relations in KBs. The article also reports on an end-to-end retrieval-based KBQA system incorporating the proposed relation detection model. Experimental results on two datasets demonstrate GLAR's strong performance on the relation detection task. Moreover, the relation detection model significantly improves end-to-end KBQA systems, whose results on both datasets outperform state-of-the-art methods.

Impact Statement: Knowledge-based question answering (KBQA) aims to answer user questions posed over knowledge bases (KBs). KBQA helps users access knowledge in KBs more easily and involves two subtasks: entity mention detection and relation detection. While existing relation detection algorithms model the global representations of question and relation sequences well, they ignore local semantic information about the interactions between them. The technology proposed in this article takes both global and local interactions into account. With clear improvements on two relation detection tasks and two end-to-end KBQA tasks, it provides more precise answers. It could be used in many applications, including intelligent customer service, intelligent finance, and others. © 2021 IEEE.
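The abstract describes two complementary signals: a local module matching individual question and relation words, and a global module comparing whole-sequence representations. The toy sketch below is not the authors' implementation — the random embeddings, mean pooling, and additive score combination are all illustrative assumptions — but it shows one minimal way such word-level (local) and sequence-level (global) matching signals can be combined to rank candidate relations:

```python
import numpy as np

def embed(token, dim=8):
    """Illustrative stand-in for learned embeddings: a fixed random vector per token."""
    g = np.random.default_rng(abs(hash(token)) % (2**32))
    return g.standard_normal(dim)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def score(question, relation, dim=8):
    Q = np.stack([embed(t, dim) for t in question])   # (m, d) question word vectors
    R = np.stack([embed(t, dim) for t in relation])   # (n, d) relation word vectors

    # Local signal: word-level cross attention aligns each question word
    # with relation words, then measures agreement with the alignment.
    A = softmax(Q @ R.T, axis=1)                      # (m, n) attention weights
    aligned = A @ R                                   # question words re-expressed via relation words
    local = float(np.mean(np.sum(Q * aligned, axis=1)))

    # Global signal: nonlinear comparison of pooled sequence representations.
    q_g, r_g = Q.mean(axis=0), R.mean(axis=0)
    global_ = float(np.tanh(q_g @ r_g))

    return local + global_

question = ["where", "was", "obama", "born"]
candidates = [["place", "of", "birth"], ["date", "of", "death"]]
best = max(candidates, key=lambda r: score(question, r))
```

In the actual model, the embeddings would be learned and the pooling and score combination would be parameterized networks; the point here is only the structure — two scores, one from token-level interactions and one from sequence-level representations, fused into a single ranking signal.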
Pages: 200–212
Page count: 12
Related papers
50 items in total
  • [31] Answering knowledge-based visual questions via the exploration of Question Purpose
    Song, Lingyun
    Li, Jianao
    Liu, Jun
    Yang, Yang
    Shang, Xuequn
    Sun, Mingxuan
    PATTERN RECOGNITION, 2023, 133
  • [32] A knowledge-based question answering system to provide cognitive assistance to radiologists
    Pillai, Anup
    Katouzian, Amin
    Kanjaria, Karina
    Shivade, Chaitanya
    Jadhav, Ashutosh
    Bendersky, Marina
    Mukherjee, Vandana
    Syeda-Mahmood, Tanveer
    MEDICAL IMAGING 2019: IMAGING INFORMATICS FOR HEALTHCARE, RESEARCH, AND APPLICATIONS, 2019, 10954
  • [33] Rich Visual Knowledge-Based Augmentation Network for Visual Question Answering
    Zhang, Liyang
    Liu, Shuaicheng
    Liu, Donghao
    Zeng, Pengpeng
    Li, Xiangpeng
    Song, Jingkuan
    Gao, Lianli
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (10) : 4362 - 4373
  • [34] Multimodal Inverse Cloze Task for Knowledge-Based Visual Question Answering
    Lerner, Paul
    Ferret, Olivier
    Guinaudeau, Camille
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2023, PT I, 2023, 13980 : 569 - 587
  • [35] Cross-Modal Retrieval for Knowledge-Based Visual Question Answering
    Lerner, Paul
    Ferret, Olivier
    Guinaudeau, Camille
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608 : 421 - 438
  • [36] Learning to Reason on Tree Structures for Knowledge-Based Visual Question Answering
    Li, Qifeng
    Tang, Xinyi
    Jian, Yi
    SENSORS, 2022, 22 (04)
  • [37] Caption matters: a new perspective for knowledge-based visual question answering
    Feng, Bin
    Ruan, Shulan
    Wu, Likang
    Liu, Huijie
    Zhang, Kai
    Zhang, Kun
    Liu, Qi
    Chen, Enhong
    KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66 (11) : 6975 - 7003
  • [38] IIU: Independent Inference Units for Knowledge-Based Visual Question Answering
    Li, Yili
    Yu, Jing
    Gai, Keke
    Xiong, Gang
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT IV, KSEM 2024, 2024, 14887 : 109 - 120
  • [39] Corpus-based pattern induction for a knowledge-based question answering approach
    Cimiano, Philipp
    Erdmann, Michael
    Ladwig, Guenter
    ICSC 2007: INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, PROCEEDINGS, 2007, : 671 - +
  • [40] Mutual Relation Detection for Complex Question Answering over Knowledge Graph
    Zhang, Qifan
    Tong, Peihao
    Yao, Junjie
    Wang, Xiaoling
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2020), PT II, 2020, 12113 : 623 - 631