CLRN: A reasoning network for multi-relation question answering over Cross-lingual Knowledge Graphs

Cited by: 0
Authors
Tan, Yiming [1,3]
Zhang, Xinyu [2,3]
Chen, Yongrui [2,3]
Ali, Zafar [2,3]
Hua, Yuncheng [4]
Qi, Guilin [1,2,3]
Affiliations
[1] Southeast Univ, Sch Cyber Sci & Engn, Nanjing, Peoples R China
[2] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Peoples R China
[3] Southeast Univ, Key Lab Comp Network & Informat Integrat, Minist Educ, Nanjing, Peoples R China
[4] Monash Univ, Melbourne, Australia
Keywords
Question answering; Cross-lingual knowledge graphs; Multi-hop reasoning; Entity alignment
DOI
10.1016/j.eswa.2023.120721
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Numbers
081104; 0812; 0835; 1405
Abstract
Cross-lingual Knowledge Graph-based Question Answering (CLKGQA) requires a question answering (QA) system to combine knowledge graphs (KGs) in different languages to obtain answers to input questions. In previous work, the common approach is to merge Cross-lingual Knowledge Graphs (CLKGs) into a single KG through aligned entity pairs and then treat the task as traditional KG-based QA. However, as demonstrated by Tan et al. (2023), existing Entity Alignment (EA) models cannot generate highly accurate aligned entity pairs for CLKGs. Therefore, two issues need to be addressed in the CLKGQA task: (1) removing the dependency of the QA model on the fused KG; and (2) improving the performance of the EA model in obtaining aligned entity pairs from locally isomorphic CLKGs. To solve these two issues, this paper presents the Cross-lingual Reasoning Network (CLRN), a novel multi-hop QA model that allows switching between knowledge graphs at any stage of multi-hop reasoning. Furthermore, we establish an iterative framework that combines CLRN and the EA model, in which CLRN extracts potential alignment triple pairs from the CLKGs during the QA process. The extracted triple pairs provide pseudo-aligned entities, and these additional aligned entity pairs are used to mine missing relations between entities in the CLKGs. The pseudo-aligned entity pairs and relations improve the performance of the EA model, which in turn yields higher QA accuracy. Extensive experiments demonstrate the effectiveness of the proposed model, which outperforms the baseline approaches. Through iterative enhancement, the performance of the EA model also improves by more than 1.0% in Hit@1 and Hit@10, and the improvement is statistically significant at p < 0.01. Moreover, our work discusses the correlation between QA and EA from the QA perspective, which provides a reference for follow-up exploration in related communities. We have open-sourced our dataset and code, available at https://github.com/tan92hl/Cross-lingual-Reasoning-Network-for-CLKGQA.
Pages: 12
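
The abstract describes an iterative loop in which the QA model (CLRN) surfaces pseudo-aligned entity pairs during reasoning, and those pairs are fed back to retrain the EA model, whose improved alignments then benefit the next QA round. The sketch below is a minimal, hypothetical rendering of that loop under those assumptions; every name in it (KnowledgeGraph, ClrnQA, EntityAligner, iterative_framework) is an illustrative placeholder rather than the authors' actual API, which is available in the linked repository.

```python
# Hypothetical sketch of the iterative CLRN + EA framework described in the abstract.
# All classes and functions here are illustrative stubs, not the authors' code.

from dataclasses import dataclass, field


@dataclass
class KnowledgeGraph:
    """Stand-in for one language-specific KG: a set of (head, relation, tail) triples."""
    triples: set = field(default_factory=set)


class ClrnQA:
    """Placeholder for the CLRN reasoner, which can hop across either KG while answering."""

    def answer(self, question, kg_a, kg_b, seed_alignments):
        # In the paper, CLRN performs multi-hop reasoning and may switch between
        # kg_a and kg_b at any hop via (pseudo-)aligned entities. This stub just
        # returns an empty answer set plus the candidate aligned triple pairs
        # that the reasoning path would have connected.
        answers, candidate_triple_pairs = set(), []
        return answers, candidate_triple_pairs


class EntityAligner:
    """Placeholder EA model retrained with extra pseudo-aligned pairs each round."""

    def __init__(self):
        self.alignments = set()  # pairs of (entity_in_kg_a, entity_in_kg_b)

    def train(self, kg_a, kg_b, pseudo_pairs):
        # Real EA training would go here; the stub simply absorbs the pseudo pairs.
        self.alignments |= set(pseudo_pairs)
        return self.alignments


def iterative_framework(questions, kg_a, kg_b, rounds=3):
    """Alternate QA and EA: QA surfaces pseudo-aligned entities, EA uses them, repeat."""
    ea = EntityAligner()
    alignments = ea.train(kg_a, kg_b, pseudo_pairs=[])
    qa = ClrnQA()
    for _ in range(rounds):
        pseudo_pairs = []
        for q in questions:
            _, triple_pairs = qa.answer(q, kg_a, kg_b, alignments)
            # Heads and tails of matched triple pairs act as pseudo-aligned entities.
            pseudo_pairs += [(ta[0], tb[0]) for ta, tb in triple_pairs]
            pseudo_pairs += [(ta[2], tb[2]) for ta, tb in triple_pairs]
        alignments = ea.train(kg_a, kg_b, pseudo_pairs)
    return alignments


if __name__ == "__main__":
    kg_en = KnowledgeGraph({("Paris", "capital_of", "France")})
    kg_zh = KnowledgeGraph({("巴黎", "首都", "法国")})
    print(iterative_framework(["What is the capital of France?"], kg_en, kg_zh))
```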