A reinforcement learning formulation to the complex question answering problem

Cited by: 20
Authors
Chali, Yllias [1 ]
Hasan, Sadid A. [2 ]
Mojahid, Mustapha [3 ]
Affiliations
[1] Univ Lethbridge, Lethbridge, AB T1K 3M4, Canada
[2] Philips Res North Amer, Briarcliff Manor, NY 10510 USA
[3] IRIT, Toulouse, France
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Complex question answering; Multi-document summarization; Reinforcement learning; Reward function; User interaction modeling;
DOI
10.1016/j.ipm.2015.01.002
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We use extractive multi-document summarization techniques to perform complex question answering and formulate it as a reinforcement learning problem. Given a set of complex questions, a list of relevant documents per question, and the corresponding human-generated summaries (i.e., answers to the questions) as training data, the reinforcement learning module iteratively learns a set of feature weights in order to facilitate the automatic generation of summaries, i.e., answers to previously unseen complex questions. A reward function is used to measure the similarity between the candidate (machine-generated) summary sentences and the abstract summaries. In the training stage, the learner iteratively selects the important document sentences to be included in the candidate summary, evaluates the reward function, and updates the related feature weights accordingly. The final weights are used to generate summaries as answers to unseen complex questions in the testing stage. Evaluation results show the effectiveness of our system. We also incorporate user interaction into the reinforcement learner to guide the candidate summary sentence selection process. Experiments reveal the positive impact of the user interaction component on the reinforcement learning framework. (C) 2015 Elsevier Ltd. All rights reserved.
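The abstract describes a train/test loop: select sentences with a weighted feature scorer, compute a reward from similarity to the human summary, and adjust the feature weights. The following is a minimal sketch of that idea, not the authors' implementation: the two toy features, the unigram-overlap reward (standing in for the paper's similarity measure), and the epsilon-greedy update rule are all illustrative assumptions.

```python
# Sketch of a linear-policy reinforcement learner for extractive,
# query-focused summarization. All feature/reward choices are assumptions.
import random
from collections import Counter

def features(sentence, question):
    # Toy features: normalized length and lexical overlap with the question.
    s_words, q_words = set(sentence.lower().split()), set(question.lower().split())
    return [len(s_words) / 25.0, len(s_words & q_words) / (len(q_words) or 1)]

def score(weights, sentence, question):
    return sum(w * f for w, f in zip(weights, features(sentence, question)))

def reward(candidate_sents, reference_summary):
    # Unigram-overlap reward against the human (abstract) summary.
    cand = Counter(" ".join(candidate_sents).lower().split())
    ref = Counter(reference_summary.lower().split())
    return sum((cand & ref).values()) / (sum(ref.values()) or 1)

def train(training_data, n_features=2, epochs=50, lr=0.1, epsilon=0.2, budget=2):
    weights = [0.0] * n_features
    for _ in range(epochs):
        for question, doc_sents, ref_summary in training_data:
            chosen, pool = [], list(doc_sents)
            while pool and len(chosen) < budget:
                # Epsilon-greedy selection of the next summary sentence.
                if random.random() < epsilon:
                    pick = random.choice(pool)
                else:
                    pick = max(pool, key=lambda s: score(weights, s, question))
                pool.remove(pick)
                chosen.append(pick)
            r = reward(chosen, ref_summary)
            # Crude policy-improvement step: push weights toward the features of
            # the selected sentences, scaled by the reward they earned.
            for s in chosen:
                for i, f in enumerate(features(s, question)):
                    weights[i] += lr * r * f
    return weights

def answer(weights, question, doc_sents, budget=2):
    # Testing stage: greedily extract the top-scoring sentences with learned weights.
    ranked = sorted(doc_sents, key=lambda s: score(weights, s, question), reverse=True)
    return ranked[:budget]

if __name__ == "__main__":
    data = [("What causes rain?",
             ["Rain forms when water vapor condenses into droplets.",
              "The stadium was closed because of rain.",
              "Condensation in clouds produces droplets heavy enough to fall."],
             "Water vapor condenses in clouds and falls as rain.")]
    w = train(data)
    print(answer(w, *data[0][:2]))
```

The paper's actual system additionally models user interaction during sentence selection; in a sketch like this, that would amount to letting a user accept or reject `pick` before it enters the candidate summary.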
Pages: 252-272
Page count: 21
Related Papers
50 records in total
  • [1] Monz, Christof. Machine learning for query formulation in question answering. NATURAL LANGUAGE ENGINEERING, 2011, 17: 425-454.
  • [2] Zhang, Qixuan; Weng, Xinyi; Zhou, Guangyou; Zhang, Yi; Huang, Jimmy Xiangji. ARL: An adaptive reinforcement learning framework for complex question answering over knowledge base. INFORMATION PROCESSING & MANAGEMENT, 2022, 59 (03).
  • [3] Ishii, Etsuko; Wilie, Bryan; Xu, Yan; Cahyawijaya, Samuel; Fung, Pascale. Integrating Question Rewriting in Conversational Question Answering: A Reinforcement Learning Approach. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): STUDENT RESEARCH WORKSHOP, 2022: 55-66.
  • [4] Hua, Yuncheng; Li, Yuan-Fang; Haffari, Gholamreza; Qi, Guilin; Wu, Tongtong. Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020: 5827-5837.
  • [5] Afrae, Bghiel; Mohamed, Ben Ahmed; Abdelhakim, Anouar Boudhir. An Open Domain Question Answering System Trained by Reinforcement Learning. SUSTAINABLE SMART CITIES AND TERRITORIES, 2022, 253: 129-138.
  • [6] Chali, Yllias; Joty, Shafiq R.; Hasan, Sadid A. Complex Question Answering: Unsupervised Learning Approaches and Experiments. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2009, 35: 1-47.
  • [7] Bi, Xin; Nie, Hao-Jie; Zhao, Xiang-Guo; Yuan, Ye; Wang, Guo-Ren. Reinforcement Learning Inference Techniques for Knowledge Graph Constrained Question Answering. Ruan Jian Xue Bao/Journal of Software, 2023, 34 (10).
  • [8] Wang, YC; Wu, JC; Liang, T; Chang, JS. Web-based unsupervised learning for query formulation in question answering. NATURAL LANGUAGE PROCESSING - IJCNLP 2005, PROCEEDINGS, 2005, 3651: 519-529.
  • [9] Godin, Frederic; Kumar, Anjishnu; Mittal, Arpit. Learning When Not to Answer: A Ternary Reward Structure for Reinforcement Learning based Question Answering. 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 2 (INDUSTRY PAPERS), 2019: 122-129.
  • [10] Kosseim, L; Plamondon, L; Guillemette, LJ. Answer formulation for question-answering. ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2003, 2671: 24-34.