HIERARCHICAL REINFORCEMENT LEARNING WITH ADVANTAGE FUNCTION FOR ENTITY RELATION EXTRACTION

Cited: 0
Authors
Zhu, Xianchao [1 ,2 ]
Zhu, William [2 ]
Affiliations
[1] School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou 450001, China
[2] Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
Funding
National Natural Science Foundation of China
Keywords
Learning systems; Reinforcement learning; Semantics
DOI
Not available
Abstract
Unlike traditional pipeline methods, joint extraction approaches use a single model to extract entities and the semantic relations between them from unstructured text, and they achieve better performance. A pioneering work, HRL-RE, uses a hierarchical reinforcement learning model that decomposes the entire extraction process into high-level relation extraction and low-level entity identification. HRL-RE makes the extraction of entities and relations more accurate and handles overlapping entities and relations to a certain extent. However, it has not achieved satisfactory results on sentences with overlapping entities and relations, for two reasons: learning a policy is usually inefficient, and the gradient estimator has high variance. In this paper, we propose a new method, Advantage Hierarchical Reinforcement Learning for Entity Relation Extraction (AHRL-ERE), which combines the HRL-RE model with a new advantage function to extract entities and relations from unstructured text. Specifically, we construct a new advantage function based on the reference value of the policy function in the high-level subtask. We then combine this advantage function with the value function of the policy in the low-level subtask to form a new value function. Because this new value function can evaluate the current policy immediately, AHRL-ERE can correct the direction of the policy-gradient update in time, making policy learning efficient. Moreover, our advantage function subtracts the reference value of the high-level policy value function from the low-level policy value function, so AHRL-ERE reduces the variance of the gradient estimator. Our AHRL-ERE method is therefore more effective at extracting overlapping entities and relations from unstructured text.
Experiments on widely used datasets demonstrate that our proposed algorithm performs better than existing approaches. ©2022 Journal of Applied and Numerical Optimization.
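The variance-reduction idea described in the abstract — subtracting a high-level reference value from the low-level return before the policy-gradient update, so that the update is weighted by an advantage rather than a raw return — can be sketched as below. This is a minimal illustrative sketch of a baseline-subtracted (advantage-weighted) REINFORCE-style update, not the paper's implementation; all names, toy returns, and gradient terms are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def advantage(low_level_returns, high_level_baseline):
    """Baseline-subtracted signal: low-level returns minus a high-level
    reference value. Subtracting any action-independent baseline leaves
    the policy gradient unbiased while reducing its variance."""
    return low_level_returns - high_level_baseline

# Toy episode: returns observed by the low-level (entity) policy.
low_returns = np.array([1.0, 0.2, 0.8, 0.1])

# Stand-in for the high-level reference value; here simply the mean
# return, a common variance-reducing baseline choice.
high_baseline = low_returns.mean()

adv = advantage(low_returns, high_baseline)

# REINFORCE-style update direction: advantage-weighted average of the
# per-step log-probability gradients (toy random values here).
log_prob_grads = rng.normal(size=(4, 3))
update = (adv[:, None] * log_prob_grads).mean(axis=0)
```

With the mean-return baseline, the advantages sum to zero, so steps that did better than the reference push the policy toward their actions and worse-than-reference steps push away, which is the variance-reduction mechanism the abstract attributes to the new advantage function.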
Pages: 393-404
Related Papers
50 records
  • [1] A Hierarchical Framework for Relation Extraction with Reinforcement Learning
    Takanobu, Ryuichi
    Zhang, Tianyang
    Liu, Jiexi
    Huang, Minlie
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 7072 - 7079
  • [2] Joint Entity and Relation Extraction Based on Reinforcement Learning
    Zhou, Xin
    Liu, Luping
    Luo, Xiaodong
    Chen, Haiqiang
    Qing, Linbo
    He, Xiaohai
    [J]. IEEE ACCESS, 2019, 7 : 125688 - 125699
  • [3] Bootstrapping Joint Entity and Relation Extraction with Reinforcement Learning
    Xia, Min
    Cheng, Xiang
    Su, Sen
    Kuang, Ming
    Li, Gang
    [J]. WEB INFORMATION SYSTEMS ENGINEERING - WISE 2022, 2022, 13724 : 418 - 432
  • [4] Joint Entity and Relation Extraction with a Hybrid Transformer and Reinforcement Learning Based Model
    Xiao, Ya
    Tan, Chengxiang
    Fan, Zhijie
    Xu, Qian
    Zhu, Wenye
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9314 - 9321
  • [5] BIRL: Bidirectional-Interaction Reinforcement Learning Framework for Joint Relation and Entity Extraction
    Wang, Yashen
    Zhang, Huanhuan
    [J]. DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2021), PT II, 2021, 12682 : 483 - 499
  • [6] Hierarchical Advantage for Reinforcement Learning in Parameterized Action Space
    Hu, Zhejie
    Kaneko, Tomoyuki
    [J]. 2021 IEEE CONFERENCE ON GAMES (COG), 2021, : 816 - 823
  • [7] Reinforcement learning with multimodal advantage function for accurate advantage estimation in robot learning
    Park, Jonghyeok
    Han, Soohee
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126
  • [8] Relation Extraction with Deep Reinforcement Learning
    Zhang, Hongjun
    Feng, Yuntian
    Hao, Wenning
    Chen, Gang
    Jin, Dawei
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2017, E100D (08) : 1893 - 1902
  • [9] Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards
    Li, Siyuan
    Wang, Rui
    Tang, Minxue
    Zhang, Chongjie
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [10] End-to-End Entity Linking with Hierarchical Reinforcement Learning
    Chen, Lihan
    Zhu, Tinghui
    Liu, Jingping
    Liang, Jiaqing
    Xiao, Yanghua
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4173 - 4181