MRE: A Military Relation Extraction Model Based on BiGRU and Multi-Head Attention

Cited by: 9
Authors
Lu, Yiwei [1 ]
Yang, Ruopeng [1 ]
Jiang, Xuping [1 ]
Zhou, Dan [1 ]
Yin, Changsheng [1 ]
Li, Zizhuo [2 ]
Affiliations
[1] Natl Univ Def Technol, Coll Informat & Commun, Wuhan 430019, Peoples R China
[2] Wuhan Univ, Elect Informat Sch, Wuhan 430000, Peoples R China
Source
SYMMETRY-BASEL | 2021, Vol. 13, Issue 09
Keywords
military relation extraction; bi-directional encoder representations from transformers (BERT); BiGRU; multi-head attention; ENTITY;
DOI
10.3390/sym13091742
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
A great deal of operational information exists in the form of text, so extracting it from unstructured military documents is of great significance for assisting command decision making and operations. Military relation extraction, one of the main tasks of military information extraction, aims to identify the relation between two named entities in unstructured military text. Traditional approaches, however, struggle with inadequate hand-crafted features and inaccurate Chinese word segmentation in the military domain, and fail to make full use of the symmetrical entity relations present in military texts. Building on a pre-trained language model, we present a Chinese military relation extraction method that combines a bi-directional gated recurrent unit (BiGRU) network with a multi-head attention mechanism (MHATT). Specifically, the embedding layer combines word embeddings from the pre-trained language model with position embeddings; the forward and backward output vectors of the BiGRU are symmetrically spliced to capture contextual semantic features; and a multi-head attention mechanism is fused on top to strengthen the representation of semantic information. Extensive experiments on a military text corpus that we built show that our method outperforms the traditional non-attention model, the attention model, and the improved attention model, improving the comprehensive F1-score by about 4%.
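The abstract describes a three-stage pipeline: a word-plus-position embedding layer built on a pre-trained language model, a BiGRU encoder whose forward and backward outputs are spliced, and a multi-head attention layer feeding a relation classifier. The following is a minimal PyTorch sketch of that pipeline; all layer sizes, the classification head, and the pooling step are illustrative assumptions rather than the authors' exact configuration, and in the paper the word embeddings would come from a pre-trained BERT model rather than a randomly initialised table.

```python
# Minimal sketch, assuming illustrative hyper-parameters; not the authors' released model.
import torch
import torch.nn as nn


class BiGRUMultiHeadRE(nn.Module):
    def __init__(self, vocab_size=21128, max_len=128, embed_dim=768,
                 gru_hidden=256, num_heads=8, num_relations=10):
        super().__init__()
        # Embedding layer: word embedding + position embedding
        # (in the paper, word embeddings come from a pre-trained BERT model).
        self.word_emb = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, embed_dim)
        # Bi-directional GRU; forward/backward outputs are concatenated ("spliced").
        self.bigru = nn.GRU(embed_dim, gru_hidden, batch_first=True,
                            bidirectional=True)
        # Multi-head self-attention over the BiGRU outputs (2 * gru_hidden dims).
        self.mhatt = nn.MultiheadAttention(2 * gru_hidden, num_heads,
                                           batch_first=True)
        # Relation classifier over a pooled sentence representation (assumed head).
        self.classifier = nn.Linear(2 * gru_hidden, num_relations)

    def forward(self, token_ids, attention_mask):
        batch, seq_len = token_ids.shape
        positions = torch.arange(seq_len, device=token_ids.device)
        x = self.word_emb(token_ids) + self.pos_emb(positions)   # (B, L, E)
        h, _ = self.bigru(x)                                     # (B, L, 2H)
        pad_mask = attention_mask == 0                           # True = padding
        h, _ = self.mhatt(h, h, h, key_padding_mask=pad_mask)    # (B, L, 2H)
        # Masked mean pooling, then relation logits.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (h * mask).sum(1) / mask.sum(1).clamp(min=1e-6)
        return self.classifier(pooled)                           # (B, num_relations)


if __name__ == "__main__":
    model = BiGRUMultiHeadRE()
    ids = torch.randint(1, 21128, (2, 32))
    mask = torch.ones(2, 32, dtype=torch.long)
    print(model(ids, mask).shape)  # torch.Size([2, 10])
```

The sketch keeps the symmetry emphasised in the abstract: the BiGRU's two directions contribute equally to each token representation before attention weighting.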
Pages: 15