Self-attention enhanced CNNs with average margin loss for Chinese zero pronoun resolution

Cited by: 2
Authors
Sun, Shi-jun [1 ]
Institutions
[1] Shanghai Univ, Inst Comp Engn & Sci, 99 Shangda Rd, Shanghai 200444, Peoples R China
Keywords
Zero pronoun resolution; Self-attention; Convolutional neural networks; Natural language processing;
DOI
10.1007/s10489-021-02697-5
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent neural network methods for Chinese zero pronoun resolution explore multiple models for generating representation vectors for zero pronouns and their candidate antecedents. Typically, the representations of zero pronouns are derived from their contextual information, since zero pronouns are simply gaps in the text and are therefore hard to express directly. To better interpret zero pronouns and their candidate antecedents, we introduce a convolutional neural network with an internal self-attention module for encoding them. With the help of a multi-hop attention mechanism, our model is able to focus on informative parts of the associated contextual texts, which provides an effective way to capture this important information. In addition, we propose a novel average margin loss that averages the candidate antecedent scores, making the learning process more reasonable. Experimental results on the OntoNotes 5.0 dataset show that our model achieves the best performance on the task of Chinese zero pronoun resolution.
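The abstract's two main ingredients, a self-attention module over contextual token embeddings and a margin loss computed against the average candidate score, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the single-hop dot-product attention, the weight-matrix shapes, and the exact form of `average_margin_loss` (a hinge against the mean score of the non-gold candidates) are all assumptions for the purpose of the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) contextual token embeddings.
    # Scaled dot-product self-attention (one hop of the multi-hop idea).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V          # (seq_len, d_model) attended encoding

def average_margin_loss(scores, gold_idx, margin=1.0):
    # Hinge margin against the *average* score of the non-gold candidate
    # antecedents, rather than against each negative individually --
    # one plausible reading of the paper's "average margin loss".
    neg = np.delete(scores, gold_idx)
    return max(0.0, margin + neg.mean() - scores[gold_idx])
```

In a full resolver, the attended encoding of the zero pronoun's context would be compared against each candidate antecedent's encoding to produce the `scores` vector fed to the loss.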
Pages: 5739 - 5750
Page count: 12
Related Papers
(50 entries)
  • [41] Image super-resolution reconstruction based on self-attention GAN
    Wang X.-S.
    Chao J.
    Cheng Y.-H.
    KONGZHI YU JUECE/CONTROL AND DECISION, 2021, 36 (06): 1324 - 1332
  • [42] SELF-ATTENTION WITH RESTRICTED TIME CONTEXT AND RESOLUTION IN DNN SPEECH ENHANCEMENT
    Strake, Maximilian
    Behlke, Adrian
    Fingscheidt, Tim
    2022 INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT (IWAENC 2022), 2022,
  • [43] Combined Self-Attention Mechanism for Chinese Named Entity Recognition in Military
    Liao, Fei
    Ma, Liangli
    Pei, Jingjing
    Tan, Linshan
    FUTURE INTERNET, 2019, 11 (08):
  • [44] SELF-ATTENTION BASED PROSODIC BOUNDARY PREDICTION FOR CHINESE SPEECH SYNTHESIS
    Lu, Chunhui
    Zhang, Pengyuan
    Yan, Yonghong
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 7035 - 7039
  • [45] Multi-head enhanced self-attention network for novelty detection
    Zhang, Yingying
    Gong, Yuxin
    Zhu, Haogang
    Bai, Xiao
    Tang, Wenzhong
    PATTERN RECOGNITION, 2020, 107
  • [46] A Static Sign Language Recognition Method Enhanced with Self-Attention Mechanisms
    Wang, Yongxin
    Jiang, He
    Sun, Yutong
    Xu, Longqi
    SENSORS, 2024, 24 (21)
  • [47] Self-attention enhanced deep residual network for spatial image steganalysis
    Xie, Guoliang
    Ren, Jinchang
    Marshall, Stephen
    Zhao, Huimin
    Li, Rui
    Chen, Rongjun
    DIGITAL SIGNAL PROCESSING, 2023, 139
  • [48] Enhanced Self-Attention Network for Remote Sensing Building Change Detection
    Liang, Shike
    Hua, Zhen
    Li, Jinjiang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 4900 - 4915
  • [49] Face Super-Resolution Reconstruction Based on Self-Attention Residual Network
    Liu, Qing-Ming
    Jia, Rui-Sheng
    Zhao, Chao-Yue
    Liu, Xiao-Ying
    Sun, Hong-Mei
    Zhang, Xing-Li
    IEEE ACCESS, 2020, 8 : 4110 - 4121
  • [50] Zero-permutation jet-parton assignment using a self-attention network
    Lee, Jason S. H.
    Park, Inkyu
    Watson, Ian J.
    Yang, Seungjin
    JOURNAL OF THE KOREAN PHYSICAL SOCIETY, 2024, 84 : 427 - 438