Improving AMR parsing by exploiting the dependency parsing as an auxiliary task

Cited by: 2
|
Authors
Wu, Taizhong [1 ]
Zhou, Junsheng [1 ]
Qu, Weiguang [1 ]
Gu, Yanhui [1 ]
Li, Bin [1 ]
Zhong, Huilin [1 ]
Long, Yunfei [2 ]
Affiliations
[1] Nanjing Normal Univ, Sch Comp Sci & Technol, Nanjing, Peoples R China
[2] Univ Essex, Sch Comp Sci & Elect Engn, Colchester, Essex, England
Keywords
Abstract meaning representations; Multi-task learning; Dependency-auxiliary AMR parser; Neural network
DOI
10.1007/s11042-020-09967-3
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Abstract meaning representations (AMRs) represent sentence semantics as rooted, labeled, directed acyclic graphs. Although there is a strong correlation between the AMR graph of a sentence and its corresponding dependency tree, recent neural network AMR parsers neglect the exploitation of dependency structure information. In this paper, we explore a novel approach to exploiting dependency structures for AMR parsing. Unlike traditional pipeline models, we treat dependency parsing as an auxiliary task for AMR parsing under the multi-task learning framework, sharing neural network parameters and selectively extracting syntactic representations via an attention mechanism. In particular, to balance the gradients and keep the focus on the AMR parsing task, we present a new dynamic weighting scheme in the loss function. Experimental results on the LDC2015E86 and LDC2017T10 datasets show that our dependency-auxiliary AMR parser significantly outperforms the baseline and its pipeline counterpart, and demonstrate that neural AMR parsers can be greatly boosted with the help of effective methods of integrating syntax.
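The abstract describes weighting the AMR and dependency losses dynamically so that gradients stay balanced while the AMR task remains dominant. The paper's exact scheme is not reproduced here; the sketch below is a minimal, hypothetical illustration of one common dynamic-weighting idea: scale each task's loss by the inverse of its recent average magnitude, with a fixed `priority` factor (an assumed parameter, not from the paper) biasing the combination toward the main AMR task.

```python
def dynamic_weights(avg_amr, avg_dep, priority=2.0):
    """Return normalized (w_amr, w_dep) from recent average loss magnitudes.

    Each weight is inversely proportional to its task's average loss, so a
    task whose loss is large does not dominate the shared gradients; the
    `priority` factor keeps the main AMR task weighted above the auxiliary
    dependency task.
    """
    w_amr = priority / max(avg_amr, 1e-8)
    w_dep = 1.0 / max(avg_dep, 1e-8)
    total = w_amr + w_dep
    return w_amr / total, w_dep / total


def combined_loss(loss_amr, loss_dep, avg_amr, avg_dep):
    """Weighted multi-task loss for one training step."""
    w_amr, w_dep = dynamic_weights(avg_amr, avg_dep)
    return w_amr * loss_amr + w_dep * loss_dep
```

With equal running averages, the AMR task receives twice the weight of the auxiliary task (2/3 vs. 1/3), while a task whose average loss grows sees its weight shrink proportionally.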
Pages: 30827-30838
Page count: 12
Related papers
50 records
  • [21] Improving AMR Parsing with Sequence-to-Sequence Pre-training
    Xu, Dongqin
    Li, Junhui
    Zhu, Muhua
    Zhang, Min
    Zhou, Guodong
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 2501 - 2511
  • [22] Multi-task Learning for Word Alignment and Dependency Parsing
    Liu, Shujie
    ARTIFICIAL INTELLIGENCE AND COMPUTATIONAL INTELLIGENCE, PT III, 2011, 7004 : 151 - 158
  • [23] EXPLOITING SUBTREES IN AUTO-PARSED DATA TO IMPROVE DEPENDENCY PARSING
    Chen, Wenliang
    Kazama, Jun'ichi
    Uchimoto, Kiyotaka
    Torisawa, Kentaro
    COMPUTATIONAL INTELLIGENCE, 2012, 28 (03) : 426 - 451
  • [24] AMR Parsing with Cache Transition Systems
    Peng, Xiaochang
    Gildea, Daniel
    Satta, Giorgio
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 4897 - 4904
  • [25] Exploiting meta features for dependency parsing and part-of-speech tagging
    Chen, Wenliang
    Zhang, Min
    Zhang, Yue
    Duan, Xiangyu
    ARTIFICIAL INTELLIGENCE, 2016, 230 : 173 - 191
  • [26] Ensembling Graph Predictions for AMR Parsing
    Lam, Hoang Thanh
    Picco, Gabriele
    Hou, Yufang
    Lee, Young-Suk
    Nguyen, Lam M.
    Phan, Dzung T.
    Lopez, Vanessa
    Astudillo, Ramon Fernandez
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [27] Incorporating EDS Graph for AMR Parsing
    Shou, Ziyi
    Lin, Fangzhen
    10TH CONFERENCE ON LEXICAL AND COMPUTATIONAL SEMANTICS (SEM 2021), 2021, : 202 - 211
  • [28] AMR Parsing with Latent Structural Information
    Zhou, Qiji
    Zhang, Yue
    Ji, Donghong
    Tang, Hao
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 4306 - 4319
  • [29] Hierarchical Curriculum Learning for AMR Parsing
    Wang, Peiyi
    Chen, Liang
    Liu, Tianyu
    Dai, Damai
    Cao, Yunbo
    Chang, Baobao
    Sui, Zhifang
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): (SHORT PAPERS), VOL 2, 2022, : 333 - 339
  • [30] Stacked AMR Parsing with Silver Data
    Xia, Qingrong
    Li, Zhenghua
    Wang, Rui
    Zhang, Min
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 4729 - 4738