Multi-task learning with contextual hierarchical attention for Korean coreference resolution

Cited by: 1
|
Author
Park, Cheoneum [1]
Affiliation
[1] AIRS Co, Hyundai Motor Grp, Seoul, South Korea
Keywords
coreference resolution; hierarchical model; head-final language; multi-task learning; pointer network;
DOI
10.4218/etrij.2021-0293
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Coreference resolution is a discourse-analysis task that links mentions (headwords) referring to the same entity within a document. We propose a pointer-network-based coreference resolution model for Korean that uses multi-task learning (MTL) with an attention mechanism over a hierarchical structure. Because Korean is a head-final language, the head of a mention can be found easily. Our model learns a distribution over positions of the same entity and uses a pointer network to resolve coreference for a given input headword. Because the input is an entire document, the input sequence is very long; the core idea is therefore to learn word- and sentence-level distributions in parallel with MTL, using a shared representation to address the long-sequence problem. Word representations for Korean are generated from contextual information using pre-trained Korean language models. Under the same experimental conditions, our model achieved a CoNLL F1 score approximately 1.8% higher than previous work without a hierarchical structure.
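The pointer-network component described in the abstract can be sketched minimally as follows: given contextual encoder states for every token in the document and a query vector for the current headword, an attention mechanism produces a probability distribution over input positions, and the model "points" to a likely antecedent position. The additive (Bahdanau-style) scoring form, all variable names, and the dimensions below are illustrative assumptions for exposition only, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_attention(H, q, W1, W2, v):
    """Additive (Bahdanau-style) pointer attention.

    H  : (T, d) contextual encoder states, one per input token
    q  : (d,)   query vector for the current headword
    W1 : (d, d) projection for the encoder states  (assumed parameter)
    W2 : (d, d) projection for the query           (assumed parameter)
    v  : (d,)   scoring vector                     (assumed parameter)

    Returns a probability distribution over the T input positions,
    i.e. the "pointer" to a candidate antecedent position.
    """
    scores = np.tanh(H @ W1 + q @ W2) @ v   # (T,) unnormalized scores
    return softmax(scores)

# Toy example with random weights (illustration only).
rng = np.random.default_rng(0)
T, d = 5, 8
H = rng.normal(size=(T, d))
q = rng.normal(size=(d,))
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

p = pointer_attention(H, q, W1, W2, v)
# p is a valid distribution over the 5 input positions.
```

In the paper's setting, this pointing step would run on word- and sentence-level representations trained jointly via MTL; the sketch above shows only the single-level pointing mechanism.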
Pages: 93-104
Page count: 12
Related Papers
50 records in total
  • [21] Multi-task Hierarchical Adversarial Inverse Reinforcement Learning
    Chen, Jiayu
    Tamboli, Dipesh
    Lan, Tian
    Aggarwal, Vaneet
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [22] Attention-Based Multi-Task Learning in Pharmacovigilance
    Zhang, Shinan
    Dev, Shantanu
    Voyles, Joseph
    Rao, Anand S.
    [J]. PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2018, : 2324 - 2328
  • [23] Multiple Relational Attention Network for Multi-task Learning
    Zhao, Jiejie
    Du, Bowen
    Sun, Leilei
    Zhuang, Fuzhen
    Lv, Weifeng
    Xiong, Hui
    [J]. KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1123 - 1131
  • [24] SEQUENTIAL CROSS ATTENTION BASED MULTI-TASK LEARNING
    Kim, Sunkyung
    Choi, Hyesong
    Min, Dongbo
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 2311 - 2315
  • [25] End-to-End Multi-Task Learning with Attention
    Liu, Shikun
    Johns, Edward
    Davison, Andrew J.
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 1871 - 1880
  • [26] Knowledge-enhanced Hierarchical Attention for Community Question Answering with Multi-task and Adaptive Learning
    Yang, Min
    Chen, Lei
    Chen, Xiaojun
    Wu, Qingyao
    Zhou, Wei
    Shen, Ying
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5349 - 5355
  • [27] MULTI-TASK LEARNING WITH CROSS ATTENTION FOR KEYWORD SPOTTING
    Higuchi, Takuya
    Gupta, Anmol
    Dhir, Chandra
    [J]. 2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 571 - 578
  • [28] A Deep Multi-task Contextual Attention Framework for Multi-modal Affect Analysis
    Akhtar, Md Shad
    Chauhan, Dushyant Singh
    Ekbal, Asif
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2020, 14 (03)
  • [29] Cross-task Attention Mechanism for Dense Multi-task Learning
    Lopes, Ivan
    Tuan-Hung Vu
    de Charette, Raoul
    [J]. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2328 - 2337
  • [30] Enhanced task attention with adversarial learning for dynamic multi-task CNN
    School of Computer Engineering and Science, Shanghai University, China
    [J]. Pattern Recogn,