Multi-task learning with contextual hierarchical attention for Korean coreference resolution

Cited: 1
Authors
Park, Cheoneum [1 ]
Affiliations
[1] AIRS Co, Hyundai Motor Grp, Seoul, South Korea
Keywords
coreference resolution; hierarchical model; head-final language; multi-task learning; pointer network;
DOI
10.4218/etrij.2021-0293
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification
0808; 0809;
Abstract
Coreference resolution is a discourse-analysis task that links the mentions in a document that refer to the same entity. We propose a pointer network-based coreference resolution model for Korean that uses multi-task learning (MTL) with an attention mechanism over a hierarchical structure. Because Korean is a head-final language, the head of a mention is easy to identify. Our model learns a distribution over the positions of mentions of the same entity and uses a pointer network to resolve coreference for each input headword. Because the input is an entire document, the input sequence is very long; the core idea is therefore to learn word- and sentence-level distributions in parallel with MTL over a shared representation, which mitigates the long-sequence problem. Word representations are generated from contextual information using pre-trained Korean language models. Under the same experimental conditions, our model achieved a CoNLL F1 score roughly 1.8% higher than that of previous work without a hierarchical structure.
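As a rough illustration of the pointer-network step the abstract describes, the sketch below scores every encoder position against a headword query and returns a probability distribution over input positions, i.e. the kind of antecedent distribution a pointer network decodes from. The function name `pointer_attention`, the dimensions, and the random toy states are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pointer_attention(enc_states, query):
    """Score each encoder position against the headword query and return
    a probability distribution over input positions (the antecedent
    distribution a pointer network points with)."""
    scores = enc_states @ query  # one dot-product score per position
    return softmax(scores)

# Toy shared representation: 6 token states of dimension 4 (illustrative only).
rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 4))       # stand-in for contextual token encodings
head_query = rng.normal(size=(4,))  # stand-in for one headword's query vector

dist = pointer_attention(enc, head_query)
# dist assigns a probability to every input position; the argmax would be
# the position the pointer network selects as the antecedent.
```

In the hierarchical MTL setup the abstract sketches, one such distribution would be learned at the word level and another at the sentence level, both heads sharing the same encoder representation.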
Pages: 93-104 (12 pages)