Multi-task learning for historical text normalization: Size matters

Cited by: 0
Authors
Bollmann, Marcel [1]
Søgaard, Anders [1]
Bingel, Joachim [1]
Affiliation
[1] Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
Keywords: (none listed)
DOI: (none available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Historical text normalization suffers from small datasets that exhibit high variance, and previous work has shown that multi-task learning can leverage data from related problems to obtain more robust models. However, previous work has been limited to datasets from a specific language and a specific historical period, and it is not clear whether results generalize. It therefore remains an open question when historical text normalization benefits from multi-task learning. We explore the benefits of multi-task learning across 10 different datasets, representing different languages and periods. Our main finding, contrary to what has been observed for other NLP tasks, is that multi-task learning mainly helps when target-task data is very scarce.
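The multi-task mechanism the abstract appeals to, sharing parameters between a scarce target task and a data-rich auxiliary task, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy with linear regression tasks and invented names (`make_task`, `shared`, `heads`), not the paper's actual character-level neural architecture; it only demonstrates hard parameter sharing under alternating task updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting: two tasks share underlying structure
# (w_common), but the target task has very little data. Hypothetical data;
# the paper itself uses historical spelling-normalization datasets.
def make_task(n_samples, w_common, rng):
    X = rng.normal(size=(n_samples, 4))
    w_task = rng.normal(scale=0.2, size=4)          # small task-specific part
    y = X @ (w_common + w_task) + rng.normal(scale=0.1, size=n_samples)
    return X, y

w_common = rng.normal(size=4)
tasks = {
    "target": make_task(8, w_common, rng),          # very scarce target data
    "auxiliary": make_task(200, w_common, rng),     # plentiful auxiliary data
}

# Hard parameter sharing: one shared weight vector plus a per-task head.
# A task's prediction uses shared + heads[task].
shared = np.zeros(4)
heads = {name: np.zeros(4) for name in tasks}

lr = 0.01
for step in range(10_000):
    for name, (X, y) in tasks.items():              # alternate task batches
        residual = X @ (shared + heads[name]) - y
        grad = 2.0 * X.T @ residual / len(y)        # MSE gradient
        shared -= lr * grad                         # updated by every task
        heads[name] -= lr * grad                    # updated by its own task only

X_t, y_t = tasks["target"]
target_mse = float(np.mean((X_t @ (shared + heads["target"]) - y_t) ** 2))
print(f"target-task training MSE: {target_mse:.4f}")
```

Because the shared weights also receive gradient updates from the large auxiliary task, the scarce target task borrows statistical strength from it, which is the regime in which the abstract reports multi-task learning actually helps.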
Pages: 19-24 (6 pages)
Related Papers (50 in total)
  • [41] Calibrated Multi-Task Learning
    Nie, Feiping
    Hu, Zhanxuan
    Li, Xuelong
    [J]. KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2012 - 2021
  • [42] Learning attention for historical text normalization by learning to pronounce
    Bollmann, Marcel
    Bingel, Joachim
Søgaard, Anders
    [J]. PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017, : 332 - 344
  • [43] Parallel Multi-Task Learning
    Zhang, Yu
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2015, : 629 - 638
  • [44] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    [J]. MACHINE LEARNING, 2011, 85 (1-2) : 149 - 173
  • [45] Distributed Multi-Task Learning
    Wang, Jialei
    Kolar, Mladen
    Srebro, Nathan
    [J]. ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 51, 2016, 51 : 751 - 760
  • [46] An overview of multi-task learning
    Yu Zhang
    Qiang Yang
    [J]. National Science Review, 2018, 5 (01) : 30 - 43
  • [47] Learning Sparse Task Relations in Multi-Task Learning
    Zhang, Yu
    Yang, Qiang
    [J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2914 - 2920
  • [48] Survey of Multi-Task Learning
    Zhang Y.
    Liu J.-W.
    Zuo X.
[J]. Science Press, (43): 1340 - 1378
  • [49] A Survey on Multi-Task Learning
    Zhang, Yu
    Yang, Qiang
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (12) : 5586 - 5609
  • [50] MTAAL: Multi-Task Adversarial Active Learning for Medical Named Entity Recognition and Normalization
    Zhou, Baohang
    Cai, Xiangrui
    Zhang, Ying
    Guo, Wenya
    Yuan, Xiaojie
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 14586 - 14593