Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing

Cited by: 0
Authors / Affiliations: Huawei Technologies Co., Ltd. [1]; Unknown [2]; Unknown [3]
Source: Proc. Conf. Empir. Methods Nat. Lang. Process., EMNLP, pp. 3135-3151
Keywords: Compendex
DOI: Not available
Abstract: Computational linguistics - Natural language processing systems

Related papers (50 in total)
  • [11] Morphosyntactic Tagging with Pre-trained Language Models for Arabic and its Dialects
    Inoue, Go
    Khalifa, Salam
    Habash, Nizar
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 1708 - 1719
  • [12] Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding
    Deng, Shuwen
    Prasse, Paul
    Reich, David R.
    Scheffer, Tobias
    Jaeger, Lena A.
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 6500 - 6507
  • [13] A pre-trained BERT for Korean medical natural language processing
    Kim, Yoojoong
    Kim, Jong-Ho
    Lee, Jeong Moon
    Jang, Moon Joung
    Yum, Yun Jin
    Kim, Seongtae
    Shin, Unsub
    Kim, Young-Min
    Joo, Hyung Joon
    Song, Sanghoun
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [15] Temporal Effects on Pre-trained Models for Language Processing Tasks
    Agarwal, Oshin
    Nenkova, Ani
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 904 - 921
  • [16] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    ENGINEERING, 2023, 25 : 51 - 65
  • [17] Annotating Columns with Pre-trained Language Models
    Suhara, Yoshihiko
    Li, Jinfeng
    Li, Yuliang
    Zhang, Dan
    Demiralp, Cagatay
    Chen, Chen
    Tan, Wang-Chiew
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 1493 - 1503
  • [18] LaoPLM: Pre-trained Language Models for Lao
    Lin, Nankai
    Fu, Yingwen
    Yang, Ziyu
    Chen, Chuwei
    Jiang, Shengyi
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 6506 - 6512
  • [19] HinPLMs: Pre-trained Language Models for Hindi
    Huang, Xixuan
    Lin, Nankai
    Li, Kexin
    Wang, Lianxi
    Gan, Suifu
    2021 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2021, : 241 - 246
  • [20] Deciphering Stereotypes in Pre-Trained Language Models
    Ma, Weicheng
    Scheible, Henry
    Wang, Brian
    Veeramachaneni, Goutham
    Chowdhary, Pratim
    Sung, Alan
    Koulogeorge, Andrew
    Wang, Lili
    Yang, Diyi
    Vosoughi, Soroush
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11328 - 11345