Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model

Cited: 0
Authors
Katsumata, Satoru [1]
Komachi, Mamoru [1]
Affiliation
[1] Tokyo Metropolitan Univ, Tokyo, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Studies on grammatical error correction (GEC) have reported the effectiveness of pretraining a Seq2Seq model with a large amount of pseudodata. However, this approach requires time-consuming pretraining for GEC because of the size of the pseudodata. In this study, we explore the utility of bidirectional and auto-regressive transformers (BART) as a generic pretrained encoder-decoder model for GEC. With the use of this generic pretrained model for GEC, the time-consuming pretraining can be eliminated. We find that monolingual and multilingual BART models achieve high performance in GEC, with one of the results being comparable to the current strong results in English GEC. Our implementations are publicly available at GitHub(1).
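The approach summarized in the abstract amounts to treating GEC as ordinary sequence-to-sequence fine-tuning on top of a generic pretrained encoder-decoder, with the ungrammatical sentence as the source and the corrected sentence as the target. The following minimal sketch is not the authors' released implementation; the checkpoint name facebook/bart-large, the Hugging Face transformers API usage, and the example sentence pair are assumptions made purely for illustration of how such a model could be fine-tuned and then decoded with beam search.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed monolingual checkpoint; a multilingual mBART checkpoint could be swapped in.
model_name = "facebook/bart-large"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# One hypothetical (source, target) pair from a GEC corpus:
# the ungrammatical sentence is the encoder input, the correction is the decoder target.
src = "She go to school every days ."
tgt = "She goes to school every day ."

batch = tokenizer([src], return_tensors="pt")
labels = tokenizer([tgt], return_tensors="pt").input_ids

# One fine-tuning step: the model returns the standard seq2seq cross-entropy loss.
loss = model(**batch, labels=labels).loss
loss.backward()  # in practice, wrap this in a training loop with an optimizer

# Inference: beam-search decoding of the corrected sentence.
model.eval()
with torch.no_grad():
    out = model.generate(**batch, num_beams=5, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))

In this setup the time-consuming pseudodata pretraining stage is skipped entirely; only supervised fine-tuning on a GEC corpus remains, which is the point the abstract makes.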
Pages: 827-832
Page count: 6