Improving Autoregressive Grammatical Error Correction with Non-autoregressive Models

Cited by: 0
Authors
Cao, Hang [1 ]
Cao, Zhiquan [1 ]
Hu, Chi [1 ]
Hou, Baoyu [1 ]
Xiao, Tong [1 ,2 ]
Zhu, Jingbo [1 ,2 ]
Affiliations
[1] Northeastern Univ, NLP Lab, Sch Comp Sci & Engn, Shenyang, Peoples R China
[2] NiuTrans Res, Shenyang, Peoples R China
Funding
National Science Foundation (USA); National Key Research and Development Program;
Keywords: (none listed)
DOI: not available
Abstract
Grammatical Error Correction (GEC) aims to correct grammatical errors in sentences. We find that autoregressive models tend to assign low probabilities to tokens that need corrections. Here we introduce additional signals to the training of GEC models so that these systems can learn to better predict at ambiguous positions. To do this, we use a non-autoregressive model as an auxiliary model, and develop a new regularization term of training by considering the difference in predictions between the autoregressive and non-autoregressive models. We experiment with this method on both English and Chinese GEC tasks. Experimental results show that our GEC system outperforms the baselines on all the data sets significantly.
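The abstract describes adding a regularization term that penalizes the difference between the autoregressive (AR) model's predictions and those of a non-autoregressive (NAR) auxiliary model. A minimal sketch of one plausible reading of that idea, using a KL-divergence penalty on top of the standard cross-entropy loss (the function name, the `alpha` weight, and the choice of KL as the difference measure are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def gec_loss_with_nar_regularizer(ar_logits, nar_logits, targets, alpha=0.5):
    """Cross-entropy loss on the AR model's logits, plus a regularizer
    that penalizes disagreement with a NAR auxiliary model.

    ar_logits, nar_logits: (batch, seq_len, vocab) token logits
    targets: (batch, seq_len) gold token ids
    alpha: regularizer weight (hypothetical hyperparameter)
    """
    vocab = ar_logits.size(-1)
    # Standard token-level cross-entropy for the AR model.
    ce = F.cross_entropy(ar_logits.reshape(-1, vocab), targets.reshape(-1))
    # KL(AR || NAR) as one possible measure of prediction difference;
    # the NAR distribution is detached so it acts as a fixed signal.
    ar_logp = F.log_softmax(ar_logits, dim=-1)
    nar_p = F.softmax(nar_logits, dim=-1).detach()
    kl = F.kl_div(ar_logp, nar_p, reduction="batchmean")
    return ce + alpha * kl
```

With `alpha=0` this reduces to ordinary cross-entropy training; the regularizer is intended to push the AR model toward higher probability at positions where the NAR model, which sees the whole target in parallel, is confident.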
Pages: 12014-12027
Page count: 14
Related Papers (50 in total)
  • [41] Non-Autoregressive Machine Translation as Constrained HMM
    Li, Haoran
    Jie, Zhanming
    Lu, Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 12361 - 12372
  • [42] Non-Autoregressive Machine Translation with Latent Alignments
    Saharia, Chitwan
    Chan, William
    Saxena, Saurabh
    Norouzi, Mohammad
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1098 - 1108
  • [43] Testing cointegrating coefficients in vector autoregressive error correction models
    Hansen, G
    Kim, JR
    Mittnik, S
    ECONOMICS LETTERS, 1998, 58 (01) : 1 - 5
  • [44] Streaming End-to-End ASR based on Blockwise Non-Autoregressive Models
    Wang, Tianzi
    Fujita, Yuya
    Chang, Xuankai
    Watanabe, Shinji
    INTERSPEECH 2021, 2021, : 3755 - 3759
  • [45] Non-Autoregressive End-to-End Neural Modeling for Automatic Pronunciation Error Detection
    Wadud, Md. Anwar Hussen
    Alatiyyah, Mohammed
    Mridha, M. F.
    APPLIED SCIENCES-BASEL, 2023, 13 (01):
  • [46] IMPROVING NON-AUTOREGRESSIVE END-TO-END SPEECH RECOGNITION WITH PRE-TRAINED ACOUSTIC AND LANGUAGE MODELS
    Deng, Keqi
    Yang, Zehui
    Watanabe, Shinji
    Higuchi, Yosuke
    Cheng, Gaofeng
    Zhang, Pengyuan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8522 - 8526
  • [47] An Experiment on Autoregressive and Threshold Autoregressive Models with Non-Gaussian Error with Application to Realized Volatility
    Zhang, Ziyi
    Li, Wai Keung
    ECONOMIES, 2019, 7 (02):
  • [48] Integrated Training for Sequence-to-Sequence Models Using Non-Autoregressive Transformer
    Tokarchuk, Evgeniia
    Rosendahl, Jan
    Wang, Weiyue
    Petrushkov, Pavel
    Lancewicki, Tomer
    Khadivi, Shahram
    Ney, Hermann
    IWSLT 2021: THE 18TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE TRANSLATION, 2021, : 276 - 286
  • [49] NON-AUTOREGRESSIVE SEQUENCE-TO-SEQUENCE VOICE CONVERSION
    Hayashi, Tomoki
    Huang, Wen-Chin
    Kobayashi, Kazuhiro
    Toda, Tomoki
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7068 - 7072
  • [50] Modeling Coverage for Non-Autoregressive Neural Machine Translation
    Shan, Yong
    Feng, Yang
    Shao, Chenze
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,