Research on self-training neural machine translation based on monolingual priority sampling

Cited by: 0
Authors
Zhang X. [1]
Pang L. [1]
Du X. [1]
Lu T. [1]
Xia Y. [1]
Affiliations
[1] School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing
Source
Journal on Communications
Funding
National Natural Science Foundation of China
Keywords
data augmentation; machine translation; self-training; syntactic dependency; uncertainty;
DOI
10.11959/j.issn.1000-436x.2024066
Abstract
To enhance the performance of neural machine translation (NMT) and reduce the detrimental impact of highly uncertain monolingual data during self-training, a self-training NMT model based on priority sampling was proposed. First, syntactic dependency trees were constructed and the importance of each token in the monolingual data was assessed through syntactic dependency analysis. Next, a monolingual lexicon was built, and priority was defined in terms of token importance and uncertainty. Finally, the priority of each monolingual sentence was computed and sampling was carried out according to these priorities, producing a synthetic parallel dataset for training the student NMT model. Experimental results on a large-scale subset of the WMT English-to-German dataset show that the proposed model effectively improves translation performance and mitigates the impact of high uncertainty on the model. © 2024 Editorial Board of Journal on Communications. All rights reserved.
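A minimal sketch of the sampling step described in the abstract, assuming a spaCy-style dependency parser and a teacher NMT model that exposes per-token log-probabilities. The helper names (token_importance, sentence_uncertainty, priority, priority_sample), the depth-based importance score, and the weighting factor alpha are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def token_importance(sentence, nlp):
    """Average token importance from the syntactic dependency tree:
    tokens closer to the root are assumed more important (illustrative heuristic)."""
    doc = nlp(sentence)
    scores = []
    for tok in doc:
        depth = 0
        node = tok
        while node.head is not node:   # walk up to the root (spaCy: root's head is itself)
            node = node.head
            depth += 1
        scores.append(1.0 / (1.0 + depth))
    return sum(scores) / max(len(scores), 1)

def sentence_uncertainty(log_probs):
    """Uncertainty proxy: negative mean token log-probability under the teacher model."""
    return -sum(log_probs) / max(len(log_probs), 1)

def priority(sentence, log_probs, nlp, alpha=0.5):
    """Combine token importance with (inverted) uncertainty into one priority score;
    alpha is an assumed interpolation weight."""
    imp = token_importance(sentence, nlp)
    unc = sentence_uncertainty(log_probs)
    return alpha * imp + (1.0 - alpha) * math.exp(-unc)

def priority_sample(corpus, priorities, k, seed=0):
    """Draw k monolingual sentences with probability proportional to their priority."""
    rng = random.Random(seed)
    total = sum(priorities)
    weights = [p / total for p in priorities]
    return rng.choices(corpus, weights=weights, k=k)
```

The sketch samples with replacement for brevity; a deterministic top-k selection or sampling without replacement would be an equally plausible reading of priority-based sampling.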
Pages: 65-72
Number of pages: 7