Microsoft Translator at WMT 2019: Towards Large-Scale Document-Level Neural Machine Translation

Cited: 0
Authors
Junczys-Dowmunt, Marcin [1]
Affiliation
[1] Microsoft, One Microsoft Way, Redmond, WA 98052 USA
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
摘要
This paper describes the Microsoft Translator submissions to the WMT19 news translation shared task for English-German. Our main focus is document-level neural machine translation with deep transformer models. We start with strong sentence-level baselines, trained on large-scale data created via data filtering and noisy back-translation, and find that back-translation seems to mainly help with translationese input. We explore fine-tuning techniques, deeper models, and different ensembling strategies to counter these effects. Using document boundaries present in the authentic and synthetic parallel data, we create sequences of up to 1000 subword segments and train transformer translation models. We experiment with data augmentation techniques for the smaller authentic data with document boundaries and for the larger authentic data without boundaries. We further explore multi-task training for the incorporation of document-level source-language monolingual data via the BERT objective on the encoder, and two-pass decoding for combinations of sentence-level and document-level systems. Based on preliminary human evaluation results, evaluators strongly prefer the document-level systems over our comparable sentence-level system. The document-level systems also seem to score higher than the human references in source-based direct assessment.
Pages: 225-233
Page count: 9
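The abstract describes concatenating consecutive sentence pairs, using document boundaries in the parallel data, into training sequences of up to 1000 subword segments. Below is a minimal sketch of that preprocessing step, assuming sentence-aligned, subword-tokenized data annotated with document IDs; the function name, the <sep> separator token, and the exact budgeting logic are illustrative assumptions, not the authors' code.

# Illustrative sketch (not the authors' code): build document-level training
# sequences by concatenating consecutive sentence pairs from the same document
# until a budget of roughly 1000 subword tokens is reached, as described in
# the abstract. Assumes sentences are already subword-tokenized (e.g. BPE)
# and carry a document ID.

from typing import Iterable, List, Tuple

MAX_SUBWORDS = 1000          # per-sequence budget on the source side
SENT_SEP = "<sep>"           # hypothetical token marking sentence boundaries


def build_doc_sequences(
    pairs: Iterable[Tuple[str, List[str], List[str]]],
) -> List[Tuple[List[str], List[str]]]:
    """pairs: (doc_id, src_subwords, tgt_subwords) in document order."""
    sequences = []
    cur_doc, cur_src, cur_tgt = None, [], []

    for doc_id, src, tgt in pairs:
        doc_changed = doc_id != cur_doc
        too_long = len(cur_src) + len(src) + 1 > MAX_SUBWORDS
        # Start a new training sequence at document boundaries or when
        # the current sequence would exceed the subword budget.
        if cur_src and (doc_changed or too_long):
            sequences.append((cur_src, cur_tgt))
            cur_src, cur_tgt = [], []
        if cur_src:  # separate sentences inside a document
            cur_src.append(SENT_SEP)
            cur_tgt.append(SENT_SEP)
        cur_src.extend(src)
        cur_tgt.extend(tgt)
        cur_doc = doc_id

    if cur_src:
        sequences.append((cur_src, cur_tgt))
    return sequences


if __name__ == "__main__":
    toy = [
        ("doc1", ["Hel@@", "lo", "."], ["Hal@@", "lo", "."]),
        ("doc1", ["How", "are", "you", "?"], ["Wie", "geht", "es", "dir", "?"]),
        ("doc2", ["Bye", "."], ["Tschüss", "."]),
    ]
    for src, tgt in build_doc_sequences(toy):
        print(" ".join(src), "|||", " ".join(tgt))

A separator token of this kind makes it straightforward to recover per-sentence output at decoding time; whether and how the submitted systems marked sentence boundaries inside a document-level sequence is not specified in the abstract.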