On the linguistic representational power of neural machine translation models

Cited by: 0
Authors
Belinkov, Yonatan [1 ]
Durrani, Nadir [2 ]
Dalvi, Fahim [2 ]
Sajjad, Hassan [2 ]
Glass, James [3 ]
Affiliations
[1] Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory; Harvard University, John A. Paulson School of Engineering and Applied Sciences, United States
[2] Qatar Computing Research Institute, HBKU Research Complex, Qatar
[3] Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, United States
Source
Computational Linguistics | 2020, Vol. 46, No. 1
Keywords
Quality control; Computational linguistics; Morphology; Semantics; Syntactics; Computer-aided language translation; Natural language processing systems; Deep neural networks
DOI
10.1162/COLI_a_00367
Abstract
Despite the recent success of deep neural networks in natural language processing and other spheres of artificial intelligence, their interpretability remains a challenge. We analyze the representations learned by neural machine translation (NMT) models at various levels of granularity and evaluate their quality through relevant extrinsic properties. In particular, we seek answers to the following questions: (i) How accurately is word structure captured within the learned representations, an important aspect in translating morphologically rich languages? (ii) Do the representations capture long-range dependencies and effectively handle syntactically divergent languages? (iii) Do the representations capture lexical semantics? We conduct a thorough investigation along several parameters: (i) Which layers in the architecture capture each of these linguistic phenomena? (ii) How does the choice of translation unit (word, character, or subword unit) impact the linguistic properties captured by the underlying representations? (iii) Do the encoder and decoder learn differently and independently? (iv) Do the representations learned by multilingual NMT models capture the same amount of linguistic information as their bilingual counterparts? Our data-driven, quantitative evaluation illuminates important aspects of NMT models and their ability to capture various linguistic phenomena. We show that deep NMT models trained in an end-to-end fashion, without any direct supervision during training, learn a non-trivial amount of linguistic information. Notable findings include the following observations: (i) Word morphology and part-of-speech information are captured at the lower layers of the model; (ii) in contrast, lexical semantics and non-local syntactic and semantic dependencies are better represented at the higher layers of the model; (iii) representations learned from characters are more informed about word morphology than those learned from subword units; and (iv) representations learned by multilingual models are richer than those learned by bilingual models. © 2020 Association for Computational Linguistics.
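The abstract's evaluation of representations "through relevant extrinsic properties" is commonly realized as classifier probing: freeze a trained NMT model, extract per-word hidden states from a chosen layer, and train a lightweight classifier to predict a linguistic property such as part of speech. The Python sketch below illustrates only that general recipe, not the paper's actual code; the hidden dimension, tag inventory, and the random vectors standing in for real encoder states are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

HIDDEN_DIM = 512                        # hidden-state size; illustrative assumption
TAGS = ["NOUN", "VERB", "ADJ", "ADP"]   # toy POS inventory (hypothetical)
N_WORDS = 2000

# Stand-in for per-word states extracted from one layer of a frozen NMT
# encoder; in a real setup these come from running the model over
# linguistically annotated text.
states = rng.normal(size=(N_WORDS, HIDDEN_DIM))
labels = rng.integers(len(TAGS), size=N_WORDS)

X_train, X_test, y_train, y_test = train_test_split(
    states, labels, test_size=0.2, random_state=0)

# A simple linear probe: the better it predicts the property from the
# frozen states, the more of that property the representation is said
# to encode.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
# Random states score near chance (1/len(TAGS)); informative NMT states
# would score well above chance, and the accuracy gap across layers is
# what supports layer-wise findings like those listed above.

In the paper's setting, such a probe would be trained separately per layer, per translation unit (word, character, or subword), and per model (bilingual vs. multilingual), with probe accuracy serving as the measure of how much linguistic information each representation encodes.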
Pages: 1-52
Related Papers
50 records in total (first 10 shown)
  • [1] On the Linguistic Representational Power of Neural Machine Translation Models. Belinkov, Yonatan; Durrani, Nadir; Dalvi, Fahim; Sajjad, Hassan; Glass, James. Computational Linguistics, 2020, 46(1): 1-52.
  • [2] Effect of Linguistic Information in Neural Machine Translation. Nakamura, Naomichi; Isahara, Hitoshi. 2017 4th International Conference on Advanced Informatics, Concepts, Theory, and Applications (ICAICTA) Proceedings, 2017.
  • [3] Linguistic Knowledge-Aware Neural Machine Translation. Li, Qiang; Wong, Derek F.; Chao, Lidia S.; Zhu, Muhua; Xiao, Tong; Zhu, Jingbo; Zhang, Min. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018, 26(12): 2341-2354.
  • [4] On the Sparsity of Neural Machine Translation Models. Wang, Yong; Wang, Longyue; Li, Victor O. K.; Tu, Zhaopeng. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 1060-1066.
  • [5] Multilingual Neural Machine Translation: Can Linguistic Hierarchies Help? Saleh, Fahimeh; Buntine, Wray; Haffari, Gholamreza; Du, Lan. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1313-1330.
  • [6] Linguistic knowledge-based vocabularies for Neural Machine Translation. Casas, Noe; Costa-jussa, Marta R.; Fonollosa, Jose A. R.; Alonso, Juan A.; Fanlo, Ramon. Natural Language Engineering, 2021, 27(4): 485-506.
  • [7] The Unreasonable Volatility of Neural Machine Translation Models. Fadaee, Marzieh; Monz, Christof. Neural Generation and Translation, 2020: 88-96.
  • [8] Compact Personalized Models for Neural Machine Translation. Wuebker, Joern; Simianer, Patrick; DeNero, John. 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), 2018: 881-886.
  • [9] Improving Chinese-Vietnamese Neural Machine Translation with Linguistic Differences. Yu, Zhiqiang; Yu, Zhengtao; Xian, Yantuan; Huang, Yuxin; Guo, Junjun. ACM Transactions on Asian and Low-Resource Language Information Processing, 2022, 21(2).
  • [10] Better Neural Machine Translation by Extracting Linguistic Information from BERT. Shavarani, Hassan S.; Sarkar, Anoop. 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), 2021: 2772-2783.