On cross-lingual retrieval with multilingual text encoders

Cited: 0
Authors
Robert Litschko
Ivan Vulić
Simone Paolo Ponzetto
Goran Glavaš
Institutions
[1] University of Mannheim
[2] Language Technology Lab
[3] University of Cambridge
Source
Information Retrieval Journal | 2022, Vol. 25
Keywords
Cross-lingual IR; Multilingual text encoders; Learning to Rank
DOI
Not available
Abstract
Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study of the suitability of state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR, a setup with no relevance judgments for IR-specific fine-tuning, pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance; the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., learning to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves over the performance of multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve ranking quality.
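The two retrieval setups described above can be illustrated with a minimal sketch. This is not the paper's implementation: `embed` below is a hashed bag-of-words stand-in for a multilingual encoder (such as mean-pooled mBERT states), and the section length for localized relevance matching is an arbitrary placeholder value.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Stand-in for a multilingual sentence encoder. A hashed bag-of-words
    vector keeps the sketch runnable without a model download; in practice
    this would be replaced by mBERT/XLM sentence embeddings."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def rank_documents(query, docs):
    """Unsupervised ad-hoc CLIR: embed the query and each full document
    into the shared space, score by cosine similarity, and sort."""
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]  # cosine: vectors are unit-norm
    order = sorted(range(len(docs)), key=lambda i: -scores[i])
    return order, scores

def localized_score(query, doc, section_len=50):
    """Localized relevance matching: score the query independently against
    fixed-length document sections and keep the best section score."""
    q = embed(query)
    toks = doc.split()
    sections = [" ".join(toks[i:i + section_len])
                for i in range(0, len(toks), section_len)] or [doc]
    return max(float(q @ embed(s)) for s in sections)
```

Scoring sections independently avoids diluting a locally relevant passage inside the embedding of a long, mostly irrelevant document.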
We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
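Contrastive fine-tuning of this kind is commonly driven by an InfoNCE-style loss with in-batch negatives. The numpy sketch below only computes that loss on toy embeddings; the batch construction and temperature value are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def info_nce(q_emb, d_emb, tau=0.05):
    """InfoNCE loss with in-batch negatives: the i-th document is the
    positive for the i-th query, and every other document in the batch
    serves as a negative."""
    qn = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    dn = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    logits = (qn @ dn.T) / tau                      # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # -log p(positive | query)
```

Minimizing this loss pulls each query toward its relevant document and pushes it away from the other documents in the batch, which is why in-domain data matters: the negatives define what the model learns to discriminate.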
Pages: 149–183 (34 pages)
Related Papers
50 items
  • [41] Capstick, J.; Diagne, A.K.; Erbach, G.; Uszkoreit, H.; Leisenberg, A.; Leisenberg, M. A system for supporting cross-lingual information retrieval. Information Processing & Management, 2000, 36(2): 275–289.
  • [42] Zhu, Bin; Ngo, Chong-Wah; Chen, Jingjing; Chan, Wing-Kwong. Cross-lingual Adaptation for Recipe Retrieval with Mixup. Proceedings of the 2022 International Conference on Multimedia Retrieval (ICMR 2022), 2022: 258–267.
  • [43] Schneider, Nicole R.; Sankaranarayanan, Jagan; Samet, Hanan. Cross-lingual Text Clustering in a Large System. Proceedings of the 2023 7th International Conference on Natural Language Processing and Information Retrieval (NLPIR 2023), 2023: 1–11.
  • [44] Pontes, Elvys Linhares; Gonzalez-Gallardo, Carlos-Emiliano; Torres-Moreno, Juan-Manuel; Huet, Stephane. Cross-Lingual Speech-to-Text Summarization. Multimedia and Network Information Systems, 2019, 833: 385–395.
  • [45] Pikuliak, Matus; Simko, Marian; Bielikova, Maria. Cross-lingual learning for text processing: A survey. Expert Systems with Applications, 2021, 165.
  • [46] HajiAminShirazi, Shahrzad; Momtazi, Saeedeh. Cross-lingual embedding for cross-lingual question retrieval in low-resource community question answering. Machine Translation, 2020, 34(4): 287–303.
  • [47] Li, S.Z.; Su, W.F.; Li, T.Q.; Chen, H.W. Cross-lingual text filtering based on text concepts and kNN. PACLIC 17: Language, Information and Computation, Proceedings, 2003: 166–173.
  • [48] Chen, Muhao; Tian, Yingtao; Yang, Mohan; Zaniolo, Carlo. Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017: 1511–1517.
  • [49] Qi, Jirui; Fernandez, Raquel; Bisazza, Arianna. Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 10650–10666.
  • [50] Neumann, Michael; Ngoc Thang Vu. Cross-Lingual and Multilingual Speech Emotion Recognition on English and French. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 5769–5773.