On cross-lingual retrieval with multilingual text encoders

Cited by: 0
Authors
Robert Litschko
Ivan Vulić
Simone Paolo Ponzetto
Goran Glavaš
Affiliations
[1] University of Mannheim,
[2] Language Technology Lab,
[3] University of Cambridge,
Source
Information Retrieval Journal | 2022年 / 25卷
Keywords
Cross-lingual IR; Multilingual text encoders; Learning to Rank;
DOI
Not available
Abstract
Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR—a setup with no relevance judgments for IR-specific fine-tuning—pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., learning to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves over the performance of multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve the ranking quality.
We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
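The localized relevance matching idea mentioned in the abstract can be illustrated with a minimal sketch: split a document into overlapping token windows, score the query against each section independently, and max-pool over section scores. The hashing `encode` function below is a hypothetical stand-in for a real multilingual sentence encoder (e.g., mBERT or LaBSE), and the window/stride values are illustrative, not the paper's settings.

```python
import math
import zlib

def encode(text, dim=64):
    """Toy deterministic hashing encoder; a hypothetical stand-in for a
    multilingual sentence encoder such as mBERT or LaBSE."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def localized_relevance(query, document, window=30, stride=15):
    """Score the query independently against overlapping document sections
    (token windows) and keep the best section score."""
    tokens = document.split()
    sections = [" ".join(tokens[i:i + window])
                for i in range(0, max(len(tokens) - window, 0) + 1, stride)]
    q = encode(query)
    # Cosine similarity reduces to a dot product on unit-normalized vectors.
    return max(sum(a * b for a, b in zip(q, encode(s))) for s in sections)
```

In contrast to encoding the full document into a single vector, a relevant passage buried in a long document still produces a high section score, which is the motivation the abstract gives for scoring sections independently.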
Pages: 149–183 (34 pages)
Related papers (50 in total)
  • [1] On cross-lingual retrieval with multilingual text encoders
    Litschko, Robert
    Vulic, Ivan
    Ponzetto, Simone Paolo
    Glavas, Goran
    INFORMATION RETRIEVAL JOURNAL, 2022, 25 (02): : 149 - 183
  • [2] A multilingual text mining approach to web cross-lingual text retrieval
    Chau, RW
    Yeh, CH
    KNOWLEDGE-BASED SYSTEMS, 2004, 17 (5-6) : 219 - 227
  • [3] Unsupervised multilingual machine translation with pretrained cross-lingual encoders
    Shen, Yingli
    Bao, Wei
    Gao, Ge
    Zhou, Maoke
    Zhao, Xiaobing
    KNOWLEDGE-BASED SYSTEMS, 2024, 284
  • [4] Exploiting Wikipedia for cross-lingual and multilingual information retrieval
    Sorg, P.
    Cimiano, P.
    DATA & KNOWLEDGE ENGINEERING, 2012, 74 : 26 - 45
  • [5] Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders
    Vulic, Ivan
    Glavas, Goran
    Liu, Fangyu
    Collier, Nigel
    Ponti, Edoardo Maria
    Korhonen, Anna
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2089 - 2105
  • [6] Cross-lingual and Multilingual CLIP
    Carlsson, Fredrik
    Eisen, Philipp
    Rekathati, Faton
    Sahlgren, Magnus
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 6848 - 6854
  • [7] Adversarial Domain Adaptation for Cross-lingual Information Retrieval with Multilingual BERT
    Wang, Runchuan
    Zhang, Zhao
    Zhuang, Fuzhen
    Gao, Dehong
    Wei, Yi
    He, Qing
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3498 - 3502
  • [8] Deep Multilabel Multilingual Document Learning for Cross-Lingual Document Retrieval
    Feng, Kai
    Huang, Lan
    Xu, Hao
    Wang, Kangping
    Wei, Wei
    Zhang, Rui
    ENTROPY, 2022, 24 (07)
  • [9] Cross-Lingual Validation of Multilingual Wordnets
    Tufis, Dan
    Ion, Radu
    Barbu, Eduard
    Barbu, Verginica
    GWC 2004: SECOND INTERNATIONAL WORDNET CONFERENCE, PROCEEDINGS, 2003, : 332 - 340
  • [10] Cross-Lingual Phrase Retrieval
    Zheng, Heqi
    Zhang, Xiao
    Chi, Zewen
    Huang, Heyan
    Yan, Tan
    Lan, Tian
    Wei, Wei
    Mao, Xian-Ling
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4193 - 4204