On cross-lingual retrieval with multilingual text encoders

Cited by: 0
Authors
Robert Litschko
Ivan Vulić
Simone Paolo Ponzetto
Goran Glavaš
Institutions
[1] University of Mannheim
[2] Language Technology Lab
[3] University of Cambridge
Source
Information Retrieval Journal | 2022, Vol. 25
Keywords
Cross-lingual IR; Multilingual text encoders; Learning to Rank;
DOI
Not available
Abstract
Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR—a setup with no relevance judgments for IR-specific fine-tuning—pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., we learn to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves over the same multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, language transfer only) do we manage to improve ranking quality.
We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
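The localized relevance matching described in the abstract can be sketched as follows: instead of encoding the whole (possibly truncated) document at once, the query is scored independently against overlapping document sections and the best section score is kept. This is a minimal illustrative sketch, not the paper's implementation: the hashed bag-of-words `encode` below is a deterministic toy stand-in for a multilingual transformer encoder such as mBERT/XLM (which would produce, e.g., mean-pooled contextual embeddings), and the window/stride sizes are arbitrary.

```python
import math
from collections import Counter

def encode(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for a multilingual encoder: a hashed bag-of-words
    # vector. In the paper's setting this would instead be a pooled
    # transformer embedding of the text.
    vec = [0.0] * dim
    for tok, cnt in Counter(text.lower().split()).items():
        vec[sum(tok.encode()) % dim] += cnt  # deterministic toy hash
    return vec

def cosine(u: list[float], v: list[float]) -> float:
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def localized_score(query: str, document: str,
                    window: int = 30, stride: int = 15) -> float:
    # Localized relevance matching: split the document into overlapping
    # token windows, score the query against each window independently,
    # and return the best (max) section score.
    toks = document.split()
    sections = []
    for i in range(0, len(toks), stride):
        sections.append(" ".join(toks[i:i + window]))
        if i + window >= len(toks):  # last window reaches the end
            break
    if not sections:
        sections = [document]
    q = encode(query)
    return max(cosine(q, encode(s)) for s in sections)
```

With max-pooling over sections, relevant evidence anywhere in a long document can contribute to the score, rather than being diluted (or truncated away) in a single whole-document encoding.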
Pages: 149-183
Page count: 34
Related papers
50 items in total
  • [21] Semantic Cross-Lingual Information Retrieval
    Pourmahmoud, Solmaz
    Shamsfard, Mehrnoush
    23RD INTERNATIONAL SYMPOSIUM ON COMPUTER AND INFORMATION SCIENCES, 2008, : 80 - +
  • [22] Cross-lingual Distillation for Text Classification
    Xu, Ruochen
    Yang, Yiming
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017, : 1415 - 1425
  • [23] Fuzzy conceptual indexing for concept-based cross-lingual text retrieval
    Chau, R
    Yeh, CH
    IEEE INTERNET COMPUTING, 2004, 8 (05) : 14 - 21
  • [24] METTS: Multilingual Emotional Text-to-Speech by Cross-Speaker and Cross-Lingual Emotion Transfer
    Zhu, Xinfa
    Lei, Yi
    Li, Tao
    Zhang, Yongmao
    Zhou, Hongbin
    Lu, Heng
    Xie, Lei
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 1506 - 1518
  • [25] A Learning to rank framework based on cross-lingual loss function for cross-lingual information retrieval
    Ghanbari, Elham
    Shakery, Azadeh
    APPLIED INTELLIGENCE, 2022, 52 (03) : 3156 - 3174
  • [26] mCLIP: Multilingual CLIP via Cross-lingual Transfer
    Chen, Guanhua
    Hou, Lu
    Chen, Yun
    Dai, Wenliang
    Shang, Lifeng
    Jiang, Xin
    Liu, Qun
    Pan, Jia
    Wang, Wenping
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 13028 - 13043
  • [27] Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer
    Zhao, Jieyu
    Mukherjee, Subhabrata
    Hosseini, Saghar
    Chang, Kai-Wei
    Awadallah, Ahmed Hassan
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 2896 - 2907
  • [28] A Cross-Lingual Approach for Building Multilingual Sentiment Lexicons
    Naderalvojoud, Behzad
    Qasemizadeh, Behrang
    Kallmeyer, Laura
    Sezer, Ebru Akcapinar
    TEXT, SPEECH, AND DIALOGUE (TSD 2018), 2018, 11107 : 259 - 266
  • [29] Monolingual, multilingual and cross-lingual code comment classification
    Kostic, Marija
    Batanovic, Vuk
    Nikolic, Bosko
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 124
  • [30] Multilingual Ontology Merging Using Cross-lingual Matching
    Ibrahim, Shimaa
    Fathalla, Said
    Lehmann, Jens
    Jabeen, Hajira
    2020 IEEE/WIC/ACM INTERNATIONAL JOINT CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY (WI-IAT 2020), 2020, : 113 - 120