Self-Augmented In-Context Learning for Unsupervised Word Translation

Cited by: 0
Authors
Li, Yaoyiran [1 ]
Korhonen, Anna [1 ]
Vulic, Ivan [1 ]
Affiliations
[1] Univ Cambridge, Language Technol Lab, TAL, Cambridge, England
Funding
UK Research and Innovation (UKRI)
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of 'traditional' mapping-based approaches in the unsupervised scenario, where no seed translation pairs are available, especially for lower-resource languages. To address this challenge with LLMs, we propose self-augmented in-context learning (SAIL) for unsupervised BLI: starting from a zero-shot prompt, SAIL iteratively induces a set of high-confidence word translation pairs from an LLM, which are then fed back to the same LLM as in-context learning (ICL) demonstrations. Our method shows substantial gains over zero-shot prompting of LLMs on two established BLI benchmarks spanning a wide range of language pairs, also outperforming mapping-based baselines across the board. Beyond achieving state-of-the-art unsupervised BLI performance, we conduct comprehensive analyses of SAIL and discuss its limitations.
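The iterative loop described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the LLM call is replaced by a toy stub (`stub_llm_translate`), and the confidence values, threshold, and iteration count are illustrative assumptions.

```python
def stub_llm_translate(word, examples):
    """Stand-in for an LLM translation call: returns (translation, confidence).

    A real system would prompt the LLM, prepending `examples` as
    in-context demonstrations; here a toy lexicon plays that role.
    """
    toy_lexicon = {
        "cat": ("chat", 0.90),
        "dog": ("chien", 0.85),
        "house": ("maison", 0.60),
        "tree": ("arbre", 0.40),
    }
    translation, conf = toy_lexicon[word]
    # Toy effect of ICL: each demonstration slightly boosts confidence.
    return translation, min(1.0, conf + 0.05 * len(examples))


def sail(source_words, iterations=3, threshold=0.8):
    """Sketch of self-augmented ICL for unsupervised word translation."""
    examples = []  # high-confidence (source, target) pairs used as ICL demos
    for _ in range(iterations):
        new_examples = []
        for word in source_words:
            translation, conf = stub_llm_translate(word, examples)
            if conf >= threshold:
                new_examples.append((word, translation))
        examples = new_examples  # re-induce the high-confidence set each round
    # Final pass: translate every source word using the last ICL set.
    return {w: stub_llm_translate(w, examples)[0] for w in source_words}
```

In this toy run, high-confidence pairs induced in early rounds raise the model's confidence on harder words in later rounds, mirroring the self-augmentation idea: the LLM's own confident outputs become its in-context demonstrations.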
Pages: 743-753
Page count: 11
Related papers
50 records in total
  • [21] Learning To Retrieve Prompts for In-Context Learning
    Rubin, Ohad
    Herzig, Jonathan
    Berant, Jonathan
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 2655 - 2671
  • [22] Data Augmentation with In-Context Learning and Comparative Evaluation in Math Word Problem Solving
    Yigit, G.
    Amasyali, M. F.
    SN Computer Science, 5 (5)
  • [23] In-context learning of state estimators
    Busetto, R.
    Breschi, V.
    Forgione, M.
    Piga, D.
    Formentin, S.
    IFAC PAPERSONLINE, 2024, 58 (15): : 145 - 150
  • [24] Generative Calibration for In-context Learning
    Jiang, Zhongtao
    Zhang, Yuanzhe
    Liu, Cao
    Zhao, Jun
    Liu, Kang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 2312 - 2333
  • [25] Distinguishability Calibration to In-Context Learning
    Li, Hongjing
    Yan, Hanqi
    Li, Yanran
    Qian, Li
    He, Yulan
    Gui, Lin
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1385 - 1397
  • [26] SELF-AUGMENTED MULTI-MODAL FEATURE EMBEDDING
    Matsuo, Shinnosuke
    Uchida, Seiichi
    Iwana, Brian Kenji
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3995 - 3999
  • [27] Requirements Satisfiability with In-Context Learning
    Santos, Sarah
    Breaux, Travis
    Norton, Thomas
    Haghighi, Sara
    Ghanavati, Sepideh
    32ND IEEE INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE, RE 2024, 2024, : 168 - 179
  • [28] Is Mamba Capable of In-Context Learning?
    Grazzi, Riccardo
    Siems, Julien
    Schrodi, Simon
    Brox, Thomas
    Hutter, Frank
    INTERNATIONAL CONFERENCE ON AUTOMATED MACHINE LEARNING, 2024, 256
  • [29] MultiAICL: Multi-task Tuning for Augmented In-Context Learning in Text Style Transfer
    Zhu, Linan
    Zhou, Zehai
    Chen, Xiangfan
    Guo, Xiaolei
    Kong, Xiangjie
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 55 - 66
  • [30] Fully Unsupervised Machine Translation Using Context-Aware Word Translation and Denoising Autoencoder
    Chauhan, Shweta
    Daniel, Philemon
    Saxena, Shefali
    Sharma, Ayush
    APPLIED ARTIFICIAL INTELLIGENCE, 2022, 36 (01)