Self-Augmented In-Context Learning for Unsupervised Word Translation

Cited: 0
|
Authors
Li, Yaoyiran [1 ]
Korhonen, Anna [1 ]
Vulic, Ivan [1 ]
Affiliations
[1] Univ Cambridge, Language Technol Lab, TAL, Cambridge, England
Funding
UK Research and Innovation (UKRI);
Keywords
DOI
Not available
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of 'traditional' mapping-based approaches in the unsupervised scenario where no seed translation pairs are available, especially for lower-resource languages. To address this challenge with LLMs, we propose self-augmented in-context learning (SAIL) for unsupervised BLI: starting from a zero-shot prompt, SAIL iteratively induces a set of high-confidence word translation pairs for in-context learning (ICL) from an LLM, which it then reapplies to the same LLM in the ICL fashion. Our method shows substantial gains over zero-shot prompting of LLMs on two established BLI benchmarks spanning a wide range of language pairs, also outperforming mapping-based baselines across the board. In addition to achieving state-of-the-art unsupervised BLI performance, we also conduct comprehensive analyses on SAIL and discuss its limitations.
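To make the procedure described in the abstract concrete, the sketch below gives one illustrative reading of the SAIL loop. It is a minimal sketch under stated assumptions, not the authors' released implementation: the callable `query_llm`, its confidence score, the iteration count, and the threshold are hypothetical placeholders, and the paper's actual prompting and confidence estimation may differ.

```python
from typing import Callable, Dict, List, Tuple


def sail_bli(
    source_words: List[str],
    query_llm: Callable[[str, List[Tuple[str, str]]], Tuple[str, float]],
    n_iterations: int = 2,
    threshold: float = 0.9,
) -> Dict[str, str]:
    """Iteratively induce high-confidence pairs and reuse them as ICL examples.

    `query_llm(word, examples)` is a hypothetical stand-in for an LLM call that
    returns a candidate translation and a confidence score; it is not taken
    from the paper's code.
    """
    in_context_pairs: List[Tuple[str, str]] = []  # empty at first: the initial pass is zero-shot
    for _ in range(n_iterations):
        candidates = []
        for src in source_words:
            # Translate each source word, conditioning on the pairs induced
            # so far (few-shot ICL after the first iteration).
            tgt, confidence = query_llm(src, in_context_pairs)
            candidates.append((src, tgt, confidence))
        # Keep only the high-confidence pairs as the next round's ICL examples.
        in_context_pairs = [(s, t) for s, t, c in candidates if c >= threshold]
    # Final inference reuses the self-augmented dictionary as ICL examples.
    return {src: query_llm(src, in_context_pairs)[0] for src in source_words}
```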
Pages: 743 - 753
Page count: 11
Related papers
50 records in total
  • [1] Denoised Self-Augmented Learning for Social Recommendation
    Wang, Tianle
    Xia, Lianghao
    Huang, Chao
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 2324 - 2331
  • [2] SAIL: Self-Augmented Graph Contrastive Learning
    Yu, Lu
    Pei, Shichao
    Ding, Lizhong
    Zhou, Jun
    Li, Longfei
    Zhang, Chuxu
    Zhang, Xiangliang
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8927 - 8935
  • [3] PERFORMANCE OF A SELF-AUGMENTED RAILGUN
    BURTON, RL
    WITHERSPOON, FD
    GOLDSTEIN, SA
    JOURNAL OF APPLIED PHYSICS, 1991, 70 (07) : 3907 - 3911
  • [4] Self-Augmented Contrastive Learning for Knowledge-aware Recommendation
    Chu, Guixin
    Guan, Xinye
    Shi, Yiran
    Zhang, Bangzuo
    Pu, Dongbing
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852 : 261 - 276
  • [5] An Empirical Study of In-context Learning in LLMs for Machine Translation
    Chitale, Pranjal A.
    Gala, Jay
    Dabre, Raj
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 7384 - 7406
  • [6] An Empirical Study of In-context Learning in LLMs for Machine Translation
    Chitale, Pranjal A.
    Gala, Jay
    Dabre, Raj
    Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2024, : 7384 - 7406
  • [7] In-Context In-Context Learning with Transformer Neural Processes
    Ashman, Matthew
    Diaconu, Cristiana
    Weller, Adrian
    Turner, Richard E.
    SYMPOSIUM ON ADVANCES IN APPROXIMATE BAYESIAN INFERENCE, 2024, 253 : 1 - 29
  • [8] Self-Augmented Heterogeneous Face Recognition
    Sun, Zongcai
    Fu, Chaoyou
    Luo, Mandi
    He, Ran
    2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021,
  • [9] Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
    Wu, Zhiyong
    Wang, Yaoxiang
    Ye, Jiacheng
    Kong, Lingpeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 1423 - 1436
  • [10] Learning Discrete Representations via Information Maximizing Self-Augmented Training
    Hu, Weihua
    Miyato, Takeru
    Tokui, Seiya
    Matsumoto, Eiichi
Sugiyama, Masashi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70