Does GPT-3 Grasp Metaphors? Identifying Metaphor Mappings with Generative Language Models

Cited by: 0
Authors
Wachowiak, Lennart [1 ]
Gromann, Dagmar [2 ]
Institutions
[1] Kings Coll London, London, England
[2] Univ Vienna, Vienna, Austria
Source
PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1 | 2023
Keywords
DOI
(not available)
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Conceptual metaphors present a powerful cognitive vehicle to transfer knowledge structures from a source to a target domain. Prior neural approaches focus on detecting whether natural language sequences are metaphoric or literal. We believe that to truly probe metaphoric knowledge in pre-trained language models, their capability to detect this transfer should be investigated. To this end, this paper proposes to probe the ability of GPT-3 to detect metaphoric language and predict the metaphor's source domain without any pre-set domains. We experiment with different training sample configurations for fine-tuning and few-shot prompting on two distinct datasets. When provided 12 few-shot samples in the prompt, GPT-3 generates the correct source domain for a new sample with an accuracy of 65.15% in English and 34.65% in Spanish. GPT-3's most common error is a hallucinated source domain for which no indicator is present in the sentence. Other common errors include identifying a sequence as literal even though a metaphor is present and predicting the wrong source domain based on specific words in the sequence that are not metaphorically related to the target domain.
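The few-shot setup described in the abstract can be illustrated with a minimal prompt-assembly sketch. This is an assumption about the general shape of such a prompt, not the paper's actual template; the example sentences, the `SOURCE DOMAIN` labels, and the `build_prompt` helper are all hypothetical.

```python
# Hedged sketch: assembling a few-shot prompt for metaphor source-domain
# prediction. Each labeled example pairs a sentence with its conceptual
# source domain; the final query sentence is left for the model to complete.
# The sentences and domain labels below are illustrative only.

def build_prompt(examples, query):
    """Concatenate labeled few-shot examples and a query into one prompt string."""
    parts = []
    for sentence, domain in examples:
        parts.append(f'Sentence: "{sentence}"\nSource domain: {domain}\n')
    # The model would be asked to continue after the trailing label.
    parts.append(f'Sentence: "{query}"\nSource domain:')
    return "\n".join(parts)

examples = [
    ("Prices are climbing again.", "VERTICAL MOVEMENT"),
    ("She attacked every weak point in his argument.", "WAR"),
]
prompt = build_prompt(examples, "We are at a crossroads in our relationship.")
print(prompt)
```

In the paper's best English configuration, 12 such labeled examples precede the query; the model's completion after the final `Source domain:` is then compared against the gold source domain.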
Pages: 1018 - 1032
Page count: 15
Related Papers
43 items in total
  • [31] At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3
    Palacios Barea, M. A.
    Boeren, D.
    Ferreira Goncalves, J. F.
    AI & SOCIETY, 2023, 40 (2) : 461 - 479
  • [32] Exploring Image Similarity through Generative Language Models: A Comparative Study of GPT-4 with Word Embeddings and Traditional Approaches
    Malla, Alejandro
    Omwenga, Maxwell M.
    Bera, Pallav Kumar
    2024 IEEE INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY, EIT 2024, 2024, : 275 - 279
  • [33] Does GPT-3 qualify as a co-author of a scientific paper publishable in peer-review journals according to the ICMJE criteria? A case study
    Osmanovic-Thunström, A.
    Steingrimsson, S.
    Discover Artificial Intelligence, 2023, 3 (01):
  • [34] GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
    Yoo, Kang Min
    Park, Dongju
    Kang, Jaewook
    Lee, Sang-Woo
    Park, Woomyeong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 2225 - 2239
  • [35] 50% HUMAN: A poetic interview with AI agents GPT-2 and 3 language models
    Klein, Ariel
    POETRY REVIEW, 2020, 110 (03): : 32 - 36
  • [36] Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish
    Ekgren, Ariel
    Gyllensten, Amaru Cuba
    Gogoulou, Evangelia
    Heiman, Alice
    Verlinden, Severine
    Ohman, Joey
    Carlsson, Fredrik
    Sahlgren, Magnus
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 3509 - 3518
  • [37] CAN ARTIFICIAL INTELLIGENCE (AI) LARGE LANGUAGE MODELS (LLMS) SUCH AS GENERATIVE PRE-TRAINED TRANSFORMER (GPT) BE USED TO AUTOMATE LITERATURE REVIEWS?
    Guerra, I
    Gallinaro, J.
    Rtveladze, K.
    Lambova, A.
    Asenova, E.
    VALUE IN HEALTH, 2023, 26 (12) : S410 - S411
  • [38] GPT-3-Powered Type Error Debugging: Investigating the Use of Large Language Models for Code Repair
    Ribeiro, Francisco
    Castro de Macedo, Jose Nuno
    Tsushima, Kanae
    Abreu, Rui
    Saraiva, Joao
    PROCEEDINGS OF THE 16TH ACM SIGPLAN INTERNATIONAL CONFERENCE ON SOFTWARE LANGUAGE ENGINEERING, SLE 2023, 2023, : 111 - 124
  • [39] ADVANCING SYSTEMATIC LITERATURE REVIEWS: A COMPARATIVE ANALYSIS OF LARGE LANGUAGE MODELS (CLAUDE SONNET 3.5, GEMINI FLASH 1.5, AND GPT-4) IN THE AUTOMATION ERA OF GENERATIVE AI
    Rai, P.
    Pandey, S.
    Attri, S.
    Singh, B.
    Kaur, R.
    VALUE IN HEALTH, 2024, 27 (12)
  • [40] How Does a Generative Large Language Model Perform on Domain-Specific Information Extraction?―A Comparison between GPT-4 and a Rule-Based Method on Band Gap Extraction
    Wang, Xin
    Huang, Liangliang
    Xu, Shuozhi
    Lu, Kun
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2024, 64 (20) : 7895 - 7904