Scaling and Adapting Large Language Models for Portuguese Open Information Extraction: A Comparative Study of Fine-Tuning and LoRA

Cited by: 0
Authors
Melo, Alan [1 ]
Cabral, Bruno [1 ]
Claro, Daniela Barreiro [1 ]
Affiliations
[1] Univ Fed Bahia, FORMAS Res Ctr Data & Nat Language, Inst Comp, Salvador, BA, Brazil
Keywords
OpenIE; Language Model; Information Extraction
DOI
10.1007/978-3-031-79035-5_30
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper comprehensively investigates the efficacy of different adaptation techniques for Large Language Models (LLMs) in the context of Open Information Extraction (OpenIE) for Portuguese. We compare Full Fine-Tuning (FFT) and Low-Rank Adaptation (LoRA) on a model with 0.5B parameters. Our study evaluates the impact of model size and adaptation method on OpenIE performance, considering precision, recall, and F1 scores, as well as computational efficiency during the training and inference phases. We contribute a high-performing LLM and novel insights into the trade-offs between model scale, adaptation technique, and cross-lingual transferability in the OpenIE task. Our findings reveal significant performance variations across configurations, with LoRA demonstrating competitive results. We also analyze the linguistic nuances of Portuguese OpenIE that pose challenges for models primarily trained on English data. This research advances our understanding of LLM adaptation for specialized NLP tasks and provides practical guidelines for deploying these models in resource-constrained and multilingual scenarios. Our work has implications for the broader cross-lingual open information extraction field and contributes to the ongoing discourse on efficient fine-tuning strategies for large pre-trained models.
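To ground the FFT-versus-LoRA comparison, below is a minimal sketch of attaching a LoRA adapter to a ~0.5B-parameter causal language model with the Hugging Face peft library, in contrast to full fine-tuning, which updates every weight. The base checkpoint name and all hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions for this sketch, not the configuration reported in the paper.

    # Minimal LoRA sketch. Assumptions: the base model choice and all
    # hyperparameters below are illustrative, not the authors' reported setup.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Assumed 0.5B-parameter base checkpoint (hypothetical choice).
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")

    # LoRA freezes the base weight W and trains low-rank factors B and A,
    # giving an effective weight W + (alpha / r) * B @ A. Full fine-tuning
    # (FFT) would instead update all parameters of `model` directly.
    config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                                  # rank of the update (assumed)
        lora_alpha=16,                        # scaling factor (assumed)
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of all weights

Training then proceeds with a standard causal-LM objective over sentence/extracted-triple pairs; at inference the adapter can be merged back into the base weights, which is one reason LoRA's inference cost can match that of a fully fine-tuned model.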
Pages: 427-441
Page count: 15
Related Papers (showing 10 of 50)
  • [31] Parameter-efficient fine-tuning of large language models using semantic knowledge tuning. Prottasha, Nusrat Jahan; Mahmud, Asif; Sobuj, Md. Shohanur Islam; Bhat, Prakash; Kowsher, Md; Yousefi, Niloofar; Garibay, Ozlem Ozmen. SCIENTIFIC REPORTS, 2024, 14(01).
  • [32] Fine-tuning large language models for domain adaptation: exploration of training strategies, scaling, model merging and synergistic capabilities. Lu, Wei; Luu, Rachel K.; Buehler, Markus J. NPJ COMPUTATIONAL MATERIALS, 2025, 11(01).
  • [33] OptimalMEE: Optimizing Large Language Models for Medical Event Extraction Through Fine-Tuning and Post-hoc Verification. Sun, Yaoqian; Wu, Dan; Chen, Zikang; Cai, Hailing; An, Jiye. ARTIFICIAL INTELLIGENCE IN MEDICINE, PT I, AIME 2024, 2024, 14844: 303-311.
  • [34] LLMADR: A Novel Method for Adverse Drug Reaction Extraction Based on Style Aligned Large Language Models Fine-Tuning. Yin, Huazi; Tang, Jintao; Li, Shasha; Wang, Ting. NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT I, NLPCC 2024, 2025, 15359: 470-482.
  • [35] Unveiling the Power of Large Language Models: A Comparative Study of Retrieval-Augmented Generation, Fine-Tuning, and Their Synergistic Fusion for Enhanced Performance. Budakoglu, Gulsum; Emekci, Hakan. IEEE ACCESS, 2025, 13: 30936-30951.
  • [36] Efficient Fine-Tuning Large Language Models for Knowledge-Aware Response Planning. Nguyen, Minh; Kishan, K. C.; Nguyen, Toan; Chadha, Ankit; Vu, Thuy. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT II, 2023, 14170: 593-611.
  • [37] Leveraging error-assisted fine-tuning large language models for manufacturing excellence. Xia, Liqiao; Li, Chengxi; Zhang, Canbin; Liu, Shimin; Zheng, Pai. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 88.
  • [38] Fine-Tuning Large Language Models for Radiation Oncology, A Specialized Health Care Domain. Wang, P.; Liu, Z.; Li, Y.; Holmes, J.; Shu, P.; Zhang, L.; Li, X.; Li, Q.; Vora, S. A.; Patel, S. H.; Sio, T. T. W.; Liu, T.; Liu, W. INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, 2024, 120(02): E664.
  • [39] Fine-Tuning Large Language Models for Radiation Oncology, a Highly Specialized Healthcare Domain. Wang, P.; Liu, Z.; Li, Y.; Holmes, J. M.; Shu, P.; Zhang, L.; Li, X.; Li, Q.; Vora, S. A.; Patel, S. H.; Sio, T. T.; Liu, T.; Liu, W. MEDICAL PHYSICS, 2024, 51(09): 6590.
  • [40] Fine-Tuning Large Language Models to Improve Accuracy and Comprehensibility of Automated Code Review. Yu, Yongda; Rong, Guoping; Shen, Haifeng; Zhang, He; Shao, Dong; Wang, Min; Wei, Zhao; Xu, Yong; Wang, Juhong. ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34(01).