Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning

Cited by: 0
Authors
Wang, Rui [1 ]
Si, Shijing [1 ]
Wang, Guoyin [1 ,2 ]
Zhang, Lei [3 ]
Carin, Lawrence [1 ]
Henao, Ricardo [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Amazon Alexa AI, Cambridge, MA USA
[3] Fidelity Investments, Raleigh, NC USA
Keywords
DOI: Not available
CLC Number: TP18 [Theory of Artificial Intelligence]
Subject Classification: 081104; 0812; 0835; 1405
Abstract
Pretrained Language Models (PLMs) have improved the performance of natural language understanding systems in recent years. Such models are pretrained on large corpora, which encode general prior knowledge of natural language but are agnostic to information characteristic of downstream tasks. This often results in overfitting when such models are fine-tuned on low-resource datasets, where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can substantially improve the performance of pretrained models when fine-tuning with small datasets. The code repository is released at https://github.com/RayWangWR/BERT_label_embedding.
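The abstract describes the method only at a high level (label information injected into BERT's self-attention); the repository linked above is the authoritative implementation. As a rough, hypothetical sketch of that general idea, the PyTorch snippet below appends trainable label embeddings to the keys and values of a single self-attention layer so that every token can attend to per-class prototypes. The class name, single-head design, and initialization are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code): expose trainable label
# embeddings to a BERT-style self-attention layer by appending them as
# extra key/value entries. All names and hyperparameters are illustrative.
import math
import torch
import torch.nn as nn


class LabelAwareSelfAttention(nn.Module):
    """Single-head self-attention whose keys/values are extended with
    trainable label embeddings, so every token can attend to label prototypes."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        # Task-specific prior: one trainable embedding per class label.
        self.label_embeddings = nn.Parameter(
            torch.randn(num_labels, hidden_size) * 0.02
        )
        self.scale = math.sqrt(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        batch_size = hidden_states.size(0)
        labels = self.label_embeddings.unsqueeze(0).expand(batch_size, -1, -1)
        # Label embeddings are concatenated on the key/value side only,
        # so the output sequence length stays equal to the input length.
        kv_input = torch.cat([hidden_states, labels], dim=1)
        q = self.query(hidden_states)
        k = self.key(kv_input)
        v = self.value(kv_input)
        scores = torch.matmul(q, k.transpose(-1, -2)) / self.scale
        attn = scores.softmax(dim=-1)
        return torch.matmul(attn, v)


if __name__ == "__main__":
    layer = LabelAwareSelfAttention(hidden_size=768, num_labels=2)
    tokens = torch.randn(4, 16, 768)  # toy batch of contextual embeddings
    print(layer(tokens).shape)        # torch.Size([4, 16, 768])
```

In a full model, such a layer would sit inside (or alongside) the BERT encoder blocks, with the label embeddings learned jointly during fine-tuning; the authors' actual mechanism may differ from this sketch.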
Pages: 6
Related Papers (50 records in total)
  • [31] Exploring Large Language Models for Low-Resource IT Information Extraction
    Bhavya, Bhavya
    Isaza, Paulina Toro
    Deng, Yu
    Nidd, Michael
    Azad, Amar Prakash
    Shwartz, Larisa
    Zhai, ChengXiang
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1203 - 1212
  • [32] Task Residual for Tuning Vision-Language Models
    Yu, Tao
    Lu, Zhihe
    Jin, Xin
    Chen, Zhibo
    Wang, Xinchao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10899 - 10909
  • [33] Low resource language specific pre-processing and features for sentiment analysis task
    Meetei, Loitongbam Sanayai
    Singh, Thoudam Doren
    Borgohain, Samir Kumar
    Bandyopadhyay, Sivaji
    LANGUAGE RESOURCES AND EVALUATION, 2021, 55 (04) : 947 - 969
  • [35] Fine-Tuning ASR models for Very Low-Resource Languages: A Study on Mvskoke
    Mainzinger, Julia
    Levow, Gina-Anne
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 4: STUDENT RESEARCH WORKSHOP, 2024, : 94 - 100
  • [36] Can Pretrained English Language Models Benefit Non-English NLP Systems in Low-Resource Scenarios?
    Chi, Zewen
    Huang, Heyan
    Liu, Luyang
    Bai, Yu
    Gao, Xiaoyan
    Mao, Xian-Ling
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 1061 - 1074
  • [37] Parameter-Efficient Fine-Tuning of Large Pretrained Models for Instance Segmentation Tasks
    Baker, Nermeen Abou
    Rohrschneider, David
    Handmann, Uwe
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (04): : 2783 - 2807
  • [38] Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain
    Luo, Zhiyi
    Yan, Sirui
    Luo, Shuyun
    MATHEMATICS, 2023, 11 (12)
  • [39] How fine can fine-tuning be? Learning efficient language models
    Radiya-Dixit, Evani
    Wang, Xin
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2435 - 2442
  • [40] Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
    Wei, Colin
    Xie, Sang Michael
    Ma, Tengyu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,