Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner

Cited by: 0
Authors
Shi, Zhengxiang [1 ]
Lipani, Aldo [1 ]
Affiliations
[1] UCL, London, England
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we revisit the widely accepted notion in NLP that continued pre-training of LMs on task-related texts improves the performance of fine-tuning (FT) on downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with PCP outperforms state-of-the-art semi-supervised approaches while being simpler, eliminating the need for an iterative process and extra data augmentation. Further analysis explores the performance lower bound of PCP and reveals that its advantages persist across different model and dataset sizes. Code is available at https://github.com/ZhengxiangShi/PowerfulPromptFT.
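The abstract describes the PCP recipe only at a high level: wrap task-related, unlabelled texts in the prompt template and continue unsupervised pre-training before prompt-based FT. The following is a minimal, illustrative sketch of that idea, assuming a RoBERTa backbone, a toy sentiment template, and the Hugging Face transformers Trainer; the template, corpus, and hyperparameters here are placeholders, and the authors' actual recipe is in the linked repository.

```python
# Minimal sketch of the PCP idea (not the authors' code).
# Assumptions: roberta-base backbone, toy template "<text> It was <mask>.",
# and a tiny unlabelled corpus standing in for task-related texts.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Step 1: wrap task-related (unlabelled) texts in the prompt template, so the LM
# sees both the domain text and the template during continued pre-training.
unlabelled_texts = ["the movie was surprisingly moving", "a tedious, overlong sequel"]
template = "{text} It was {mask}."
pcp_corpus = [template.format(text=t, mask=tokenizer.mask_token) for t in unlabelled_texts]

class PCPDataset(Dataset):
    """Tokenised prompt-formatted texts for masked-language-model training."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

# Step 2: continue pre-training with the standard masked-language-modelling
# objective over the prompt-formatted corpus (unsupervised: targets come from
# random masking, not from task annotations).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="pcp_ckpt", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=PCPDataset(pcp_corpus),
        data_collator=collator).train()

# Step 3 (not shown): initialise prompt-based fine-tuning from "pcp_ckpt" rather
# than the vanilla checkpoint, mapping label words (e.g. "great"/"terrible") to
# the template's mask position.
```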
Pages: 23
Related Papers
21 records in total
  • [1] Comparing Prompt-Based and Standard Fine-Tuning for Urdu Text Classification
    Ullah, Faizad
    Azam, Ubaid
    Faheem, Ali
    Kamiran, Faisal
    Karim, Asim
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 6747 - 6754
  • [2] Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks
    Wen, Zhihao
    Fang, Yuan
    Liu, Yihan
    Guo, Yang
    Hao, Shuji
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 4864 - 4870
  • [3] Feature Normalization and Cartography-Based Demonstrations for Prompt-Based Fine-Tuning on Emotion-Related Tasks
    Hosseini, Mahshid
    Caragea, Cornelia
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 12881 - 12889
  • [4] Post-translational modification prediction via prompt-based fine-tuning of a GPT-2 model
    Shrestha, Palistha
    Kandel, Jeevan
    Tayara, Hilal
    Chong, Kil To
    NATURE COMMUNICATIONS, 2024, 15 (01)
  • [5] Why Don't Prompt-Based Fairness Metrics Correlate?
    Zayed, Abdelrahman
    Mordido, Goncalo
    Baldini, Ioana
    Chandar, Sarath
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 9002 - 9019
  • [6] Foundations and Applications in Large-scale AI Models: Pre-training, Fine-tuning, and Prompt-based Learning
    Cheng, Derek
    Patel, Dhaval
    Pang, Linsey
    Mehta, Sameep
    Xie, Kexin
    Chi, Ed H.
    Liu, Wei
    Chawla, Nitesh
    Bailey, James
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 5853 - 5854
  • [7] LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning
    Abaskohi, Amirhossein
    Rothe, Sascha
    Yaghoobzadeh, Yadollah
    61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023
  • [8] Cold-Start Data Selection for Better Few-shot Language Model Fine-tuning: A Prompt-based Uncertainty Propagation Approach
    Yu, Yue
    Zhang, Rongzhi
    Xu, Ran
    Zhang, Jieyu
    Shen, Jiaming
    Zhang, Chao
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 2499 - 2521
  • [9] P3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning
    Hu, Xiaomeng
    Yu, Shi
    Xiong, Chenyan
    Liu, Zhenghao
    Liu, Zhiyuan
    Yu, Ge
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 1956 - 1962
  • [10] Prompt-Oriented Fine-Tuning Dual Bert for Aspect-Based Sentiment Analysis
    Yin, Wen
    Xu, Yi
    Liu, Cencen
    Zheng, Dezhang
    Wang, Qi
    Liu, Chuanjie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PART X, 2023, 14263 : 505 - 517