Multi-Mask Label Mapping for Prompt-Based Learning

Citations: 0
Authors
Qi, Jirui [1 ]
Zhang, Richong [1 ,2 ]
Kim, Jaein [1 ]
Chen, Junfan [1 ]
Qin, Wenyi [1 ]
Mao, Yongyi [3 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, SKLSDE, Beijing, Peoples R China
[2] Zhongguancun Lab, Beijing, Peoples R China
[3] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON, Canada
Funding
National Key Research and Development Program of China;
DOI
not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Prompt-based learning has shown significant success in few-shot classification. The mainstream approach concatenates a template to the input text, transforming the classification task into a cloze-type task in which label mapping plays an important role in recovering the ground-truth labels. However, current label mapping methods use only the context of a single input, which becomes problematic when the text contains misleading information. Specifically, recent work has shown that even large pre-trained language models such as BERT and RoBERTa make classification decisions that depend heavily on a specific keyword, regardless of the task or the context. Such a word is referred to as a lexical cue, and a misleading lexical cue in an instance will lead the model to a wrong prediction. We propose a multi-mask prompt-based approach with Multi-Mask Label Mapping (MMLM) that reduces the impact of misleading lexical cues by allowing the model to exploit multiple lexical cues. To satisfy the conditions of few-shot learning, we further propose an instance augmentation approach for the cloze-type model, through which misleading cues are gradually excluded during training. We demonstrate the effectiveness of MMLM through both theoretical analysis and empirical studies, and show that MMLM outperforms existing label mapping approaches.
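The intuition behind the abstract — that aggregating predictions from several [MASK] positions prevents any single misleading lexical cue from dominating the decision — can be illustrated with a minimal sketch in plain Python. All scores and labels below are hypothetical; the paper's actual MMLM method uses a learned verbalizer over a masked language model, not these hand-set probabilities:

```python
import math

def aggregate_multi_mask(per_mask_scores, labels):
    """Combine label scores from several mask positions by summing
    log-probabilities, so one mask misled by a lexical cue is
    outweighed by the others (a simplified stand-in for MMLM)."""
    combined = {
        label: sum(math.log(scores[label]) for scores in per_mask_scores)
        for label in labels
    }
    return max(combined, key=combined.get)

# Hypothetical softmax outputs at three [MASK] positions for a
# two-class sentiment task; the second mask is misled by a cue word.
per_mask = [
    {"positive": 0.9, "negative": 0.1},
    {"positive": 0.2, "negative": 0.8},  # misleading lexical cue
    {"positive": 0.8, "negative": 0.2},
]
print(aggregate_multi_mask(per_mask, ["positive", "negative"]))  # positive
```

With a single mask, the misled position alone would flip the prediction to "negative"; summing log-probabilities across masks lets the two uncorrupted positions dominate.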
Pages: 13465-13473 (9 pages)
Related Papers
50 in total
  • [21] Prompt-based learning for few-shot class-incremental learning
    Yuan, Jicheng
    Chen, Hang
    Tian, Songsong
    Li, Wenfa
    Li, Lusi
    Ning, Enhao
    Zhang, Yugui
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 120 : 287 - 295
  • [22] Prompt-Based Learning for Image Variation Using Single Image Multi-Scale Diffusion Models
    Park, Jiwon
    Jeong, Dasol
    Lee, Hyebean
    Han, Seunghee
    Paik, Joonki
    IEEE ACCESS, 2024, 12 : 158810 - 158823
  • [23] Aspect category sentiment analysis based on prompt-based learning with attention mechanism
    Ping, Zhichao
    Sang, Guoming
    Liu, Zhi
    Zhang, Yijia
    NEUROCOMPUTING, 2024, 565
  • [24] PROMPTDA : Label-guided Data Augmentation for Prompt-based Few Shot Learners
    Chen, Canyu
    Shu, Kai
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 562 - 574
  • [25] Prompt-based contrastive learning to combat the COVID-19 infodemic
    Peng, Zifan
    Li, Mingchen
    Wang, Yue
    Mo, Daniel Y.
    MACHINE LEARNING, 2025, 114 (01)
  • [26] CLAMP: Prompt-based Contrastive Learning for Connecting Language and Animal Pose
    Zhang, Xu
    Wang, Wen
    Chen, Zhe
    Xu, Yufei
    Zhang, Jing
    Tao, Dacheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 23272 - 23281
  • [27] Semantic-Guided Multi-mask Image Harmonization
    Ren, Xuqian
    Liu, Yifan
    COMPUTER VISION, ECCV 2022, PT XXXVII, 2022, 13697 : 564 - 579
  • [28] PromptCast: A New Prompt-Based Learning Paradigm for Time Series Forecasting
    Xue, Hao
    Salim, Flora D.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 6851 - 6864
  • [29] Contrastive Learning for Prompt-Based Few-Shot Language Learners
    Jian, Yiren
    Gao, Chongyang
    Vosoughi, Soroush
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5577 - 5587
  • [30] When Prompt-based Incremental Learning Does Not Meet Strong Pretraining
    Tang, Yu-Ming
    Peng, Yi-Xing
    Zheng, Wei-Shi
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1706 - 1716