Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging

Cited by: 0
Authors
Hou, Yutai [1 ]
Chen, Cheng [1 ]
Luo, Xianzhen [1 ]
Li, Bohan [1 ]
Che, Wanxiang [1 ]
Affiliations
[1] Harbin Inst Technol, Res Ctr Social Comp & Informat Retrieval, Harbin, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Prompting methods have recently achieved impressive success in few-shot learning. These methods modify input samples with prompt sentence pieces and decode label tokens to map samples to their corresponding labels. However, this paradigm is very inefficient for slot tagging. Since slots are spans of consecutive words in a sentence, prompting methods must enumerate all n-gram token spans to find every possible slot, which greatly slows down prediction. To tackle this, we introduce an inverse paradigm for prompting: rather than mapping tokens to labels as classic prompts do, we reversely predict slot values given slot types. Such inverse prompting requires only a one-turn prediction for each slot type and greatly speeds up prediction. In addition, we propose a novel Iterative Prediction Strategy, from which the model learns to refine predictions by considering the relations between different slot types. We find, somewhat surprisingly, that the proposed method not only predicts faster but also significantly improves accuracy (an improvement of over 6.1 F1 points in the 10-shot setting), achieving new state-of-the-art performance.
Pages: 637-647
Page count: 11