Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction

Cited by: 0
|
Authors
Ma, Xilai [1 ]
Li, Jing [1 ]
Zhang, Min [1 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Peoples R China
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023 | 2023
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot relation extraction involves identifying the type of relationship between two specific entities within a text, using only a limited number of annotated samples. A variety of solutions to this problem have emerged from meta-learning and neural graph techniques, which typically require a training process for adaptation. Recently, in-context learning has demonstrated notable results without any training, and a few studies have already applied it to zero-shot information extraction. Unfortunately, the evidence needed for inference is either ignored or only implicitly modeled during the construction of chain-of-thought prompts. In this paper, we propose CoT-ER, chain-of-thought with explicit evidence reasoning, a novel approach to few-shot relation extraction with large language models. In particular, CoT-ER first induces the large language model to generate evidence using task-specific and concept-level knowledge; this evidence is then explicitly incorporated into the chain-of-thought prompt for relation extraction. Experimental results demonstrate that CoT-ER (with 0% training data) achieves competitive performance compared to the fully supervised (100% training data) state-of-the-art approach on the FewRel 1.0 and FewRel 2.0 datasets.
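For readers who want a concrete picture of the prompting scheme the abstract outlines, a minimal sketch follows. It assumes a generic `call_llm` text-completion callback, and the prompt wording and step phrasing are illustrative assumptions, not the authors' released prompts; the paper itself specifies how evidence is generated from task-specific and concept-level knowledge and how demonstrations are built.

```python
# Minimal sketch of chain-of-thought prompting with explicit evidence steps:
# the prompt first elicits concept-level evidence for the head and tail
# entities, then asks the model to pick a relation grounded in that evidence.
# NOTE: prompt wording, step phrasing, and the `call_llm` callback are
# illustrative assumptions, not the authors' released implementation.
from typing import Callable, List


def build_cot_er_prompt(context: str, head: str, tail: str,
                        candidate_relations: List[str],
                        demonstrations: List[str]) -> str:
    """Compose a CoT-style prompt whose reasoning steps surface explicit evidence."""
    demo_block = "\n\n".join(demonstrations)  # few-shot exemplars with worked reasoning
    return (
        f"{demo_block}\n\n"
        f"Context: {context}\n"
        f"Head entity: {head}\n"
        f"Tail entity: {tail}\n"
        f"Candidate relations: {', '.join(candidate_relations)}\n"
        "Step 1: State the higher-level concept of the head entity.\n"
        "Step 2: State the higher-level concept of the tail entity.\n"
        "Step 3: Quote the span of the context that serves as evidence "
        "connecting the two entities.\n"
        "Step 4: Using that evidence, select exactly one candidate relation.\n"
        "Answer:"
    )


def extract_relation(call_llm: Callable[[str], str], context: str, head: str,
                     tail: str, relations: List[str], demos: List[str]) -> str:
    """Run one evidence-grounded query and return the model's raw output."""
    return call_llm(build_cot_er_prompt(context, head, tail, relations, demos))
```

In this sketch the demonstrations would themselves contain worked Step 1-4 reasoning, so the model imitates explicit evidence extraction before committing to a relation label rather than leaving the supporting evidence implicit.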
Pages: 2334-2352
Number of pages: 19
Related Papers
50 records in total
  • [41] Joint data augmentation and knowledge distillation for few-shot continual relation extraction
    Wei, Zhongcheng
    Zhang, Yunping
    Lian, Bin
    Fan, Yongjian
    Zhao, Jijun
    APPLIED INTELLIGENCE, 2024, 54 (04) : 3516 - 3528
  • [42] Interaction Information Guided Prototype Representation Rectification for Few-Shot Relation Extraction
    Ma, Xiaoqin
    Qin, Xizhong
    Liu, Junbao
    Ran, Wensheng
    ELECTRONICS, 2023, 12 (13)
  • [43] Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction
    Sainz, Oscar
    de Lacalle, Oier Lopez
    Labaka, Gorka
    Barrena, Ander
    Agirre, Eneko
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1199 - 1212
  • [44] Knowledge-enhanced meta-prompt for few-shot relation extraction
    Cui, Jinman
    Xu, Fu
    Wang, Xinyang
    Li, Yakun
    Qu, Xiaolong
    Yao, Lei
    Li, Dongmei
    COMPUTER SPEECH AND LANGUAGE, 2025, 91
  • [45] Modeling of Few-Shot Relation Extraction Based on Adaptive Self-Training
    Chen H.
    Zheng J.
    Cai F.
    Han Y.
Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2023, 60 (07): 1581 - 1591
  • [46] DiffFSRE: Diffusion-Enhanced Prototypical Network for Few-Shot Relation Extraction
    Chen, Yang
    Shi, Bowen
    ENTROPY, 2024, 26 (05)
  • [47] SARF: Aliasing Relation-Assisted Self-Supervised Learning for Few-Shot Relation Reasoning
    Meng, Lingyuan
    Liang, Ke
    Xiao, Bin
    Zhou, Sihang
    Liu, Yue
    Liu, Meng
    Yang, Xihong
    Liu, Xinwang
    Li, Jinyan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (02) : 3587 - 3597
  • [48] Continual Few-Shot Relation Extraction with Prompt-Based Contrastive Learning
    Wu, Fei
    Zhang, Chong
    Tan, Zhen
    Xu, Hao
    Ge, Bin
    WEB AND BIG DATA, PT IV, APWEB-WAIM 2023, 2024, 14334 : 312 - 327
  • [49] Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction
    Luo, Da
    Gan, Yanglei
    Hou, Rui
    Lin, Run
    Liu, Qiao
    Cai, Yuxiang
    Gao, Wannian
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 18742 - 18750