CATE: A Contrastive Pre-trained Model for Metaphor Detection with Semi-supervised Learning

Times Cited: 0
|
Authors
Lin, Zhenxi [1 ,2 ]
Ma, Qianli [1 ]
Yan, Jiangyue [1 ]
Chen, Jieyu [3 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Tencent Jarvis Lab, Shenzhen, Peoples R China
[3] Hong Kong Polytech Univ, Dept English & Commun, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Metaphors are ubiquitous in natural language, and detecting them requires contextual reasoning about whether a semantic incongruence actually exists. Most existing work addresses this problem using pre-trained contextualized models. Despite their success, these models require a large amount of labeled data and are not linguistically grounded. In this paper, we propose a ContrAstive pre-Trained modEl (CATE) for metaphor detection with semi-supervised learning. Our model first uses a pre-trained model to obtain contextual representations of target words and employs a contrastive objective, grounded in linguistic theories, to increase the distance between the literal and metaphorical senses of target words. Furthermore, we propose a simple strategy for collecting large-scale candidate instances from a general corpus and generalize the model via self-training. Extensive experiments show that CATE outperforms state-of-the-art baselines on several benchmark datasets.
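The contrastive objective described in the abstract can be illustrated with a minimal, hypothetical sketch: an InfoNCE-style loss that pulls a target word's representation toward a same-sense example and pushes it away from opposite-sense examples. The function names, vector inputs, and temperature value below are illustrative assumptions, not taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: low when the anchor is close to
    the positive (same sense) and far from the negatives (other sense)."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# A well-separated case yields a small loss; a confused case a large one.
good = contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
bad = contrastive_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimizing such a loss over literal/metaphorical sense pairs encourages the encoder to keep the two senses of a target word apart in representation space, which is the intuition the abstract appeals to.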
Pages: 3888-3898
Page Count: 11