Cluster-Level Contrastive Learning for Emotion Recognition in Conversations

Cited by: 27
Authors
Yang, Kailai [1 ,2 ]
Zhang, Tianlin [1 ,2 ]
Alhuzali, Hassan [3 ]
Ananiadou, Sophia [1 ,2 ]
Affiliations
[1] Univ Manchester, NaCTeM, Manchester M13 9PL, England
[2] Univ Manchester, Dept Comp Sci, Manchester M13 9PL, England
[3] Umm Al Qura Univ, Coll Comp & Informat Syst, Mecca 24382, Saudi Arabia
Funding
Biotechnology and Biological Sciences Research Council (BBSRC), UK;
Keywords
Emotion recognition; Prototypes; Linguistics; Task analysis; Semantics; Training; Adaptation models; Cluster-level contrastive learning; emotion recognition in conversations; pre-trained knowledge adapters; valence-arousal-dominance; DIALOGUE; FRAMEWORK;
DOI
10.1109/TAFFC.2023.3243463
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key challenge for Emotion Recognition in Conversations (ERC) is to distinguish semantically similar emotions. Some works utilise Supervised Contrastive Learning (SCL), which uses categorical emotion labels as supervision signals and contrasts in a high-dimensional semantic space. However, categorical labels fail to provide quantitative information between emotions. ERC also does not depend equally on all embedded features in the semantic space, which makes high-dimensional SCL inefficient. To address these issues, we propose a novel low-dimensional Supervised Cluster-level Contrastive Learning (SCCL) method, which first reduces the high-dimensional SCL space to a three-dimensional affect representation space, Valence-Arousal-Dominance (VAD), then performs cluster-level contrastive learning to incorporate measurable emotion prototypes. To help model the dialogue and enrich the context, we leverage pre-trained knowledge adapters to infuse linguistic and factual knowledge. Experiments show that our method achieves new state-of-the-art results with 69.81% on IEMOCAP, 65.7% on MELD, and 62.51% on DailyDialog. The analysis also shows that the VAD space is not only suitable for ERC but also interpretable, with VAD prototypes enhancing its performance and stabilising the training of SCCL. In addition, the pre-trained knowledge adapters benefit the performance of the utterance encoder and SCCL. Our code is available at: https://github.com/SteveKGYang/SCCLI
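The cluster-level contrastive step described in the abstract can be sketched as follows. This is a minimal, illustrative reconstruction, not the authors' released implementation: the VAD prototype coordinates and the InfoNCE-style distance objective are assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical VAD prototypes (valence, arousal, dominance), illustrative
# values only; the paper's actual prototypes may differ.
PROTOTYPES = {
    "joy":     np.array([0.98, 0.82, 0.69]),
    "sadness": np.array([0.05, 0.22, 0.30]),
    "anger":   np.array([0.17, 0.87, 0.73]),
    "neutral": np.array([0.50, 0.50, 0.50]),
}

def sccl_loss(vad_preds, labels, temperature=0.1):
    """Cluster-level contrastive loss: each predicted 3-D VAD point is
    pulled toward the prototype of its gold emotion and pushed away from
    the other prototypes via an InfoNCE-style log-softmax over negative
    squared distances."""
    names = list(PROTOTYPES)
    protos = np.stack([PROTOTYPES[n] for n in names])  # shape (C, 3)
    losses = []
    for vad, label in zip(vad_preds, labels):
        # Similarity = negative squared distance, scaled by temperature.
        sims = -np.sum((protos - vad) ** 2, axis=1) / temperature
        sims -= sims.max()  # numerical stability before exponentiation
        log_probs = sims - np.log(np.sum(np.exp(sims)))  # log-softmax
        losses.append(-log_probs[names.index(label)])
    return float(np.mean(losses))
```

The loss is near zero when a prediction sits on its own emotion's prototype and grows as it drifts toward another cluster, which is what pulls semantically similar emotions apart in the low-dimensional space.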
Pages: 3269-3280
Page count: 12
Related Papers
(50 total)
  • [1] Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations
    Hu, Dou
    Bao, Yinan
    Wei, Lingwei
    Zhou, Wei
    Hu, Songlin
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 10835 - 10852
  • [2] Adversarial Cluster-Level and Global-Level Graph Contrastive Learning for node representation
    Tang, Qian
    Zhao, Yiji
    Wu, Hao
    Zhang, Lei
    KNOWLEDGE-BASED SYSTEMS, 2023, 279
  • [3] Learning chain for clause awareness: Triplex-contrastive learning for emotion recognition in conversations
    Liang, Jiazhen
    Li, Wai
    Zhong, Qingshan
    Huang, Jun
    Jiang, Dazhi
    Cambria, Erik
    INFORMATION SCIENCES, 2025, 705
  • [4] Context or Knowledge is Not Always Necessary: A Contrastive Learning Framework for Emotion Recognition in Conversations
    Tu, Geng
    Liang, Bin
    Mao, Ruibin
    Yang, Min
    Xu, Ruifeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 14054 - 14067
  • [5] Universal representation learning for multivariate time series using the instance-level and cluster-level supervised contrastive learning
    Moradinasab, Nazanin
    Sharma, Suchetha
    Bar-Yoseph, Ronen
    Radom-Aizik, Shlomit
    Bilchick, Kenneth C.
    Cooper, Dan M.
    Weltman, Arthur
    Brown, Donald E.
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (03) : 1493 - 1519
  • [8] Weighted cluster-level social emotion classification across domains
    Wang, Fu Lee
    Zhao, Zhengwei
    Cheng, Gary
    Rao, Yanghui
    Xie, Haoran
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (07) : 2385 - 2394
  • [9] Cluster-level Emotion Pattern Matching for Cross-Domain Social Emotion Classification
    Zhu, Endong
    Rao, Yanghui
    Xie, Haoran
    Liu, Yuwei
    Yin, Jian
    Wang, Fu Lee
    CIKM'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2017, : 2435 - 2438
  • [10] Contrastive Unsupervised Learning for Speech Emotion Recognition
    Li, Mao
    Yang, Bo
    Levy, Joshua
    Stolcke, Andreas
    Rozgic, Viktor
    Matsoukas, Spyros
    Papayiannis, Constantinos
    Bone, Daniel
    Wang, Chao
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6329 - 6333