Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model

Cited by: 2
Authors
Hama, Kenta [1 ]
Otsuka, Atsushi [1 ]
Ishii, Ryo [1 ]
Affiliations
[1] NTT Corp, NTT Digital Twin Comp Res Ctr, 29F Shinagawa Season Terrace,2-70 Konan 1 Chome, Tokyo 1080075, Japan
Keywords
emotion recognition; large language model; few-shot learning; prompt engineering
DOI
10.1007/978-3-031-61281-7_24
CLC number
TP39 [Computer Applications];
Discipline code
081203 ; 0835 ;
Abstract
Emotion recognition plays a crucial role in computer science, particularly in enhancing human-computer interaction. Emotion labeling, however, remains time-consuming and costly, which impedes efficient dataset creation. Recently, large language models (LLMs) have demonstrated adaptability across a variety of tasks without requiring task-specific training, suggesting that LLMs may be able to recognize emotions even with few labeled examples. We therefore assessed the performance of an LLM on emotion recognition using two established datasets: MELD and IEMOCAP. Our findings reveal that for emotion labels with few training samples, the LLM approaches or even exceeds the performance of SPCL, a leading model specializing in text-based emotion recognition. In addition, inspired by Chain-of-Thought prompting, we incorporated a multi-step prompting technique to further enhance the LLM's ability to discriminate between emotion labels. These results underscore the potential of LLMs to reduce the time and cost of emotion data labeling.
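The multi-step idea sketched in the abstract (a coarse decision first, then a fine-grained choice among fewer labels) could look roughly like the following. Everything here is an illustrative assumption, not the authors' method: the two-step decomposition, the prompt wording, the MELD-style label grouping, and the `call_llm` stand-in for whatever chat-completion API is used.

```python
# Hypothetical two-step prompting sketch: step 1 asks for a coarse sentiment,
# step 2 asks for a fine-grained emotion restricted to that sentiment's labels.
# The label grouping below is an assumption loosely based on MELD's label set.

COARSE_TO_FINE = {
    "positive": ["joy"],
    "negative": ["anger", "disgust", "fear", "sadness"],
    "neutral": ["neutral", "surprise"],
}

def step1_prompt(context: list[str], utterance: str) -> str:
    # First prompt: narrow the answer space to a coarse sentiment.
    history = "\n".join(context)
    return (
        "Conversation so far:\n" + history + "\n"
        f"Target utterance: {utterance}\n"
        "Is the speaker's sentiment positive, negative, or neutral? "
        "Answer with one word."
    )

def step2_prompt(utterance: str, coarse: str) -> str:
    # Second prompt: choose a fine-grained label within the coarse group.
    labels = ", ".join(COARSE_TO_FINE[coarse])
    return (
        f'The utterance "{utterance}" expresses a {coarse} sentiment.\n'
        f"Which emotion fits best: {labels}? Answer with one label."
    )

def recognize_emotion(call_llm, context: list[str], utterance: str) -> str:
    coarse = call_llm(step1_prompt(context, utterance)).strip().lower()
    if coarse not in COARSE_TO_FINE:
        coarse = "neutral"  # fall back if the model answers off-list
    fine = call_llm(step2_prompt(utterance, coarse)).strip().lower()
    # Keep the answer inside the allowed label set.
    return fine if fine in COARSE_TO_FINE[coarse] else COARSE_TO_FINE[coarse][0]
```

Restricting step 2 to a small candidate set is the point of the decomposition: the model never has to discriminate among all seven labels at once.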
Pages: 338-346
Page count: 9
Related Papers
50 records in total
  • [1] Chinese Metaphor Recognition Using a Multi-stage Prompting Large Language Model
    Wang, Jie
    Wang, Jin
    Zhang, Xuejie
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT V, NLPCC 2024, 2025, 15363 : 234 - 246
  • [2] Multi-step Prompting for Few-shot Emotion-Grounded Conversations
    Firdaus, Mauajama
    Singh, Gopendra Vikram
    Ekbal, Asif
    Bhattacharyya, Pushpak
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3886 - 3891
  • [3] Distilling Multi-Step Reasoning Capabilities into Smaller Language Model
    Yim, Yauwai
    Wang, Zirui
    2024 16TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, ICMLC 2024, 2024, : 530 - 535
  • [4] Multi-step Iterative Automated Domain Modeling with Large Language Models
    Yang, Yujing
    Chen, Boqi
    Chen, Kua
    Mussbacher, Gunter
    Varro, Daniel
    ACM/IEEE 27TH INTERNATIONAL CONFERENCE ON MODEL DRIVEN ENGINEERING LANGUAGES AND SYSTEMS: COMPANION PROCEEDINGS, MODELS 2024, 2024, : 587 - 595
  • [5] Interactive Emotion Inference Model for Emotion Recognition in Conversation
    Qian, Yanjun
    Zhang, Xuejie
    Wang, Jin
    JOURNAL OF NONLINEAR AND CONVEX ANALYSIS, 2022, 23 (10) : 2175 - 2193
  • [6] Prompting Large Language Models with Speech Recognition Abilities
    Fathullah, Yassir
    Wu, Chunyang
    Lakomkin, Egor
    Jia, Junteng
    Shangguan, Yuan
    Li, Ke
    Guo, Jinxi
    Xiong, Wenhan
    Mahadeokar, Jay
    Kalinli, Ozlem
    Fuegen, Christian
    Seltzer, Mike
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024), 2024, : 13351 - 13355
  • [7] INFORM: Information eNtropy based multi-step reasoning FOR large language Models
    Zhou, Chuyue
    You, Wangjie
    Li, Juntao
    Ye, Jing
    Chen, Kehai
    Zhang, Min
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3565 - 3576
  • [8] Multimodal Speech Emotion Recognition Based on Large Language Model
    Fang, Congcong
    Jin, Yun
    Chen, Guanlin
    Zhang, Yunfan
    Li, Shidang
    Ma, Yong
    Xie, Yue
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107D (11) : 1463 - 1467
  • [9] MindMap: Constructing Evidence Chains for Multi-Step Reasoning in Large Language Models
    Wu, Yangyu
    Han, Xu
    Song, Wei
    Cheng, Miaomiao
    Li, Fei
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19270 - 19278
  • [10] Multimodal Emotion Recognition with Vision-language Prompting and Modality Dropout
    Qi, Anbin
    Liu, Zhongliang
    Zhou, Xinyong
    Xiao, Jinba
    Zhang, Fengrun
    Gan, Qi
    Tao, Ming
    Zhang, Gaozheng
    Zhang, Lu
    PROCEEDINGS OF THE 2ND INTERNATIONAL WORKSHOP ON MULTIMODAL AND RESPONSIBLE AFFECTIVE COMPUTING, MRAC 2024, 2024, : 49 - 53