Evaluating emotional and subjective responses in synthetic art-related dialogues: A multi-stage framework with large language models

Cited by: 2
Authors
Luna-Jimenez, Cristina [1 ]
Gil-Martin, Manuel [1 ]
D'Haro, Luis Fernando [1 ]
Fernandez-Martinez, Fernando [1 ]
San-Segundo, Ruben [1 ]
Affiliations
[1] Univ Politecn Madrid, Grp Tecnol Habla & Aprendizaje Automat THAU Grp, Informat Proc & Telecommun Ctr, ETSI Telecomunicac, Av Complutense 30, Madrid 28040, Spain
Keywords
Data and text mining; Dialogues generation; Dialogues evaluation; Affective-computing;
DOI
10.1016/j.eswa.2024.124524
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of Large Language Models (LLMs) has marked a qualitative step forward in the performance of conversational agents, and even in the generation of creative texts. However, previous applications of these models to dialogue generation neglected the impact of 'hallucinations' when producing synthetic dialogues, omitting this central aspect from their evaluations. For this reason, we propose an open-source and flexible framework called GenEvalGPT: a comprehensive multi-stage evaluation strategy utilizing diverse metrics. The objective is twofold: first, to assess the extent to which synthetic dialogues between a chatbot and a human align with the specified commands, determining whether these dialogues were successfully created according to the provided specifications; and second, to evaluate various aspects of emotional and subjective responses. Assuming that the dialogues to be evaluated were synthetically produced from specific profiles, the first evaluation stage uses LLMs to reconstruct the original templates employed in dialogue creation. The success of this reconstruction is then assessed in a second stage using lexical and semantic objective metrics. On the other hand, crafting a chatbot's behaviors demands careful consideration to cover the diverse range of interactions it is meant to engage in. Synthetic dialogues play a pivotal role in this context, as they can be deliberately synthesized to emulate various behaviors. This is precisely the objective of the third stage: evaluating whether the generated dialogues adhere to the required aspects concerning emotional and subjective responses. To validate the capabilities of the proposed framework, we applied it to recognize whether the chatbot exhibited one of two distinct behaviors in the synthetically generated dialogues: being emotional and providing subjective responses, or remaining neutral.
This evaluation encompasses traditional metrics and automatic metrics generated by the LLM. In our use case of art-related dialogues, our findings reveal that the capacity to recover templates or profiles is more effective for information or profile items that are objective and factual than for those related to mental states or subjective facts. For the emotional and subjective behavior assessment, rule-based metrics achieved 79% accuracy in detecting emotions or subjectivity, and the LLM-based automatic metrics achieved 82%. The combination of these metrics and stages could help decide which of the generated dialogues should be kept depending on the applied policy, with policies preserving between 57% and 93% of the initial dialogues.
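The second stage described above scores how well an LLM recovers the original profile template from a synthetic dialogue. As a minimal sketch of that idea (not the paper's actual implementation), one can compute a lexical similarity between each original profile item and its reconstruction, then apply a keep/discard policy with a threshold; the profile items, function names, and the 0.7 threshold below are illustrative assumptions.

```python
from difflib import SequenceMatcher

def lexical_recovery_score(original: str, reconstructed: str) -> float:
    """Lexical similarity (0.0-1.0) between an original profile item and
    the version an LLM reconstructed from a synthetic dialogue."""
    return SequenceMatcher(None, original.lower(), reconstructed.lower()).ratio()

def keep_dialogue(scores: dict, threshold: float = 0.7) -> bool:
    """A simple policy: retain a dialogue only if every profile item was
    recovered above the threshold."""
    return all(s >= threshold for s in scores.values())

# Hypothetical profile: two factual items and one subjective (mental-state) item.
scores = {
    "artist": lexical_recovery_score("Claude Monet", "Claude Monet"),
    "style": lexical_recovery_score("Impressionism", "Impressionist"),
    "mood": lexical_recovery_score("nostalgic and calm", "cheerful"),
}
print(keep_dialogue(scores))  # → False: the subjective item is poorly recovered
```

Consistent with the abstract's finding, factual items (artist, style) recover with high similarity while the subjective item does not, so a stricter or looser threshold directly controls what fraction of dialogues is preserved.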
Pages: 17
Related papers
4 records
  • [1] Multi-stage guided code generation for Large Language Models
    Han, Yewei
    Lyu, Chen
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [2] FEEL: A Framework for Evaluating Emotional Support Capability with Large Language Models
    Zhang, Huaiwen
    Chen, Yu
    Wang, Ming
    Feng, Shi
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XIII, ICIC 2024, 2024, 14874 : 96 - 107
  • [3] LegalReasoner: A Multi-Stage Framework for Legal Judgment Prediction via Large Language Models and Knowledge Integration
    Wang, Xuran
    Zhang, Xinguang
    Hoo, Vanessa
    Shao, Zhouhang
    Zhang, Xuguang
    IEEE ACCESS, 2024, 12 : 166843 - 166854
  • [4] Evaluating the reliability of the responses of large language models to keratoconus-related questions
    Kayabasi, Mustafa
    Koksaldi, Seher
    Engin, Ceren Durmaz
    CLINICAL AND EXPERIMENTAL OPTOMETRY, 2024,