Emotional Paraphrasing Using Pre-trained Language Models

Cited by: 2
Authors
Casas, Jacky [1 ]
Torche, Samuel [1 ]
Daher, Karl [1 ]
Mugellini, Elena [1 ]
Abou Khaled, Omar [1 ]
Affiliations
[1] HES SO Univ Appl Sci & Arts Western Switzerland, Fribourg, Switzerland
Keywords
natural language processing; affective computing; artificial intelligence; paraphrasing; style transfer; emotions
DOI
10.1109/ACIIW52867.2021.9666309
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Emotion style transfer is a recent and challenging problem in Natural Language Processing (NLP). Transformer-based language models have become extremely powerful, which raises the question of whether they can be leveraged for emotion style transfer; so far, previous work has not applied transformer-based models to this task. To address it, we fine-tune a GPT-2 model on corrupted emotional data, which trains the model to increase the emotional intensity of an input sentence. Coupled with a paraphrasing model, this yields a system capable of transferring an emotion into a paraphrase. We conducted a qualitative study with human judges as well as a quantitative evaluation. Although the paraphrase metrics show poor performance compared to the state of the art, the transfer of emotion proved effective, especially for fear, sadness, and disgust: the perception of these emotions improved in both the automatic and the human evaluations. Such technology can significantly facilitate the automatic creation of training sentences for natural language understanding (NLU) systems, and it can also be integrated into an emotional or empathic dialogue architecture.
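
The abstract describes fine-tuning GPT-2 on corrupted (emotion-weakened) emotional data so that the model learns to restore, and thereby intensify, the emotional content of an input sentence. The sketch below illustrates one way such a training setup could look; it is a minimal illustration under explicit assumptions: the base "gpt2" checkpoint, the separator string, and the word-dropping corruption function are placeholders chosen for the example, not details taken from the paper.

```python
# Hypothetical sketch of the denoising-style fine-tuning described in the abstract:
# GPT-2 is trained on pairs (corrupted sentence -> original emotional sentence),
# so that at inference time it rewrites an input with higher emotional intensity.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

SEP = " => "  # assumed separator between corrupted input and emotional target

def corrupt(sentence: str, emotion_words=("terrified", "devastated", "disgusted")) -> str:
    """Toy 'corruption': drop emotionally charged words.
    The actual corruption procedure used by the authors is not specified in the abstract."""
    return " ".join(w for w in sentence.split() if w.strip(".,!").lower() not in emotion_words)

def make_training_text(emotional_sentence: str) -> str:
    """One causal-LM training example: corrupted version as prompt, emotional original as target."""
    return corrupt(emotional_sentence) + SEP + emotional_sentence + tokenizer.eos_token

example = make_training_text("I was absolutely terrified when the lights went out.")
batch = tokenizer(example, return_tensors="pt")

# Standard causal-LM loss over the full sequence (a real setup might mask the prompt tokens).
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()  # single illustrative gradient step; a real run would use an optimizer loop
```

At inference time, the abstract's full pipeline would first paraphrase the input with a separate paraphrasing model and then feed the paraphrase (as the text before the separator) to the fine-tuned GPT-2 to raise its emotional intensity.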
Pages: 7