Encoding Emotional Information for Sequence-to-Sequence Response Generation

Cited by: 0
Authors
Chan, Yin Hei [1 ]
Lui, Andrew Kwok Fai [1 ]
Affiliations
[1] Open Univ Hong Kong, Comp, Sch Sci & Technol, Hong Kong, Hong Kong, Peoples R China
Keywords
chatbots; conversational agents; long short term memory; recurrent neural network; encoder-decoder framework; emotional response;
DOI
Not available
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces an alternative approach to embedding emotional information at the encoder stage of a sequence-to-sequence based emotional response generation model. It explores different positions and styles of the embedding, representing associations of emotion with specific words or with the whole sentence. The experiments were set up with a standard dataset as well as a dataset annotated with emotion classifiers. Preliminary results showed that this new approach better represents sentence-level emotion and works well with a standard Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) architecture.
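The abstract describes injecting emotional information at the encoder stage of a sequence-to-sequence LSTM model. Since the paper itself is not reproduced here, the following PyTorch snippet is only a minimal sketch of one plausible reading of that idea, the sentence-level style: a single emotion label is embedded and concatenated to every token embedding before the encoder LSTM. All names (EmotionAwareEncoder, emo_dim, etc.) and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EmotionAwareEncoder(nn.Module):
    """Hypothetical LSTM encoder with a sentence-level emotion embedding
    concatenated to every token embedding (illustrative sketch only)."""

    def __init__(self, vocab_size, num_emotions, emb_dim=128, emo_dim=16, hidden_dim=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, emb_dim)
        self.emotion_emb = nn.Embedding(num_emotions, emo_dim)
        # The encoder LSTM consumes token embedding + emotion embedding.
        self.lstm = nn.LSTM(emb_dim + emo_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids, emotion_id):
        # token_ids: (batch, seq_len); emotion_id: (batch,)
        tokens = self.token_emb(token_ids)                    # (batch, seq_len, emb_dim)
        emotion = self.emotion_emb(emotion_id)                # (batch, emo_dim)
        # Broadcast the sentence-level emotion vector over all time steps.
        emotion = emotion.unsqueeze(1).expand(-1, tokens.size(1), -1)
        encoder_input = torch.cat([tokens, emotion], dim=-1)  # (batch, seq_len, emb_dim + emo_dim)
        outputs, state = self.lstm(encoder_input)
        return outputs, state                                 # passed on to a standard seq2seq decoder

# Usage sketch: encode two 5-token utterances labelled with emotion class 3.
encoder = EmotionAwareEncoder(vocab_size=10000, num_emotions=6)
token_ids = torch.randint(0, 10000, (2, 5))
emotion_id = torch.tensor([3, 3])
outputs, state = encoder(token_ids, emotion_id)
print(outputs.shape)  # torch.Size([2, 5, 256])
```

A word-level variant, also mentioned in the abstract, would instead attach emotion vectors to individual tokens rather than broadcasting one vector over the whole sentence.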
Pages: 113-116
Number of pages: 4
Related Papers
50 records in total
  • [1] Data generation using sequence-to-sequence
    Joshi, Akshat
    Mehta, Kinal
    Gupta, Neha
    Valloli, Varun Kannadi
    [J]. 2018 IEEE RECENT ADVANCES IN INTELLIGENT COMPUTATIONAL SYSTEMS (RAICS), 2018, : 108 - 112
  • [2] Controllable Question Generation via Sequence-to-Sequence Neural Model with Auxiliary Information
    Cao, Zhen
    Tatinati, Sivanagaraja
    Khong, Andy W. H.
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [3] An Overview & Analysis of Sequence-to-Sequence Emotional Voice Conversion
    Yang, Zijiang
    Jing, Xin
    Triantafyllopoulos, Andreas
    Song, Meishu
    Aslan, Ilhan
    Schuller, Bjoern W.
    [J]. INTERSPEECH 2022, 2022, : 4915 - 4919
  • [4] Sequence-to-Sequence Emotional Voice Conversion With Strength Control
    Choi, Heejin
    Hahn, Minsoo
    [J]. IEEE ACCESS, 2021, 9 : 42674 - 42687
  • [5] Sequence-to-sequence AMR Parsing with Ancestor Information
    Yu, Chen
    Gildea, Daniel
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): (SHORT PAPERS), VOL 2, 2022, : 571 - 577
  • [6] ReBoost: a retrieval-boosted sequence-to-sequence model for neural response generation
    Zhu, Yutao
    Dou, Zhicheng
    Nie, Jian-Yun
    Wen, Ji-Rong
    [J]. INFORMATION RETRIEVAL JOURNAL, 2020, 23 (01): : 27 - 48
  • [8] Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
    Konstas, Ioannis
    Iyer, Srinivasan
    Yatskar, Mark
    Choi, Yejin
    Zettlemoyer, Luke
    [J]. PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017, : 146 - 157
  • [9] Myanmar News Headline Generation with Sequence-to-Sequence model
    Thu, Yamin
    Pa, Win Pa
    [J]. PROCEEDINGS OF 2020 23RD CONFERENCE OF THE ORIENTAL COCOSDA INTERNATIONAL COMMITTEE FOR THE CO-ORDINATION AND STANDARDISATION OF SPEECH DATABASES AND ASSESSMENT TECHNIQUES (ORIENTAL-COCOSDA 2020), 2020, : 117 - 122
  • [10] A Fuzzy Training Framework for Controllable Sequence-to-Sequence Generation
    Li, Jiajia
    Wang, Ping
    Li, Zuchao
    Liu, Xi
    Utiyama, Masao
    Sumita, Eiichiro
    Zhao, Hai
    Ai, Haojun
    [J]. IEEE ACCESS, 2022, 10 : 92467 - 92480