Towards Sentiment-Aware Multi-Modal Dialogue Policy Learning

Cited by: 12
Authors
Saha, Tulika [1 ]
Saha, Sriparna [1 ]
Bhattacharyya, Pushpak [1 ]
Affiliation
[1] Indian Inst Technol Patna, Patna, Bihar, India
Keywords
Multi-intent; Hierarchical reinforcement learning; Multi-modal; Sentiment; Policy learning; Task-oriented
DOI
10.1007/s12559-020-09769-7
CLC Classification Code
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Creating a task-oriented dialogue/virtual agent (VA) capable of managing complex, domain-specific user queries that span multiple intents is difficult, since the agent must handle several subtasks simultaneously. Most end-to-end dialogue systems, however, feed only textual user semantics into the learning process and neglect useful user behaviour and information from other modalities, such as images. This underscores the benefit of incorporating multi-modal inputs for eliciting user preferences. The user's sentiment also plays a significant role in achieving maximum user/customer satisfaction during the conversation, so it is important to incorporate sentiment during policy learning, especially when serving a user's composite goals. Towards creating a sentiment-aided, multi-modal VA for conversations encompassing multiple intents, this paper introduces a new dataset, Vis-SentiVA (Visual and Sentiment aided VA), built from an openly accessible conversational dataset. We present a hierarchical reinforcement learning (HRL) VA, specifically an options-based one, that learns policies for serving multi-intent dialogues. Multi-modal information extraction (from texts and images) to identify user preferences is incorporated into the learning framework, and a combination of task-based and sentiment-based rewards is integrated into the hierarchical value functions to make the VA user-adaptive. Empirically, we show that these aspects, induced together in the learning framework, play a vital role in achieving higher dialogue task success and increased user contentment when building composite-natured VAs. This is the first effort to integrate sentiment-aware rewards into a multi-modal HRL framework. The paper highlights that including other modes of information extraction, such as images, and behavioural cues of the user, such as sentiment, is indeed essential for securing greater user contentment and for improving the success of composite-natured VAs serving task-oriented dialogues.
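The abstract's core mechanism, blending a task-completion reward with a sentiment-derived reward inside a hierarchical (options-style) value update, can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: the additive reward form, the weight, the function names, and the toy tabular update are all hypothetical.

```python
# Illustrative sketch (not the paper's method) of combining a task-based
# reward with a sentiment-based reward, then using the blended signal in
# a toy tabular Q-learning-style update over (state, option) pairs.

def combined_reward(task_reward: float, sentiment_score: float,
                    weight: float = 0.5) -> float:
    """Blend the task-success signal with user sentiment in [-1, 1].

    The additive form and the 0.5 weight are assumptions for illustration.
    """
    return task_reward + weight * sentiment_score


def update_option_value(q: dict, state: str, option: str, next_best: float,
                        task_reward: float, sentiment_score: float,
                        alpha: float = 0.1, gamma: float = 0.9) -> float:
    """One value update for (state, option) using the blended reward."""
    r = combined_reward(task_reward, sentiment_score)
    old = q.get((state, option), 0.0)
    q[(state, option)] = old + alpha * (r + gamma * next_best - old)
    return q[(state, option)]


# Toy usage: the agent finishes a subtask (+1) while the user's utterance
# carries positive sentiment (+1), so the blended reward exceeds the
# task reward alone and the option's value rises accordingly.
q_table: dict = {}
value = update_option_value(q_table, "slots_filled", "book_hotel",
                            next_best=0.0, task_reward=1.0,
                            sentiment_score=1.0)
```

In this sketch a negative sentiment score penalizes an otherwise successful turn, which is the user-adaptivity the abstract describes: the agent is steered toward behaviour that both completes the task and keeps the user content.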
Published in: Cognitive Computation, 2022, 14: 246-260
Pages: 15