Sentiment analysis in textual, visual and multimodal inputs using recurrent neural networks

Cited by: 0
Authors
Jitendra V. Tembhurne
Tausif Diwan
Affiliations
[1] Indian Institute of Information Technology, Department of Computer Science & Engineering
Source
Multimedia Tools and Applications
Keywords
Sentiment analysis; Emotion detection; Deep learning; Recurrent neural network; Long short term memory; Gated recurrent unit
DOI
Not available
Abstract
Social networking platforms have witnessed tremendous growth in textual, visual, audio, and mixed-mode content expressing views and opinions. Hence, Sentiment Analysis (SA) and Emotion Detection (ED) of social networking posts, blogs, and conversations are very useful and informative for mining opinions on different issues, entities, or aspects. Various statistical and probabilistic models based on lexical and machine learning approaches have been employed for these tasks, and the majority of the literature reflects an emphasis on improving contemporary tools, techniques, models, and approaches. With recent developments in deep neural networks, various deep learning models are being heavily experimented with to enhance accuracy on the aforementioned tasks. Recurrent Neural Networks (RNNs) and their architectural variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), comprise an important category of deep neural networks, primarily suited to feature extraction from temporal and sequential inputs. Since the input to SA and related tasks may be textual, visual, audio, or any combination of these, and carries an inherent sequentiality, we critically investigate the role of sequential deep neural networks in sentiment analysis of multimodal data. Specifically, we present an extensive review of the applicability, challenges, issues, and approaches for textual, visual, and multimodal SA using RNNs and their architectural variants.
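To make the class of models discussed in the abstract concrete, the sketch below shows a minimal multimodal sentiment classifier of the kind the survey covers: a GRU encodes the word sequence of a post, and its final hidden state is fused by simple concatenation with a precomputed visual feature vector. This is an illustrative example rather than any method from the reviewed papers; the class name, the dimensions (vocabulary size, embedding, hidden, and visual feature sizes), the three sentiment classes, and the concatenation-based late fusion are all assumptions made for the sketch, with PyTorch used purely for convenience.

```python
# Minimal, illustrative sketch (not from the surveyed papers): a GRU text
# encoder fused with a precomputed visual feature vector for sentiment
# classification. All dimensions and the concatenation fusion are assumed.
import torch
import torch.nn as nn

class MultimodalGRUSentiment(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 visual_dim=2048, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Late fusion: concatenate the textual summary with the image feature
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + visual_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, token_ids, visual_features):
        # token_ids: (batch, seq_len) word indices
        # visual_features: (batch, visual_dim), e.g. pooled CNN features
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, h_n = self.gru(embedded)            # h_n: (1, batch, hidden_dim)
        text_repr = h_n.squeeze(0)             # (batch, hidden_dim)
        fused = torch.cat([text_repr, visual_features], dim=1)
        return self.classifier(fused)          # (batch, num_classes) logits

# Dummy usage: a batch of two padded posts with image feature vectors
model = MultimodalGRUSentiment()
tokens = torch.randint(1, 10000, (2, 20))
images = torch.randn(2, 2048)
print(model(tokens, images).shape)  # torch.Size([2, 3])
```

An LSTM (or a bidirectional variant) can be dropped in for the GRU without changing the overall structure, and attention-based fusion is a common alternative to plain concatenation in the multimodal SA literature.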
Pages: 6871-6910
Number of pages: 39
Related Papers
50 items in total
  • [1] Sentiment analysis in textual, visual and multimodal inputs using recurrent neural networks
    Tembhurne, Jitendra V.
    Diwan, Tausif
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (05) : 6871 - 6910
  • [2] VISUAL AND TEXTUAL SENTIMENT ANALYSIS USING DEEP FUSION CONVOLUTIONAL NEURAL NETWORKS
    Chen, Xingyue
    Wang, Yunhong
    Liu, Qingjie
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1557 - 1561
  • [3] Visual and Textual Sentiment Analysis of a Microblog Using Deep Convolutional Neural Networks
    Yu, Yuhai
    Lin, Hongfei
    Meng, Jiana
    Zhao, Zhehuan
    ALGORITHMS, 2016, 9 (02)
  • [4] Joint Visual-Textual Sentiment Analysis with Deep Neural Networks
    You, Quanzeng
    Luo, Jiebo
    Jin, Hailin
    Yang, Jianchao
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015, : 1071 - 1074
  • [5] Multimodal Sentiment Analysis Using Deep Neural Networks
    Abburi, Harika
    Prasath, Rajendra
    Shrivastava, Manish
    Gangashetty, Suryakanth V.
    MINING INTELLIGENCE AND KNOWLEDGE EXPLORATION (MIKE 2016), 2017, 10089 : 58 - 65
  • [6] Sentiment Analysis Using Gated Recurrent Neural Networks
    Sachin S.
    Tripathi A.
    Mahajan N.
    Aggarwal S.
    Nagrath P.
    SN Computer Science, 2020, 1 (2)
  • [7] Arabic sentiment analysis using recurrent neural networks: a review
    Alhumoud, Sarah Omar
    Al Wazrah, Asma Ali
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (01) : 707 - 748
  • [8] Arabic sentiment analysis using recurrent neural networks: a review
    Sarah Omar Alhumoud
    Asma Ali Al Wazrah
    Artificial Intelligence Review, 2022, 55 : 707 - 748
  • [9] Visual and Textual Sentiment Analysis of Brand-Related Social Media Pictures Using Deep Convolutional Neural Networks
    Paolanti, Marina
    Kaiser, Carolin
    Schallner, Rene
    Frontoni, Emanuele
    Zingaretti, Primo
    IMAGE ANALYSIS AND PROCESSING (ICIAP 2017), PT I, 2017, 10484 : 402 - 413
  • [10] Fusing audio, visual and textual clues for sentiment analysis from multimodal content
    Poria, Soujanya
    Cambria, Erik
    Howard, Newton
    Huang, Guang-Bin
    Hussain, Amir
    NEUROCOMPUTING, 2016, 174 : 50 - 59