Multimodal model for the Spanish sentiment analysis in a tourism domain

Times Cited: 0
Authors
Monsalve-Pulido, Julian [1 ]
Parra, Carlos Alberto [2 ]
Aguilar, Jose [3 ,4 ,5 ]
Institutions
[1] Univ Pedag & Tecnol Colombia, GIMI, Tunja, Colombia
[2] Pontificia Univ Javeriana, Bogota, Colombia
[3] Univ Los Andes, CEMISID, Merida, Venezuela
[4] Univ EAFIT, CIDITIC, Medellin, Colombia
[5] IMDEA Networks Inst, Madrid, Spain
Keywords
Multimodal model; Sentiment analysis; Opinion mining; Spanish language; Tourism;
DOI
10.1007/s13278-024-01202-3
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
The problem of sentiment analysis of tourism data centers on the multimodal characteristics of the data that tourists generate digitally on each platform or social network. Their opinions generally combine text, images, or numbers (ratings), which poses an important challenge for sentiment analysis and requires new models or multimodal data classification techniques. This work proposes a multimodal sentiment analysis model for Spanish-language data in the tourism domain composed of four main phases (extraction, classification, fusion, visualization) and a transversal phase that evaluates the quality of the multimodal sentiment analysis process. The model integrates a data quality model to improve multimodal sentiment analysis tasks and, in addition, adapts the linguistic resource "SenticNet 5" to Spanish. The model was validated with various classification metrics, and the classification results were compared to a manually labeled dataset (TASS) using two machine learning classification algorithms. The first was Random Forest: the manually labeled dataset reached a 50% F1 score, while the automatically generated dataset based on the adapted SenticNet reached a 71% F1 score and 70% accuracy, so the SenticNet-based classification is 21 percentage points higher than that of the TASS dataset. The second algorithm was Support Vector Machine (SVM), which classified the SenticNet-generated dataset with a 72% F1 score versus 57.7% for the manually created dataset (14.3 points higher). In the fusion tests of the multimodal sentiment inputs, accuracy was 65% for text, 33% for images, and 71% for the fusion of both. In general, it was identified that opinions composed of Spanish text and images improve polarity identification when each modality is classified independently and a polarity fusion process is then applied.
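The fusion step the abstract describes — classifying text and image independently, then merging their polarities — can be sketched as a simple decision-level fusion. This is a minimal illustration, not the authors' method: the function name, the weights, and the thresholds are all hypothetical (the weights loosely echo the reported per-modality accuracies, where text outperformed images).

```python
def fuse_polarities(text_score: float, image_score: float,
                    w_text: float = 0.65, w_image: float = 0.35) -> str:
    """Decision-level fusion of per-modality polarity scores.

    text_score / image_score: polarity in [-1, 1] produced by two
    independent classifiers (e.g., one for Spanish text, one for images).
    Returns a discrete polarity label after a weighted average.
    All names, weights, and thresholds here are illustrative.
    """
    fused = (w_text * text_score + w_image * image_score) / (w_text + w_image)
    if fused > 0.1:
        return "positive"
    if fused < -0.1:
        return "negative"
    return "neutral"


# Example: a strongly positive text paired with a mildly positive image
# still fuses to an overall positive opinion.
label = fuse_polarities(0.8, 0.5)
```

The intuition is that a weaker modality (images, 33% accuracy in the paper's tests) can still nudge a borderline text polarity in the right direction, which is consistent with the fused accuracy (71%) exceeding text alone (65%).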
Pages: 18