Position Perceptive Multi-Hop Fusion Network for Multimodal Aspect-Based Sentiment Analysis

Citations: 0
Authors
Fan, Hao [1 ]
Chen, Junjie [1 ]
Affiliation
[1] Inner Mongolia Agr Univ, Coll Comp & Informat Engn, Hohhot 010018, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Sentiment analysis; Task analysis; Visualization; Encoding; Bidirectional control; Social networking (online); Spread spectrum communication; Noise measurement; Multimodal; position perceptive; aspect-guided; multi-hop interactions; noise information;
DOI
10.1109/ACCESS.2024.3404261
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
With the increasing prevalence of multimodal user-generated content on social media, Multimodal Aspect-Based Sentiment Analysis (MABSA) has garnered significant attention in recent years. MABSA aims to classify the sentiment polarity of aspects mentioned in textual content by leveraging both textual and visual modalities. Previous studies have primarily focused on using transformer-based models to fuse information from different modalities. However, information in images and text that is irrelevant to a specific aspect can degrade sentiment analysis results, and many approaches still struggle with the noise introduced during multimodal fusion. To address this issue, we propose a novel Position-Perceptive Multi-hop Fusion Network (PPMFN). Our method comprises a position-perceptive module and an aspect-guided multi-hop interactive attention fusion module, which together enhance positional awareness within each modality and use aspect-guided multi-hop interactions to filter out irrelevant noise. Intuitively, the parts of an input closer to the target aspect are more relevant to it. The position-perceptive module captures positional features from both images and text, while the aspect-guided fusion module leverages a multi-hop interactive attention network to merge multimodal features and eliminate irrelevant information from both modalities. Extensive experiments demonstrate that our framework consistently surpasses strong baseline models on two public datasets.
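The two ideas in the abstract (position-based weighting of units near the aspect, and an aspect query that repeatedly attends over both modalities) can be illustrated with a minimal sketch. Everything below is a hypothetical simplification for illustration only: the function names (`position_weights`, `multi_hop_fusion`), the reciprocal distance decay, and the plain dot-product attention are assumptions, not the paper's actual PPMFN architecture, which uses learned transformer-style attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_weights(n_tokens, aspect_idx, decay=0.1):
    # Tokens nearer the aspect term receive larger weights (1.0 at the aspect itself),
    # reflecting the intuition that closer parts are more relevant.
    dist = np.abs(np.arange(n_tokens) - aspect_idx)
    return 1.0 / (1.0 + decay * dist)

def multi_hop_fusion(aspect_vec, text_feats, image_feats, aspect_idx, hops=3):
    # The aspect query attends alternately over text and image features;
    # each hop refines the query, down-weighting aspect-irrelevant units.
    pos_w = position_weights(len(text_feats), aspect_idx)
    q = aspect_vec.copy()
    for _ in range(hops):
        # Text hop: attention scores modulated by positional proximity to the aspect.
        attn_t = softmax(pos_w * (text_feats @ q))
        q = q + attn_t @ text_feats
        # Image hop: plain aspect-guided attention over visual-region features.
        attn_i = softmax(image_feats @ q)
        q = q + attn_i @ image_feats
    return q

rng = np.random.default_rng(0)
fused = multi_hop_fusion(rng.normal(size=8),       # aspect embedding
                         rng.normal(size=(6, 8)),  # 6 token features
                         rng.normal(size=(4, 8)),  # 4 image-region features
                         aspect_idx=2)
print(fused.shape)  # (8,)
```

Each hop sharpens the query's focus: units whose features align with the current aspect representation receive higher attention, so noise that is unrelated to the aspect contributes progressively less to the fused vector.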
Pages: 90586-90595 (10 pages)