A Multimodal Graph Recommendation Method Based on Cross-Attention Fusion

Cited: 0
Authors
Li, Kai [1]
Xu, Long [1]
Zhu, Cheng [1]
Zhang, Kunlun [1]
Affiliations
[1] Natl Univ Def Technol, Natl Key Lab Informat Syst Engn, Changsha 410003, Peoples R China
Keywords
multimodal graph; recommendation method; multimodal information purification; cross-attention mechanism; information fusion
DOI
10.3390/math12152353
Chinese Library Classification
O1 [Mathematics]
Discipline Codes
0701; 070101
Abstract
Recommendation methods that exploit multimodal graph information remain a significant challenge in information services. Prior studies have lacked precision in purifying and denoising multimodal information and have insufficiently explored fusion methods. We introduce a multimodal graph recommendation approach based on cross-attention fusion. The model enhances and purifies multimodal information by embedding the IDs of items and their corresponding interacting users, thereby making fuller use of such information. To improve integration, we propose a cross-attention-based multimodal fusion method that effectively processes and merges both shared and modality-specific information. Experimental results on three public datasets show that our model performs exceptionally well, demonstrating its efficacy in leveraging multimodal information.
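The record does not specify the fusion mechanism's exact formulation, but the core operation named in the abstract, cross-attention between two modalities, follows the standard scaled dot-product pattern: tokens of one modality form the queries, tokens of the other form the keys and values. A minimal single-head sketch (all names, shapes, and the NumPy formulation are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head cross-attention: each query token (one modality)
    attends over all key/value tokens (the other modality) and
    returns a fused representation with the query's token count."""
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (n_q, n_kv)
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # (n_q, d_k)

# Hypothetical example: 4 visual-modality tokens attend to 6
# text-modality tokens, both embedded in 8 dimensions.
rng = np.random.default_rng(0)
visual = rng.normal(size=(4, 8))
text = rng.normal(size=(6, 8))
fused = cross_attention(visual, text)
print(fused.shape)  # (4, 8)
```

In practice each modality would first pass through learned query/key/value projections; the sketch omits these to isolate the attention step itself.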
Pages: 16
Related Papers
50 records
  • [1] Multimodal Cross-Attention Graph Network for Desire Detection
    Gu, Ruitong
    Wang, Xin
    Yang, Qinghong
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257 : 512 - 523
  • [2] Mention Recommendation for Multimodal Microblog with Cross-attention Memory Network
    Ma, Renfeng
    Zhang, Qi
    Wang, Jiawen
    Cui, Lizhen
    Huang, Xuanjing
    [J]. ACM/SIGIR PROCEEDINGS 2018, 2018, : 195 - 204
  • [3] CaEGCN: Cross-Attention Fusion Based Enhanced Graph Convolutional Network for Clustering
    Huo, Guangyu
    Zhang, Yong
    Gao, Junbin
    Wang, Boyue
    Hu, Yongli
    Yin, Baocai
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (04) : 3471 - 3483
  • [4] Cross attention fusion for knowledge graph optimized recommendation
    Huang, Weijian
    Wu, Jianhua
    Song, Weihu
    Wang, Zehua
    [J]. APPLIED INTELLIGENCE, 2022, 52 (09) : 10297 - 10306
  • [6] Dense Graph Convolutional With Joint Cross-Attention Network for Multimodal Emotion Recognition
    Cheng, Cheng
    Liu, Wenzhe
    Feng, Lin
    Jia, Ziyu
    [J]. IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, : 6672 - 6683
  • [7] Vocational Education Information Technology Based on Cross-Attention Fusion Knowledge Map Recommendation Algorithm
    Jiang, Peng
    [J]. JOURNAL OF INFORMATION & KNOWLEDGE MANAGEMENT, 2023, 22 (03)
  • [8] MSER: Multimodal speech emotion recognition using cross-attention with deep fusion
    Khan, Mustaqeem
    Gueaieb, Wail
    El Saddik, Abdulmotaleb
    Kwon, Soonil
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 245
  • [9] Multimodal Dual Cross-Attention Fusion Strategy for Autonomous Garbage Classification System
    Xu, Huxiu
    Tang, Wei
    Li, Zhaoyang
    Qin, Kecheng
    Zou, Jun
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, : 13319 - 13329
  • [10] SCANET: Improving multimodal representation and fusion with sparse- and cross-attention for multimodal sentiment analysis
    Wang, Hao
    Yang, Mingchuan
    Li, Zheng
    Liu, Zhenhua
    Hu, Jie
    Fu, Ziwang
    Liu, Feng
    [J]. COMPUTER ANIMATION AND VIRTUAL WORLDS, 2022, 33 (3-4)