Collaborative recommendation model based on multi-modal multi-view attention network: Movie and literature cases

Cited: 4
Authors
Hu, Zheng
Cai, Shi-Min [1]
Wang, Jun
Zhou, Tao
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Complex Lab, Chengdu 610054, Peoples R China
Keywords
Recommender system; Multi-modal; Multi-view mechanism
DOI
10.1016/j.asoc.2023.110518
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing collaborative recommendation models that use multi-modal information emphasize representing users' preferences but tend to ignore representing users' dislikes. Yet modelling users' dislikes helps characterize user profiles comprehensively, so the representation of users' dislikes should be integrated into user modelling when constructing a collaborative recommendation model. In this paper, we propose a novel Collaborative Recommendation Model based on a Multi-modal Multi-view Attention Network (CRMMAN), in which users are represented from both a preference view and a dislike view. Specifically, each user's historical interactions are divided into positive and negative interactions, which are used to model the preference and dislike views, respectively. Furthermore, semantic and structural information extracted from the scene is employed to enrich the item representation. We validate CRMMAN with contrast experiments on two benchmark datasets, MovieLens-1M (about one million ratings) and Book-Crossing (about 300,000 ratings). Compared with state-of-the-art knowledge-graph-based and multi-modal recommendation methods, CRMMAN improves AUC, NDCG@5 and NDCG@10 by 2.08%, 2.20% and 2.26%, respectively, averaged over the two datasets. We also conduct controlled experiments to explore the effects of the multi-modal information and the multi-view mechanism; the results show that both enhance the model's performance. (c) 2023 Elsevier B.V. All rights reserved.
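The abstract above outlines the core mechanism: a user's positive and negative interactions are pooled separately by attention to form a preference view and a dislike view. Below is a minimal sketch of that two-view idea in PyTorch; the class, the learned query vectors, the fusion layer, and all dimensions are illustrative assumptions, not the authors' actual CRMMAN architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoViewUserEncoder(nn.Module):
        # Hypothetical sketch: one attention pool over positively rated items
        # (preference view), one over negatively rated items (dislike view).
        def __init__(self, dim: int):
            super().__init__()
            self.query_pos = nn.Parameter(torch.randn(dim))  # preference-view query
            self.query_neg = nn.Parameter(torch.randn(dim))  # dislike-view query
            self.fuse = nn.Linear(2 * dim, dim)              # combines the two views

        @staticmethod
        def attend(query, items, mask):
            # items: (B, T, D) item embeddings; mask: (B, T), True where real.
            scores = torch.matmul(items, query)              # (B, T) dot-product scores
            scores = scores.masked_fill(~mask, float("-inf"))
            weights = F.softmax(scores, dim=-1)              # attention weights
            return (weights.unsqueeze(-1) * items).sum(dim=1)  # weighted pool, (B, D)

        def forward(self, pos_items, pos_mask, neg_items, neg_mask):
            pref = self.attend(self.query_pos, pos_items, pos_mask)
            disl = self.attend(self.query_neg, neg_items, neg_mask)
            return self.fuse(torch.cat([pref, disl], dim=-1))  # fused user vector

    # Toy usage: 2 users, up to 4 positive and 3 negative interactions, dim 8.
    enc = TwoViewUserEncoder(dim=8)
    pos, pos_mask = torch.randn(2, 4, 8), torch.ones(2, 4, dtype=torch.bool)
    neg, neg_mask = torch.randn(2, 3, 8), torch.ones(2, 3, dtype=torch.bool)
    user_vec = enc(pos, pos_mask, neg, neg_mask)             # shape (2, 8)

Keeping a separate query per view lets the model weight the same item-embedding space differently when summarizing what a user likes versus what they dislike, which is the intuition the abstract attributes to the multi-view mechanism.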
Pages: 14
Related papers
50 records in total
  • [31] Multi-modal multi-view Bayesian semantic embedding for community question answering
    Sang, Lei
    Xu, Min
    Qian, ShengSheng
    Wu, Xindong
    NEUROCOMPUTING, 2019, 334 : 44 - 58
  • [32] Holistic Multi-Modal Memory Network for Movie Question Answering
    Wang, Anran
    Anh Tuan Luu
    Foo, Chuan-Sheng
    Zhu, Hongyuan
    Tay, Yi
    Chandrasekhar, Vijay
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 489 - 499
  • [33] Multi-view stereo network with point attention
    Zhao, Rong
    Gu, Zhuoer
    Han, Xie
    He, Ligang
    Sun, Fusheng
    Jiao, Shichao
    APPLIED INTELLIGENCE, 2023, 53 (22) : 26622 - 26636
  • [35] Multi-View Attention Network for Visual Dialog
    Park, Sungjin
    Whang, Taesun
    Yoon, Yeochan
    Lim, Heuiseok
APPLIED SCIENCES-BASEL, 2021, 11 (07)
  • [36] MVC-HGAT: multi-view contrastive hypergraph attention network for session-based recommendation
    Yang, Fan
    Peng, Dunlu
APPLIED INTELLIGENCE, 2025, 55 (01)
  • [37] Collaborative Multi-view Learning with Active Discriminative Prior for Recommendation
    Zhang, Qing
    Wang, Houfeng
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PART I, 2015, 9077 : 355 - 368
  • [38] Multi-view factorization machines for mobile app recommendation based on hierarchical attention
    Liang, Tingting
    Zheng, Lei
    Chen, Liang
    Wan, Yao
    Yu, Philip S.
    Wu, Jian
    KNOWLEDGE-BASED SYSTEMS, 2020, 187
  • [39] Autoencoder-Based Collaborative Attention GAN for Multi-Modal Image Synthesis
    Cao, Bing
    Cao, Haifang
    Liu, Jiaxu
    Zhu, Pengfei
    Zhang, Changqing
    Hu, Qinghua
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 995 - 1010
  • [40] Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform
    Li, Yanhan
    Zhao, Hongyun
    Gan, Tian
    Liu, Yang
    Zou, Lian
    Xu, Ting
    Chen, Xuan
    Fan, Cien
    Wu, Meng
    FRONTIERS IN PUBLIC HEALTH, 2022, 10