Dataset and Models for Item Recommendation Using Multi-Modal User Interactions

Cited: 0
Authors
Bruun, Simone Borg [1 ]
Balog, Krisztian [2 ]
Maistro, Maria [1 ]
Affiliations
[1] Univ Copenhagen, Copenhagen, Denmark
[2] Univ Stavanger, Stavanger, Norway
Keywords
Recommender System; Multi-modal User Interactions; Missing Modalities
DOI
10.1145/3626772.3657881
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
While recommender systems with multi-modal item representations (image, audio, and text) have been widely explored, learning recommendations from multi-modal user interactions (e.g., clicks and speech) remains an open problem. We study multi-modal user interactions in a setting where users engage with a service provider through multiple channels (website and call center). In such settings, incomplete modalities occur naturally, since not all users interact through all available channels. To address these challenges, we publish a real-world dataset that enables progress in this under-researched area. We further present and benchmark various methods for leveraging multi-modal user interactions for item recommendation, and propose a novel approach that specifically handles missing modalities by mapping user interactions to a common feature space. Our analysis reveals important interactions between the different modalities and shows that a frequently occurring modality can enhance learning from a less frequent one.
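The abstract's core idea, mapping interactions from different channels into a common feature space so that users with missing modalities still receive a representation, could be sketched as below. This is a minimal illustration, not the paper's actual method: the dimensionalities, the linear encoders, and the averaging fusion are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensionalities: clickstream features vs. call-transcript features.
D_CLICK, D_SPEECH, D_COMMON = 8, 12, 4

# Per-modality linear encoders into a shared feature space
# (randomly initialised here; in practice these would be learned).
W_click = rng.normal(size=(D_CLICK, D_COMMON))
W_speech = rng.normal(size=(D_SPEECH, D_COMMON))

def encode_user(click=None, speech=None):
    """Project whichever modalities a user has into the common space and
    average them, so a user missing a modality still gets a vector."""
    parts = []
    if click is not None:
        parts.append(click @ W_click)
    if speech is not None:
        parts.append(speech @ W_speech)
    if not parts:
        raise ValueError("user has no observed modality")
    return np.mean(parts, axis=0)

# A website-only user (speech modality missing) and a user seen on both channels.
u_web = encode_user(click=rng.normal(size=D_CLICK))
u_both = encode_user(click=rng.normal(size=D_CLICK),
                     speech=rng.normal(size=D_SPEECH))
print(u_web.shape, u_both.shape)  # both vectors live in the same D_COMMON space
```

Because both users end up in the same space, a single downstream recommender can score items for either of them, which is the point of the common-feature-space construction.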
Pages: 709 - 718
Page count: 10
Related Papers
50 in total (items [41]-[50] shown)
  • [41] Large Scale Multi-Lingual Multi-Modal Summarization Dataset
    Verma, Yash
    Jangra, Anubhav
    Kumar, Raghvendra
    Saha, Sriparna
    [J]. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 3620 - 3632
  • [42] A Multi-modal Multi-task based Approach for Movie Recommendation
    Raj, Subham
    Mondal, Prabir
    Chakder, Daipayan
    Saha, Sriparna
    Onoe, Naoyuki
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [43] Recommendation Based on Multimodal Information of User-Item Interactions
    Cai, Guoyong
    Chen, Nannan
    [J]. 2019 9TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (ICIST2019), 2019, : 288 - 293
  • [44] Exploiting Multi-Modal Interactions: A Unified Framework
    Li, Ming
    Xue, Xiao-Bing
    Zhou, Zhi-Hua
    [J]. 21ST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI-09), PROCEEDINGS, 2009, : 1120 - 1125
  • [45] FARMI: A FrAmework for Recording Multi-Modal Interactions
    Jonell, Patrik
    Bystedt, Mattias
    Fallgren, Per
    Kontogiorgos, Dimosthenis
    Lopes, Jose
    Malisz, Zofia
    Mascarenhas, Samuel
    Oertel, Catharine
    Raveh, Eran
    Shore, Todd
    [J]. PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 3969 - 3974
  • [46] Multi-modal Interaction System for Enhanced User Experience
    Jeong, Yong Mu
    Min, Soo Young
    Lee, Seung Eun
    [J]. COMPUTER APPLICATIONS FOR WEB, HUMAN COMPUTER INTERACTION, SIGNAL AND IMAGE PROCESSING AND PATTERN RECOGNITION, 2012, 342 : 287 - +
  • [47] Granular estimation of user cognitive workload using multi-modal physiological sensors
    Wang, Jingkun
    Stevens, Christopher
    Bennett, Winston
    Yu, Denny
    [J]. FRONTIERS IN NEUROERGONOMICS, 2024, 5
  • [48] Powervis: Empowering the user with a multi-modal visualization system
    Minghim, R
    deOliveira, MCF
    [J]. II WORKSHOP ON CYBERNETIC VISION, PROCEEDINGS, 1997, : 106 - 111
  • [49] Multi-Modal Interactions of Mixed Reality Framework
    Omary, Danah
    Mehta, Gayatri
    [J]. 17TH IEEE DALLAS CIRCUITS AND SYSTEMS CONFERENCE, DCAS 2024, 2024,
  • [50] Gesture Recognition on a New Multi-Modal Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021, : 122 - 131