Dataset and Models for Item Recommendation Using Multi-Modal User Interactions

Cited: 0
Authors
Bruun, Simone Borg [1 ]
Balog, Krisztian [2 ]
Maistro, Maria [1 ]
Affiliations
[1] Univ Copenhagen, Copenhagen, Denmark
[2] Univ Stavanger, Stavanger, Norway
Keywords
Recommender System; Multi-modal User Interactions; Missing Modalities
DOI
10.1145/3626772.3657881
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
While recommender systems with multi-modal item representations (image, audio, and text) have been widely explored, learning recommendations from multi-modal user interactions (e.g., clicks and speech) remains an open problem. We study the case of multi-modal user interactions in a setting where users engage with a service provider through multiple channels (website and call center). In such cases, incomplete modalities naturally occur, since not all users interact through all the available channels. To address these challenges, we publish a real-world dataset that enables progress in this under-researched area. We further present and benchmark various methods for leveraging multi-modal user interactions for item recommendation, and propose a novel approach that specifically deals with missing modalities by mapping user interactions to a common feature space. Our analysis reveals important interactions between the different modalities and shows that a frequently occurring modality can enhance learning from a less frequent one.
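The common-feature-space idea from the abstract can be sketched roughly as follows. All names, dimensions, and the averaging rule here are illustrative assumptions, not the paper's actual model: the paper learns the modality mappings, whereas this sketch uses fixed random linear projections just to show how users with different (and incomplete) sets of modalities can still receive embeddings of the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: click features (website channel),
# speech features (call-center channel), and the shared space.
D_CLICK, D_SPEECH, D_SHARED = 8, 12, 4

# One linear projection per modality into the common feature space.
# (In the paper these mappings are learned; random weights are a stand-in.)
W_click = rng.normal(size=(D_CLICK, D_SHARED))
W_speech = rng.normal(size=(D_SPEECH, D_SHARED))

def user_embedding(click_feats=None, speech_feats=None):
    """Project whichever modalities are present into the common space
    and average them, so users with missing modalities still get a
    representation of the same dimensionality."""
    parts = []
    if click_feats is not None:
        parts.append(click_feats @ W_click)
    if speech_feats is not None:
        parts.append(speech_feats @ W_speech)
    if not parts:
        raise ValueError("at least one modality is required")
    return np.mean(parts, axis=0)

# A web-only user, a call-center-only user, and a user seen on both
# channels all land in the same 4-dimensional space.
web_user = user_embedding(click_feats=rng.normal(size=D_CLICK))
call_user = user_embedding(speech_feats=rng.normal(size=D_SPEECH))
both_user = user_embedding(rng.normal(size=D_CLICK), rng.normal(size=D_SPEECH))
assert web_user.shape == call_user.shape == both_user.shape == (D_SHARED,)
```

Because every user ends up in the same space regardless of which channels they used, a single downstream recommender can score items for all users, which is the property the missing-modality setting requires.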
Pages: 709-718
Page count: 10
Related Papers
50 entries total
  • [21] MMChat: Multi-Modal Chat Dataset on Social Media
    Zheng, Yinhe
    Chen, Guanyi
    Liu, Xin
    Sun, Jian
    [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 5778 - 5786
  • [22] A multi-modal dataset for gait recognition under occlusion
    Li, Na
    Zhao, Xinbo
    [J]. Applied Intelligence, 2023, 53 : 1517 - 1534
  • [23] A longitudinal multi-modal dataset for dementia monitoring and diagnosis
    Gkoumas, Dimitris
    Wang, Bo
    Tsakalidis, Adam
    Wolters, Maria
    Purver, Matthew
    Zubiaga, Arkaitz
    Liakata, Maria
    [J]. LANGUAGE RESOURCES AND EVALUATION, 2024, 58 (03) : 883 - 902
  • [24] A New Multi-modal Dataset for Human Affect Analysis
    Wei, Haolin
    Monaghan, David S.
    O'Connor, Noel E.
    Scanlon, Patricia
    [J]. HUMAN BEHAVIOR UNDERSTANDING (HBU 2014), 2014, 8749 : 42 - 51
  • [25] A comprehensive video dataset for multi-modal recognition systems
    Handa, A.
    Agarwal, R.
    Kohli, N.
    [J]. Data Science Journal, 2019, 18 (01):
  • [27] Multi-modal Representation Learning for Successive POI Recommendation
    Li, Lishan
    Liu, Ying
    Wu, Jianping
    He, Lin
    Ren, Gang
    [J]. ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 441 - 456
  • [28] Task-Adversarial Adaptation for Multi-modal Recommendation
    Su, Hongzu
    Li, Jingjing
    Li, Fengling
    Zhu, Lei
    Lu, Ke
    Yang, Yang
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 6530 - 6538
  • [29] Towards Developing a Multi-Modal Video Recommendation System
    Pingali, Sriram
    Mondal, Prabir
    Chakder, Daipayan
    Saha, Sriparna
    Ghosh, Angshuman
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [30] Joint Representation Learning for Multi-Modal Transportation Recommendation
    Liu, Hao
    Li, Ting
    Hu, Renjun
    Fu, Yanjie
    Gu, Jingjing
    Xiong, Hui
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1036 - 1043