Meta Auxiliary Learning for Top-K Recommendation

Cited by: 1
Authors
Li X. [1 ]
Ma C. [2 ]
Li G. [1 ]
Xu P. [1 ]
Liu C.H. [1 ]
Yuan Y. [1 ]
Wang G. [1 ]
Affiliations
[1] Beijing Institute of Technology, School of Computer Science and Technology, Beijing
[2] City University of Hong Kong, Department of Computer Science
Funding
National Natural Science Foundation of China;
Keywords
auxiliary learning; implicit gradient; meta learning; recommender systems;
DOI
10.1109/TKDE.2022.3223155
Abstract
Recommender systems play a significant role in modern society by alleviating the information/choice overload problem, since Internet users may find it hard to identify their favorite items or products among millions of candidates. Thanks to recent successes in computer vision, auxiliary learning has become a powerful means of improving the performance of a target (primary) task. Although helpful, the auxiliary learning scheme remains underexplored in recommendation models. To integrate the auxiliary learning scheme, we propose a novel meta auxiliary learning framework that facilitates the training of recommendation models, i.e., of user and item latent representations. Specifically, we construct two self-supervised learning tasks, regarding users and items respectively, as auxiliary tasks to enhance the effectiveness of the user and item representations. The auxiliary and primary tasks are then modeled in a meta learning paradigm that adaptively controls the contribution of the auxiliary tasks to improving the primary recommendation task. This is achieved by an implicit gradient method that guarantees lower time complexity than conventional meta learning methods. Through comparisons with a number of state-of-the-art methods on four real-world datasets, we show that the proposed model outperforms the best existing models on Top-K recommendation by 3% to 23%.
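To make the bilevel idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch: a matrix-factorization recommender trained on a primary pairwise ranking loss plus a self-supervised auxiliary loss, where the auxiliary weight is a meta-parameter updated so that it helps the primary task. For brevity the sketch uses a one-step unrolled hypergradient rather than the implicit-gradient solver the paper describes; the BPR-style primary loss, the dropout-based auxiliary task, and all names (U, V, aux_w) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, dim, lr = 100, 200, 16, 0.05

# User/item latent factors (primary-task parameters) and the auxiliary-task weight (meta-parameter).
U = torch.nn.Parameter(0.1 * torch.randn(n_users, dim))
V = torch.nn.Parameter(0.1 * torch.randn(n_items, dim))
aux_w = torch.nn.Parameter(torch.tensor(0.5))

opt = torch.optim.SGD([U, V], lr=lr)
meta_opt = torch.optim.SGD([aux_w], lr=0.01)

def primary_loss(U, V, u, pos, neg):
    # BPR-style pairwise ranking loss for implicit-feedback Top-K recommendation.
    s_pos = (U[u] * V[pos]).sum(-1)
    s_neg = (U[u] * V[neg]).sum(-1)
    return -F.logsigmoid(s_pos - s_neg).mean()

def auxiliary_loss(U, u):
    # Toy self-supervised auxiliary task: two dropout "views" of a user embedding should agree.
    v1, v2 = F.dropout(U[u], 0.2), F.dropout(U[u], 0.2)
    return (1.0 - F.cosine_similarity(v1, v2)).mean()

for step in range(300):
    u = torch.randint(0, n_users, (64,))
    pos = torch.randint(0, n_items, (64,))
    neg = torch.randint(0, n_items, (64,))

    # Outer (meta) step: differentiate the primary loss at "virtual" one-step-updated
    # parameters with respect to aux_w, so aux_w moves in the direction that helps
    # the primary task (cheap stand-in for the implicit-gradient hypergradient).
    inner = primary_loss(U, V, u, pos, neg) + aux_w * auxiliary_loss(U, u)
    gU, gV = torch.autograd.grad(inner, [U, V], create_graph=True)
    U_virt, V_virt = U - lr * gU, V - lr * gV  # depend on aux_w through gU, gV
    meta_obj = primary_loss(U_virt, V_virt, u, pos, neg)
    meta_opt.zero_grad()
    meta_obj.backward()
    meta_opt.step()
    aux_w.data.clamp_(min=0.0)  # keep the auxiliary contribution non-negative

    # Inner step: ordinary update of U, V on the weighted multi-task loss.
    loss = primary_loss(U, V, u, pos, neg) + aux_w.detach() * auxiliary_loss(U, u)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point mirrored here is that aux_w is not a fixed hyperparameter but is learned against the primary objective; the paper's implicit-gradient method serves the same purpose while avoiding the cost of unrolling the inner optimization.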
Pages: 10857-10870
Page count: 13
Related papers
50 records in total
  • [1] Transfer learning from rating prediction to Top-k recommendation
    Ye, Fan
    Lu, Xiaobo
    Li, Hongwei
    Chen, Zhenyu
    PLOS ONE, 2024, 19 (03):
  • [2] Location-aware online learning for top-k recommendation
    Palovics, Robert
    Szalai, Peter
    Pap, Julia
    Frigo, Erzsebet
    Kocsis, Levente
    Benczur, Andras A.
    PERVASIVE AND MOBILE COMPUTING, 2017, 38 : 490 - 504
  • [3] Probabilistic Metric Learning with Adaptive Margin for Top-K Recommendation
    Ma, Chen
    Ma, Liheng
    Zhang, Yingxue
    Tang, Ruiming
    Liu, Xue
    Coates, Mark
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1036 - 1044
  • [4] On Sampling Top-K Recommendation Evaluation
    Li, Dong
    Jin, Ruoming
    Gao, Jing
    Liu, Zhi
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 2114 - 2124
  • [5] Top-k Link Recommendation in Social Networks
    Song, Dongjin
    Meyer, David A.
    Tao, Dacheng
    2015 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2015, : 389 - 398
  • [6] MOOCRec: An Attention Meta-path Based Model for Top-K Recommendation in MOOC
    Sheng, Deming
    Yuan, Jingling
    Xie, Qing
    Luo, Pei
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT (KSEM 2020), PT I, 2020, 12274 : 280 - 288
  • [7] DeepAltTrip: Top-K Alternative Itineraries for Trip Recommendation
    Rashid, Syed Md. Mukit
    Ali, Mohammed Eunus
    Cheema, Muhammad Aamir
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 9433 - 9447
  • [8] Group Formation Based on Crowdsourced Top-k Recommendation
    Gao, Yunpeng
    Cai, Wei
    Liang, Kuiyang
    WEB AND BIG DATA, 2017, 10612 : 204 - 213
  • [9] Neural Variational Collaborative Filtering for Top-K Recommendation
    Deng, Xiaoyi
    Zhuang, Fuzhen
    TRENDS AND APPLICATIONS IN KNOWLEDGE DISCOVERY AND DATA MINING: PAKDD 2019 WORKSHOPS, 2019, 11607 : 352 - 364
  • [10] TRecSo: Enhancing Top-k Recommendation With Social Information
    Park, Chanyoung
    Kim, Donghyun
    Oh, Jinoh
    Yu, Hwanjo
    PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB (WWW'16 COMPANION), 2016, : 89 - 90