Self-Supervised pre-training model based on Multi-view for MOOC Recommendation

Cited: 0
Authors
Tian R. [1 ]
Cai J. [2 ]
Li C. [1 ,3 ]
Wang J. [1 ]
Affiliations
[1] School of Information and Communication Engineering, Communication University of China
[2] State Key Laboratory of Media Audio & Video (Communication University of China), Ministry of Education
[3] State Key Laboratory of Media Convergence and Communication, Communication University of China
Keywords
Contrastive learning; MOOC recommendation; Multi-view correlation; Prerequisite dependency
DOI
10.1016/j.eswa.2024.124143
Abstract
Recommendation strategies based on knowledge concepts are increasingly applied to personalized course recommendation to help models learn from implicit feedback data. However, existing approaches typically overlook the prerequisite dependencies between concepts, which form a significant basis for connecting courses, and they fail to effectively model the relationship between course items and their attributes. This leads to inadequate capture of associations in the data and ineffective integration of implicit semantics into sequence representations. In this paper, we propose a Self-Supervised pre-training model based on Multi-view for MOOC Recommendation (SSM4MR) that exploits non-explicit but inherently correlated features to guide the representation learning of users' course preferences. In particular, to keep the model from relying solely on the course prediction loss and overemphasizing final performance, we treat knowledge concepts, course items, and learning paths as different views, and sufficiently model the intrinsic relevance among these views by formulating multiple view-specific self-supervised objectives. In this way, our model enhances the sequence representation and ultimately achieves high-performance course recommendation. Extensive experiments and analyses provide persuasive support for the superiority of the model design and the recommendation results. © 2024
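The pairwise self-supervised objectives over the three views described in the abstract could, for instance, take the form of contrastive (InfoNCE-style) losses between view embeddings. The sketch below is an illustrative assumption, not the paper's exact formulation; the function names, the choice of InfoNCE, and the uniform summing of the three pairwise terms are all hypothetical:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(view_a, view_b, temperature=0.1):
    """InfoNCE contrastive loss between two batches of view embeddings.

    Matching rows of view_a and view_b are positive pairs; all other
    rows in the batch serve as in-batch negatives.
    """
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature        # (B, B) cosine-similarity matrix
    targets = torch.arange(a.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def multi_view_ssl_loss(concept_emb, item_emb, path_emb):
    """Hypothetical combined objective over three views of one user
    sequence: knowledge concepts, course items, and learning paths."""
    return (info_nce_loss(concept_emb, item_emb)
            + info_nce_loss(concept_emb, path_emb)
            + info_nce_loss(item_emb, path_emb))
```

In this reading, each pairwise term pulls together the representations of the same user sequence under two different views while pushing apart representations of different sequences, which is one common way to "model the intrinsic relevance among views" without relying only on the course prediction loss.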
Related Papers
50 records
  • [21] Representation Recovering for Self-Supervised Pre-training on Medical Images
    Yan, Xiangyi
    Naushad, Junayed
    Sun, Shanlin
    Han, Kun
    Tang, Hao
    Kong, Deying
    Ma, Haoyu
    You, Chenyu
    Xie, Xiaohui
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2684 - 2694
  • [22] Reducing Domain mismatch in Self-supervised speech pre-training
    Baskar, Murali Karthick
    Rosenberg, Andrew
    Ramabhadran, Bhuvana
    Zhang, Yu
    INTERSPEECH 2022, 2022, : 3028 - 3032
  • [23] Dense Contrastive Learning for Self-Supervised Visual Pre-Training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    Li, Lei
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3023 - 3032
  • [24] Self-supervised VICReg pre-training for Brugada ECG detection
    Ronan, Robert
    Tarabanis, Constantine
    Chinitz, Larry
    Jankelson, Lior
    Scientific Reports, 15 (1)
  • [25] A Self-Supervised Pre-Training Method for Chinese Spelling Correction
    Su J.
    Yu S.
    Hong X.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2023, 51 (09): : 90 - 98
  • [26] Self-supervised pre-training on industrial time-series
    Biggio, Luca
    Kastanis, Iason
    2021 8TH SWISS CONFERENCE ON DATA SCIENCE, SDS, 2021, : 56 - 57
  • [27] Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene
    Shrestha, Sulabh
    Li, Yimeng
    Kosecka, Jana
    2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024, 2024, : 625 - 635
  • [28] Multi-view Self-supervised Heterogeneous Graph Embedding
    Zhao, Jianan
    Wen, Qianlong
    Sun, Shiyu
    Ye, Yanfang
    Zhang, Chuxu
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT II, 2021, 12976 : 319 - 334
  • [29] Self-Supervised Deep Multi-View Subspace Clustering
    Sun, Xiukun
    Cheng, Miaomiao
    Min, Chen
    Jing, Liping
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 1001 - 1016
  • [30] Digging into Uncertainty in Self-supervised Multi-view Stereo
    Xu, Hongbin
    Zhou, Zhipeng
    Wang, Yali
    Kang, Wenxiong
    Sun, Baigui
    Li, Hao
    Qiao, Yu
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6058 - 6067