Graph Knowledge Structure for Attentional Knowledge Tracing With Self-Supervised Learning

Cited by: 0
|
Authors
Liu, Zhaohui [1 ]
Liu, Sainan [1 ]
Gu, Weifeng [1 ]
Affiliations
[1] Univ South China, Sch Comp Sci, Sch Software, Hengyang 421001, Hunan, Peoples R China
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Knowledge engineering; Semantics; Data models; Predictive models; Attention mechanisms; Self-supervised learning; Analytical models; Deep learning; Computer science; Computational modeling; Intelligent systems; self-supervised learning; attention mechanisms; graph convolutional networks; knowledge tracing;
DOI
10.1109/ACCESS.2024.3521883
CLC Number
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
As intelligent education advances and online learning becomes more prevalent, Knowledge Tracing (KT) has become increasingly important. KT assesses students' learning progress by analysing their historical performance on related exercises. Despite significant advances in the field, two shortcomings remain: first, a lack of effective integration between exercises and knowledge points; second, an overemphasis on node-level information that neglects deep semantic relationships. To address these issues, we propose a self-supervised learning approach that uses an enhanced heterogeneous graph attention network to represent and analyse complex relationships between exercises and knowledge points. We introduce an innovative surrogate view generation method to optimise the integration of local structural information and global semantics within the graph, addressing relational inductive bias. In addition, we incorporate the improved representation algorithm into the loss function to handle data sparsity, thereby improving prediction accuracy. Experiments on three real-world datasets show that our model outperforms baseline models.
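The abstract describes a graph-attention encoder over an exercise/knowledge-point graph trained with a contrastive self-supervised objective on augmented ("surrogate") views. The following is a minimal NumPy sketch of that general pattern only, not the authors' model: the toy graph, the `gat_layer` and `info_nce` functions, and the edge-dropout view generation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """One graph-attention aggregation: each node attends over its
    neighbours (and itself) with weights from a shared attention vector."""
    Z = H @ W                                # projected features (n, d)
    n = Z.shape[0]
    logits = np.full((n, n), -1e9)           # mask non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                logits[i, j] = np.concatenate([Z[i], Z[j]]) @ a
    alpha = softmax(logits, axis=1)          # attention coefficients
    return np.tanh(alpha @ Z)

def info_nce(Z1, Z2, tau=0.5):
    """Contrastive loss: node i in view 1 should match node i in view 2."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = (Z1 @2 * Z2.T if False else Z1 @ Z2.T) / tau  # cosine similarities
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_p)))

# Toy bipartite graph: 4 exercises (nodes 0-3) linked to 3 knowledge points (4-6).
n, d = 7, 8
A = np.eye(n, dtype=bool)                    # self-loops
for ex, kp in [(0, 4), (1, 4), (1, 5), (2, 5), (3, 6)]:
    A[ex, kp] = A[kp, ex] = True

H = rng.normal(size=(n, d))                  # initial node features
W = rng.normal(size=(d, d))
a = rng.normal(size=2 * d)

def drop_edges(A, p=0.2):
    """Surrogate-view generation by random edge dropout (self-loops kept)."""
    mask = rng.random(A.shape) > p
    return (A & mask) | np.eye(len(A), dtype=bool)

Z1 = gat_layer(H, drop_edges(A), W, a)       # view 1 embeddings
Z2 = gat_layer(H, drop_edges(A), W, a)       # view 2 embeddings
loss = info_nce(Z1, Z2)                      # self-supervised objective
```

In a real KT model the encoder would be trained jointly with a prediction head over student response sequences; here only the forward pass of the contrastive pretext task is shown.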
Pages: 10933-10943
Page count: 11
Related Papers
50 records
  • [31] Self-supervised Graph Learning with Segmented Graph Channels
    Gao, Hang
    Li, Jiangmeng
    Zheng, Changwen
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT II, 2023, 13714 : 293 - 308
  • [32] Self-Supervised Contrastive Learning for Camera-to-Radar Knowledge Distillation
    Wang, Wenpeng
    Campbell, Bradford
    Munir, Sirajum
    2024 20TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING IN SMART SYSTEMS AND THE INTERNET OF THINGS, DCOSS-IOT 2024, 2024, : 154 - 161
  • [33] Self-Supervised Hypergraph Learning for Knowledge-Aware Social Recommendation
    Li, Munan
    Li, Jialong
    Yang, Liping
    Ding, Qi
    ELECTRONICS, 2024, 13 (07)
  • [34] Sentiment Knowledge Enhanced Self-supervised Learning for Multimodal Sentiment Analysis
    Qian, Fan
    Han, Jiqing
    He, Yongjun
    Zheng, Tieran
    Zheng, Guibin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 12966 - 12978
  • [35] Image quality assessment based on self-supervised learning and knowledge distillation
    Sang, Qingbing
    Shu, Ziru
    Liu, Lixiong
    Hu, Cong
    Wu, Qin
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 90
  • [36] EvolveNet: Adaptive Self-Supervised Continual Learning without Prior Knowledge
    Liu, Zhuang
    Song, Xiangrui
    Zhao, Sihuan
    Shi, Ya
    Yang, Dengfeng
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (08): : 3256 - 3266
  • [37] Graph Self-supervised Learning with Accurate Discrepancy Learning
    Kim, Dongki
    Baek, Jinheon
    Hwang, Sung Ju
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [38] Self-Supervised Graph Structure Learning for Cyber-Physical Systems
    Augustin, Jan Lukas
    Niggemann, Oliver
    IFAC PAPERSONLINE, 2024, 58 (04): : 204 - 209
  • [39] Self-Supervised Bidirectional Learning for Graph Matching
    Guo, Wenqi
    Zhang, Lin
    Tu, Shikui
    Xu, Lei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7784 - 7792
  • [40] Adaptive Self-Supervised Graph Representation Learning
    Gong, Yunchi
    36TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN 2022), 2022, : 254 - 259