Curiosity-driven recommendation strategy for adaptive learning via deep reinforcement learning

Cited by: 9
Authors
Han, Ruijian [1 ]
Chen, Kani [1 ]
Tan, Chunxi [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Math, Kowloon, Clear Water Bay, Hong Kong, Peoples R China
Keywords
adaptive learning; curiosity-driven exploration; Markov decision problem; recommendation system; reinforcement learning; algorithms
DOI
10.1111/bmsp.12199
Chinese Library Classification (CLC)
O1 [Mathematics]
Discipline classification code
0701; 070101
Abstract
The design of recommendation strategies in adaptive learning systems focuses on using currently available information to provide learners with individualized learning instructions. As a critical motivator of human behaviour, curiosity is essentially the drive to explore knowledge and seek information. Taking this psychologically inspired view, we propose a curiosity-driven recommendation policy within the reinforcement learning framework, allowing for an efficient and enjoyable personalized learning path. Specifically, a curiosity reward generated by a well-designed predictive model captures the learner's familiarity with the knowledge space. Given such curiosity rewards, we apply the actor-critic method to approximate the policy directly through neural networks. Numerical analyses with a large continuous knowledge state space and concrete learning scenarios further demonstrate the efficiency of the proposed method.
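The curiosity mechanism the abstract describes — an intrinsic reward derived from a predictive model's error, which shrinks as a transition in the knowledge space becomes familiar — can be sketched as follows. The linear forward model, the state/action dimensions, and the learning rate here are illustrative assumptions for exposition, not the architecture used in the paper (which approximates the policy with neural networks via actor-critic).

```python
import numpy as np

rng = np.random.default_rng(0)

def curiosity_reward(phi, state, action, next_state):
    """Intrinsic reward: squared prediction error of a forward model
    that predicts the learner's next knowledge state."""
    x = np.concatenate([state, action])
    return float(np.sum((phi @ x - next_state) ** 2))

def update_forward_model(phi, state, action, next_state, lr=0.05):
    """One gradient step on the forward model; as its prediction improves,
    the curiosity reward for familiar transitions shrinks."""
    x = np.concatenate([state, action])
    err = phi @ x - next_state
    return phi - lr * np.outer(err, x)

# Hypothetical dimensions: 3-D continuous knowledge state, 2-D action features.
state_dim, action_dim = 3, 2
phi = 0.1 * rng.normal(size=(state_dim, state_dim + action_dim))

s = rng.normal(size=state_dim)
a = rng.normal(size=action_dim)
s_next = s + 0.05 * rng.normal(size=state_dim)

r_before = curiosity_reward(phi, s, a, s_next)
for _ in range(10):  # repeated exposure to the same transition
    phi = update_forward_model(phi, s, a, s_next)
r_after = curiosity_reward(phi, s, a, s_next)
# r_after < r_before: the transition has become familiar, so a policy
# trained on this intrinsic signal is pushed toward less-explored regions.
```

Fed into an actor-critic learner in place of (or alongside) an extrinsic reward, this signal favours recommendations that take the learner into less familiar parts of the knowledge space.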
Pages: 522-540
Page count: 19
Related papers
50 records in total
  • [1] Random curiosity-driven exploration in deep reinforcement learning
    Li, Jing
    Shi, Xinxin
    Li, Jiehao
    Zhang, Xin
    Wang, Junzheng
    [J]. NEUROCOMPUTING, 2020, 418 : 139 - 147
  • [2] Curiosity-driven Exploration in Reinforcement Learning
Gregor, Michal
    Spalek, Juraj
    [J]. 2014 ELEKTRO, 2014, : 435 - 440
  • [3] Curiosity-Driven Reinforcement Learning with Homeostatic Regulation
    de Abril, Ildefons Magrans
    Kanai, Ryota
    [J]. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [4] CURIOSITY-DRIVEN REINFORCEMENT LEARNING FOR DIALOGUE MANAGEMENT
    Wesselmann, Paula
    Wu, Yen-Chen
    Gasic, Milica
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 7210 - 7214
  • [5] ATTENTION-BASED CURIOSITY-DRIVEN EXPLORATION IN DEEP REINFORCEMENT LEARNING
    Reizinger, Patrik
    Szemenyei, Marton
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3542 - 3546
  • [6] Seeking Visual Discomfort: Curiosity-driven Representations for Reinforcement Learning
    Aljalbout, Elie
    Ulmer, Maximilian
    Triebel, Rudolph
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 3591 - 3597
  • [7] Curiosity-driven phonetic learning
    Moulin-Frier, Clement
    Oudeyer, Pierre-Yves
    [J]. 2012 IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (ICDL), 2012,
  • [8] Automatic Web Testing Using Curiosity-Driven Reinforcement Learning
    Zheng, Yan
    Liu, Yi
    Xie, Xiaofei
    Liu, Yepang
    Ma, Lei
    Hao, Jianye
    Liu, Yang
    [J]. 2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2021), 2021, : 423 - 435
  • [9] Aggressive Quadrotor Flight Using Curiosity-Driven Reinforcement Learning
    Sun, Qiyu
    Fang, Jinbao
    Zheng, Wei Xing
    Tang, Yang
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (12) : 13838 - 13848
  • [10] Curiosity-Driven Class-Incremental Learning via Adaptive Sample Selection
    Hu, Qinghua
    Gao, Yucong
    Cao, Bing
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (12) : 8660 - 8673