ATTENTION-BASED CURIOSITY-DRIVEN EXPLORATION IN DEEP REINFORCEMENT LEARNING

Cited by: 0
Authors
Reizinger, Patrik [1]
Szemenyei, Marton [1]
Affiliations
[1] Budapest Univ Technol & Econ, Dept Control Engn & Informat Technol, Budapest, Hungary
Keywords
Reinforcement Learning; curiosity; exploration; attention
DOI
10.1109/icassp40776.2020.9054546
Chinese Library Classification
O42 [Acoustics]
Subject Classification Code
070206; 082403
Abstract
Reinforcement Learning makes it possible to train an agent through interaction with its environment. In most real-world scenarios, however, the extrinsic feedback is sparse or insufficient, so intrinsic reward formulations are needed to train the agent successfully. This work investigates and extends the paradigm of curiosity-driven exploration. Our aim is to better incorporate state- and/or action-dependent information into existing intrinsic reward formulations. First, we take a probabilistic approach to exploit the advantages of the attention mechanism, which has been applied successfully in other domains of Deep Learning; combining the two, we propose new methods such as Attention-aided Advantage Actor-Critic, an extension of the Actor-Critic framework. Second, we extend another curiosity-based approach, the Intrinsic Curiosity Module (ICM). The proposed model uses attention to emphasize features for the dynamics models within the ICM; moreover, we modify the loss function, resulting in a new curiosity formulation, which we call rational curiosity (RCM).
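The abstract combines two ideas: attention over the features used by the ICM's dynamics models, and a forward-model prediction error serving as the intrinsic reward. Below is a minimal PyTorch sketch of that combination; the class name AttentionICM, the layer sizes, and the feature-wise softmax attention are illustrative assumptions, not the architecture from the paper.

# Hedged sketch of an attention-weighted, ICM-style curiosity signal.
# All names, sizes, and the feature-wise softmax attention are
# illustrative assumptions; the paper's exact design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionICM(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, feat_dim: int = 64):
        super().__init__()
        # Shared state encoder phi(s).
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        # Attention head producing per-feature weights.
        self.attn = nn.Linear(feat_dim, feat_dim)
        # Forward model: predicts phi(s') from phi(s) and the action.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + n_actions, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Inverse model: predicts the action from (phi(s), phi(s')).
        self.inverse_model = nn.Linear(2 * feat_dim, n_actions)

    def forward(self, obs, next_obs, action):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        a = F.one_hot(action, self.inverse_model.out_features).float()
        # Feature-wise attention weights (sum to 1 per sample).
        w = torch.softmax(self.attn(phi), dim=-1)
        phi_pred = self.forward_model(torch.cat([phi, a], dim=-1))
        # Intrinsic reward: attention-weighted forward prediction error.
        r_int = (w * (phi_pred - phi_next.detach()) ** 2).sum(dim=-1)
        # Inverse-model logits, trained with cross-entropy on `action`.
        logits = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        return r_int, logits

# Toy usage on a batch of random transitions.
icm = AttentionICM(obs_dim=8, n_actions=4)
obs, next_obs = torch.randn(32, 8), torch.randn(32, 8)
action = torch.randint(0, 4, (32,))
r_int, logits = icm(obs, next_obs, action)
icm_loss = r_int.mean() + F.cross_entropy(logits, action)

Training the inverse model jointly with the forward model, as in the original ICM, pushes the encoder toward features the agent can influence; the attention weights then let the curiosity reward concentrate on a subset of those features.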
Pages: 3542 - 3546
Number of pages: 5
Related papers
50 records in total
  • [1] Random curiosity-driven exploration in deep reinforcement learning
    Li, Jing
    Shi, Xinxin
    Li, Jiehao
    Zhang, Xin
    Wang, Junzheng
    [J]. NEUROCOMPUTING, 2020, 418 : 139 - 147
  • [2] Curiosity-driven Exploration in Reinforcement Learning
Gregor, Michal
    Spalek, Juraj
    [J]. 2014 ELEKTRO, 2014, : 435 - 440
  • [3] Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration
    Zheng, Lulu
    Chen, Jiarui
    Wang, Jianhao
    He, Jiamin
    Hu, Yujing
    Chen, Yingfeng
    Fan, Changjie
    Gao, Yang
    Zhang, Chongjie
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Curiosity-driven Exploration for Cooperative Multi-Agent Reinforcement Learning
    Xu, Fanchao
    Kaneko, Tomoyuki
[J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [5] Curiosity-driven recommendation strategy for adaptive learning via deep reinforcement learning
    Han, Ruijian
    Chen, Kani
    Tan, Chunxi
[J]. BRITISH JOURNAL OF MATHEMATICAL & STATISTICAL PSYCHOLOGY, 2020, 73 (03) : 522 - 540
  • [6] Curiosity-Driven Reinforcement Learning with Homeostatic Regulation
    de Abril, Ildefons Magrans
    Kanai, Ryota
[J]. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018
  • [7] CURIOSITY-DRIVEN REINFORCEMENT LEARNING FOR DIALOGUE MANAGEMENT
    Wesselmann, Paula
    Wu, Yen-Chen
    Gasic, Milica
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 7210 - 7214
  • [8] Accelerating Reinforcement Learning-Based CCSL Specification Synthesis Using Curiosity-Driven Exploration
    Hu, Ming
    Zhang, Min
    Mallet, Frederic
    Fu, Xin
    Chen, Mingsong
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (05) : 1431 - 1446
  • [9] Humans monitor learning progress in curiosity-driven exploration
    Ten, Alexandr
    Kaushik, Pramod
    Oudeyer, Pierre-Yves
    Gottlieb, Jacqueline
    [J]. NATURE COMMUNICATIONS, 2021, 12 (01)