Cognitively inspired reinforcement learning architecture and its application to giant-swing motion control

Cited by: 4
Authors
Uragami, Daisuke [1 ]
Takahashi, Tatsuji [2 ]
Matsuo, Yoshiki [1 ]
Affiliations
[1] Tokyo Univ Technol, Sch Comp Sci, Hachioji, Tokyo 1920982, Japan
[2] Tokyo Denki Univ, Sch Sci & Technol, Hiki, Saitama 3500394, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
Q-learning; Exploration-exploitation dilemma; Bio-inspired computing; Cognitive bias; Loosely symmetric model; Acrobot; Multi-armed bandit problems; ACQUISITION; MODEL; BEHAVIOR; MAP;
DOI
10.1016/j.biosystems.2013.11.002
CLC number
Q [Biological Sciences];
Subject classification
07; 0710; 09;
Abstract
Many algorithms and methods in artificial intelligence and machine learning have been inspired by human cognition. As a mechanism for handling the exploration-exploitation dilemma in reinforcement learning, the loosely symmetric (LS) value function, which models human causal intuition, was proposed (Shinohara et al., 2007). LS not only shows the highest correlation with human causal induction, but has also been reported to work effectively in multi-armed bandit problems, the simplest class of tasks embodying the dilemma. However, the application of LS had been limited to reinforcement learning problems with K actions and only one state (K-armed bandit problems). This study proposes the LS-Q learning architecture, which can handle general reinforcement learning tasks with multiple states and delayed reward. We tested the learning performance of the new architecture on giant-swing robot motion learning, a task with large uncertainty and unknownness of the environment. In the test, no ready-made internal models or function approximation of the state space were provided. The simulations showed that while the ordinary Q-learning agent fails to reach the giant-swing motion because of stagnant loops (local optima with low rewards), LS-Q escapes such loops and acquires the giant swing. We confirmed that the smaller the number of states, in other words, the more coarse-grained the division of states and the more incomplete the state observation, the better LS-Q performs in comparison with Q-learning. We also showed that the high performance of LS-Q depends comparatively little on parameter tuning and learning time. This suggests that the proposed method, inspired by human cognition, works adaptively in real environments. (C) 2013 Elsevier Ireland Ltd. All rights reserved.
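The abstract does not reproduce the LS formula. For readers unfamiliar with the model, below is a minimal Python sketch of the frequency form of the loosely symmetric model commonly attributed to Shinohara et al. (2007), applied greedily to a two-armed bandit. The exact formula convention, the count bookkeeping, and the names `ls_value` and `ls_bandit` are illustrative assumptions, not the paper's code; the paper's LS-Q architecture additionally couples LS evaluation with Q-learning over multiple states and delayed reward, which this sketch does not attempt.

```python
import random

def ls_value(a, b, c, d):
    """Loosely symmetric (LS) evaluation of one action against its
    alternative, from co-occurrence counts (assumed frequency form):

        LS = (a + b*d/(b+d)) / (a + b*d/(b+d) + c + d*a/(a+c))

    a: this action taken and rewarded
    b: this action taken and not rewarded
    c: alternative action taken and rewarded
    d: alternative action taken and not rewarded
    """
    # Assumed convention: a weight with an empty marginal is 0.
    w_bd = d / (b + d) if (b + d) > 0 else 0.0
    w_ac = a / (a + c) if (a + c) > 0 else 0.0
    num = a + w_bd * b
    den = num + c + w_ac * d
    return num / den if den > 0 else 0.5  # uninformative default

def ls_bandit(p=(0.4, 0.6), steps=1000, seed=0):
    """Greedy LS agent on a two-armed Bernoulli bandit (illustrative)."""
    rng = random.Random(seed)
    counts = [[0, 0], [0, 0]]  # counts[arm] = [rewarded, unrewarded]
    for _ in range(steps):
        # Each arm is evaluated relative to the other arm's statistics.
        values = [
            ls_value(counts[i][0], counts[i][1],
                     counts[1 - i][0], counts[1 - i][1])
            for i in (0, 1)
        ]
        arm = 0 if values[0] >= values[1] else 1
        reward = rng.random() < p[arm]
        counts[arm][0 if reward else 1] += 1
    return counts

if __name__ == "__main__":
    # The agent should concentrate pulls on the better arm (p = 0.6)
    # without an explicit exploration parameter such as epsilon.
    print(ls_bandit())
```

Because each arm's LS value depends on the other arm's outcome counts, the evaluation is relative rather than absolute, which is what gives LS its implicit handling of the exploration-exploitation trade-off without an epsilon or temperature parameter. How this evaluation is extended to multiple states and delayed reward in LS-Q is detailed in the full text of the paper.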
Pages: 1-9
Page count: 9