Real-time Motion Generation for Imaginary Creatures Using Hierarchical Reinforcement Learning

Cited by: 0
Authors
Ogaki, Keisuke [1]
Nakamura, Masayoshi [1]
Affiliations
[1] DWANGO Co Ltd, Tokyo, Japan
Keywords
Reinforcement Learning; Q-Learning; Neural Network
DOI
10.1145/3214822.3214826
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Describing the motions of imaginary, original creatures is an essential part of animation and computer games. One approach to generating such motions is to find an optimal motion for approaching a goal using the creature's body and motor skills. Researchers currently employ deep reinforcement learning (DeepRL) to find such optimal motions. Some end-to-end DeepRL approaches learn a policy function that outputs a target pose for each joint according to the environment. In our study, we employ a hierarchical approach with a separate DeepRL decision maker, a simple exploration-based sequence maker, and an action token through which the two layers communicate. By optimizing the two functions independently, we obtain a lightweight, fast-learning system that can run on mobile devices. In addition, we propose a technique for learning the policy faster with the help of a heuristic rule: by treating the heuristic rule as an additional action token, we can incorporate it naturally via Q-learning. Experimental results show that creatures achieve better performance using both the heuristic and DeepRL together than using either independently.
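The two-layer design described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes a tabular Q-learning decision maker over discretized states (the paper uses DeepRL), a stand-in sequence maker, and hypothetical names such as choose_token, make_pose_sequence, and HEURISTIC_TOKEN. The heuristic rule is exposed to the decision maker as one extra action token, so it is selected and evaluated by ordinary Q-learning alongside the learned tokens.

"""Sketch (assumptions only): decision maker picks an action token,
sequence maker expands it into target joint poses, and one extra token
delegates to a hand-written heuristic rule."""
import random
import numpy as np

N_TOKENS = 8                 # learned action tokens (e.g. gait fragments), assumed
HEURISTIC_TOKEN = N_TOKENS   # extra token: "apply the hand-written heuristic"
N_STATES = 64                # discretized environment states, assumed
N_JOINTS = 6                 # joints of the imaginary creature, assumed

q_table = np.zeros((N_STATES, N_TOKENS + 1))

def choose_token(state: int, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice over action tokens, heuristic token included."""
    if random.random() < epsilon:
        return random.randrange(N_TOKENS + 1)
    return int(np.argmax(q_table[state]))

def q_update(state, token, reward, next_state, alpha=0.1, gamma=0.95):
    """Standard Q-learning update; the heuristic token is treated like any other action."""
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, token] += alpha * (target - q_table[state, token])

def make_pose_sequence(token: int, rng: np.random.Generator) -> np.ndarray:
    """Sequence-maker stand-in: expand a token into a short sequence of
    target joint poses (4 timesteps x N_JOINTS). The heuristic token maps
    to a fixed rule; learned tokens are sampled here for illustration,
    where the real system would find them by simple exploration."""
    if token == HEURISTIC_TOKEN:
        return np.tile(np.linspace(-0.5, 0.5, N_JOINTS), (4, 1))
    return rng.uniform(-1.0, 1.0, size=(4, N_JOINTS))

# Usage: pick a token for the current state, expand it, then update Q.
rng = np.random.default_rng(0)
state = 0
token = choose_token(state)
poses = make_pose_sequence(token, rng)
q_update(state, token, reward=1.0, next_state=1)

Because the heuristic competes with the learned tokens on equal footing, Q-learning can keep falling back on it in states where it is still the best known choice, which is one way to read the abstract's claim that the heuristic and DeepRL perform better together than either does alone.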
Pages: 2