Integration of Q-learning and Behavior Network Approach with Hierarchical Task Network Planning for Dynamic Environments

Cited by: 0

Authors
Sung, Yunsick [2 ]
Cho, Kyungeun [1 ]
Um, Kyhyun [1 ]
Affiliations
[1] Dongguk Univ, Dept Multimedia Engn, Seoul 100715, South Korea
[2] Dongguk Univ, Dept Game Engn, Grad Sch, Seoul 100715, South Korea
Keywords
Q-learning; Hierarchical task network; Behavior network
DOI
Not available
CLC classification
T [Industrial Technology]
Subject classification code
08
Abstract
The problem of automated planning by diverse virtual agents, cooperating or acting independently in a virtual environment, is commonly addressed with hierarchical task network (HTN) planning, Q-learning, and the behavior network approach. Each agent must plan its tasks in consideration of the movements of other agents in order to achieve its goals. HTN planning decomposes goal tasks into compound and primitive tasks; however, the time required for this decomposition increases drastically with the number of virtual agents and with substantial changes in the environment. This can be mitigated by combining HTN planning with Q-learning. Even so, dynamic changes in the environment can still prevent planned primitive tasks from being performed, so an approach that adapts to dynamic environments is required to increase the probability of achieving the goal. This paper therefore proposes integrating the behavior network approach as well. The proposed integrated approach was applied to a racing-car simulation in which a virtual agent selected and executed sequential actions in real time. Compared with traditional HTN planning, the proposed method improved performance by about 142%. These results verify that the proposed method can perform primitive tasks while taking the dynamic environment into account.
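The Q-learning component mentioned in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy task environment, reward values, and hyperparameters below are illustrative assumptions. It shows the standard tabular update Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)] used to learn which primitive task to execute in each state.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy.

    `step(s, a)` must return (next_state, reward, done).
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action (task) selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning temporal-difference update.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy "task" environment (an assumption for illustration): from each
# state, action 1 advances toward the goal (state 2, reward +1), while
# action 0 stays put with a small penalty.
def step(s, a):
    if a == 1:
        s2 = s + 1
        return s2, (1.0 if s2 == 2 else 0.0), s2 == 2
    return s, -0.1, False

Q = q_learning(n_states=3, n_actions=2, step=step)
# The learned greedy policy advances toward the goal in every state.
best = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```

In the paper's setting the "actions" would be primitive tasks proposed by the HTN decomposition, with the behavior network reacting to environment changes at execution time; the update rule itself is unchanged.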
Pages: 2079-2090
Page count: 12