Integration of Q-learning and Behavior Network Approach with Hierarchical Task Network Planning for Dynamic Environments

Cited: 0
|
Authors
Sung, Yunsick [2 ]
Cho, Kyungeun [1 ]
Um, Kyhyun [1 ]
Affiliations
[1] Dongguk Univ, Dept Multimedia Engn, Seoul 100715, South Korea
[2] Dongguk Univ, Dept Game Engn, Grad Sch, Seoul 100715, South Korea
Keywords
Q-learning; Hierarchical task network; Behavior network;
DOI
Not available
CLC number
T [Industrial Technology];
Subject classification code
08;
Abstract
The problem of automated planning by diverse virtual agents, cooperating or acting independently in a virtual environment, is commonly resolved by using hierarchical task network (HTN) planning, Q-learning, and the behavior network approach. Each agent must plan its tasks in consideration of the movements of the other agents in order to achieve its goals. HTN planning decomposes goal tasks into primitive and compound tasks. However, the time required for this decomposition increases drastically with the number of virtual agents and with substantial changes in the environment. This can be addressed by combining HTN planning with Q-learning. However, dynamic changes in the environment can still prevent planned primitive tasks from being performed. Thus, to increase the probability of achieving the goal, an approach that adapts to dynamic environments is required. This paper therefore proposes integrating the behavior network approach as well. The proposed integrated approach was applied to a racing car simulation in which a virtual agent selected and executed sequential actions in real time. Compared with traditional HTN planning, the proposed method improved performance by about 142%. We could therefore verify that the proposed method can perform primitive tasks while taking the dynamic environment into account.
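A minimal illustrative sketch of the combination described in the abstract is given below, assuming a toy task hierarchy; all task names, methods, reward values, and parameters are hypothetical and are not taken from the paper. Compound tasks are decomposed by HTN methods, an epsilon-greedy Q-learning rule selects among candidate decompositions, and a behavior-network-style executability check decides whether a planned primitive task can still run after the environment has changed.

import random
from collections import defaultdict

# Illustrative sketch only: every task name, method, and parameter here is
# hypothetical and not taken from the paper.

METHODS = {
    # compound task -> list of candidate decompositions (HTN methods)
    "ReachGoal": [
        ["FollowRacingLine", "Overtake"],      # method 0
        ["FollowRacingLine", "HoldPosition"],  # method 1
    ],
}
PRIMITIVES = {"FollowRacingLine", "Overtake", "HoldPosition"}

Q = defaultdict(float)            # Q[(compound_task, method_index)]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def choose_method(task):
    """Epsilon-greedy choice among the decompositions of a compound task."""
    n = len(METHODS[task])
    if random.random() < EPSILON:
        return random.randrange(n)
    return max(range(n), key=lambda m: Q[(task, m)])

def executable(primitive, world):
    """Behavior-network-style check: run a primitive only if its
    preconditions still hold in the (possibly changed) environment."""
    return world.get(primitive + "_possible", True)

def plan_and_execute(task, world):
    """Decompose compound tasks, executing primitives as they are reached."""
    if task in PRIMITIVES:
        if executable(task, world):
            return world.get("reward", 0.0)
        return -1.0                # blocked by a dynamic change
    m = choose_method(task)
    total = 0.0
    for sub in METHODS[task][m]:
        total += plan_and_execute(sub, world)
    # simple one-step Q-update on the chosen decomposition
    best = max(Q[(task, k)] for k in range(len(METHODS[task])))
    Q[(task, m)] += ALPHA * (total + GAMMA * best - Q[(task, m)])
    return total

if __name__ == "__main__":
    world = {"reward": 1.0, "Overtake_possible": False}
    for _ in range(100):
        plan_and_execute("ReachGoal", world)
    print({k: round(v, 2) for k, v in Q.items()})

In this sketch, a blocked primitive simply returns a negative reward, so the Q-values steer later decompositions away from methods whose primitives keep failing under dynamic changes; the paper's actual formulation of the reward and of the behavior network may differ.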
Pages: 2079-2090
Page count: 12
Related papers
50 in total
  • [1] Bioinspired Neural Network-Based Q-Learning Approach for Robot Path Planning in Unknown Environments
    Ni, Jianjun
    Li, Xinyun
    Hua, Mingang
    Yang, Simon X.
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2016, 31 (06): : 464 - 474
  • [2] Integration of Deep Q-Learning with a Grasp Quality Network for Robot Grasping in Cluttered Environments
    Huang, Chih-Yung
    Shao, Yu-Hsiang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (03)
  • [3] A Deep Q-Learning Network for Ship Stowage Planning Problem
    Shen, Yifan
    Zhao, Ning
    Xia, Mengjue
    Du, Xueqiang
    POLISH MARITIME RESEARCH, 2017, 24 : 102 - 109
  • [4] Q-learning with a growing RBF network for behavior learning in mobile robotics
    Li, J
    Duckett, T
    PROCEEDINGS OF THE SIXTH IASTED INTERNATIONAL CONFERENCE ON ROBOTICS AND APPLICATIONS, 2005, : 273 - 278
  • [5] Bioinspired neural network-based Q-learning approach for robot path planning in unknown environments
    Ni, Jianjun
    Li, Xinyun
    Hua, Mingang
    Yang, Simon X.
    International Journal of Robotics and Automation, 2016, 31 (06): : 464 - 474
  • [6] Player collaboration in virtual environments using hierarchical task network planning
    Masato, Daniele
    Chalmers, Stuart
    Preece, Alun
    APPLICATIONS AND INNOVATIONS IN INTELLIGENT SYSTEMS XV, 2008, : 203 - 216
  • [7] Local Path Planning: Dynamic Window Approach With Q-Learning Considering Congestion Environments for Mobile Robot
    Kobayashi, Masato
    Zushi, Hiroka
    Nakamura, Tomoaki
    Motoi, Naoki
    IEEE ACCESS, 2023, 11 : 96733 - 96742
  • [8] Dynamic Task Division and Allocation in Mobile Edge Computing Systems: A Latency Oriented Approach via Deep Q-Learning Network
    Tan, Pengcheng
    Li, Yang
    Dai, Minghui
    Wu, Yuan
    2022 IEEE 23RD INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE SWITCHING AND ROUTING (IEEE HPSR), 2022, : 252 - 259
  • [9] DHQN: a Stable Approach to Remove Target Network from Deep Q-learning Network
    Yang, Guang
    Li, Yang
    Fei, Di'an
    Huang, Tian
    Li, Qingyun
    Chen, Xingguo
    2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021, : 1474 - 1479
  • [10] Hierarchical task network planning as satisfiability
    Mali, AD
    RECENT ADVANCES IN AI PLANNING, 2000, 1809 : 122 - 134