Integration of Q-learning and Behavior Network Approach with Hierarchical Task Network Planning for Dynamic Environments

Cited by: 0
Authors
Sung, Yunsick [2 ]
Cho, Kyungeun [1 ]
Um, Kyhyun [1 ]
Affiliations
[1] Dongguk Univ, Dept Multimedia Engn, Seoul 100715, South Korea
[2] Dongguk Univ, Dept Game Engn, Grad Sch, Seoul 100715, South Korea
Keywords
Q-learning; Hierarchical task network; Behavior network;
DOI
None available
CLC number
T [Industrial Technology];
Discipline code
08 ;
Abstract
The problem of automated planning by diverse virtual agents, cooperating or acting independently in a virtual environment, is commonly addressed using hierarchical task network (HTN) planning, Q-learning, and the behavior network approach. Each agent must plan its tasks in consideration of the movements of other agents in order to achieve its goals. HTN planning decomposes goal tasks into primitive and compound tasks. However, the time required for this decomposition increases drastically with the number of virtual agents and with substantial changes in the environment. This can be mitigated by combining HTN planning with Q-learning. However, dynamic changes in the environment can still prevent planned primitive tasks from being performed. Thus, to increase the probability of goal achievement, an approach that adapts to dynamic environments is required. This paper proposes additionally integrating the behavior network approach. The proposed integrated approach was applied to a racing-car simulation in which a virtual agent selected and executed sequential actions in real time. In the comparison experiments, the proposed method outperformed traditional HTN planning by about 142%. We thereby verified that the proposed method can perform primitive tasks while accounting for a dynamic environment.
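The two core ingredients named in the abstract can be illustrated with a minimal sketch: recursive HTN decomposition of compound tasks into primitive tasks, and tabular Q-learning over (state, primitive task) pairs to select which primitive task to execute. This is an illustrative toy, not the authors' implementation; the task names and reward structure are hypothetical.

```python
import random

# Toy HTN: compound tasks map to ordered subtasks; primitives map to [].
# Task names are hypothetical, chosen to evoke the paper's racing-car domain.
HTN = {
    "reach_goal": ["plan_route", "drive"],
    "plan_route": [],                 # primitive
    "drive": ["accelerate", "steer"],
    "accelerate": [],                 # primitive
    "steer": [],                      # primitive
}

def decompose(task):
    """Recursively expand a task into an ordered list of primitive tasks."""
    subtasks = HTN[task]
    if not subtasks:                  # already primitive
        return [task]
    plan = []
    for sub in subtasks:
        plan.extend(decompose(sub))
    return plan

class QLearner:
    """Tabular Q-learning over (state, primitive-task) pairs."""
    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}                   # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy selection of the next primitive task."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning backup toward reward + gamma * max Q(next)."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )

plan = decompose("reach_goal")        # ["plan_route", "accelerate", "steer"]
```

In this reading, HTN decomposition fixes the structure of the plan while Q-learning adapts which primitive task to favor from experience; the paper's third component, the behavior network, would additionally arbitrate among primitives at execution time when the environment changes.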
Pages: 2079-2090 (12 pages)