Hierarchical fuzzy ART for Q-learning and its application in air combat simulation

Cited by: 5
Authors
Zhou Y. [1 ]
Ma Y. [1 ]
Song X. [1 ]
Gong G. [1 ]
Affiliations
[1] School of Automation Science and Electrical Engineering, Beihang University, XueYuan Road No. 37, HaiDian District, Beijing
Source
World Scientific, 2017, Vol. 08
Keywords
air combat simulation; Fuzzy ART; Q-learning; value function approximation;
DOI
10.1142/S1793962317500520
Abstract
Value function approximation plays an important role in reinforcement learning (RL) with continuous state spaces, which is widely used to build decision models in practice. Many traditional approaches require experienced designers to manually specify the formulation of the approximating function, leading to a rigid, non-adaptive representation of the value function. To address this problem, a novel Q-value function approximation method named 'Hierarchical fuzzy Adaptive Resonance Theory' (HiART) is proposed in this paper. HiART is based on the Fuzzy ART method and is an adaptive classification network that learns to segment the state space by classifying the training inputs automatically. HiART begins with a highly generalized structure in which the number of category nodes is limited, which speeds up learning at the early stage. The network is then refined gradually by creating attached sub-networks, and a layered network structure is formed during this process. Thanks to this adaptive structure, HiART reduces the dependence on expert experience for designing the network parameters. The effectiveness and adaptivity of HiART are demonstrated on the Mountain Car benchmark problem, showing both fast learning and low computation time. Finally, a simulation example of the one-versus-one air combat decision problem illustrates the applicability of HiART. © 2017 World Scientific Publishing Company.
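To make the core idea concrete, below is a minimal, simplified sketch of how a single Fuzzy ART layer can map continuous states to category nodes that each store a Q-value vector. This is not the paper's HiART implementation (it omits the hierarchical sub-networks and refinement); the class name FuzzyARTQ, the parameter values, and the update rule are illustrative assumptions based on standard Fuzzy ART and tabular Q-learning.

```python
import numpy as np

# Minimal sketch (not the paper's HiART): one Fuzzy ART layer clusters
# continuous states into category nodes; each node holds a Q-value vector
# over discrete actions. rho (vigilance), alpha (choice) and beta (learning
# rate) are standard Fuzzy ART parameters; the values below are assumptions.
class FuzzyARTQ:
    def __init__(self, n_actions, rho=0.85, alpha=0.01, beta=0.5):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.n_actions = n_actions
        self.weights = []    # one template vector per category node
        self.q_values = []   # one Q-vector per category node

    def _code(self, s):
        # States are assumed normalized to [0, 1]; complement coding doubles the input.
        s = np.clip(np.asarray(s, dtype=float), 0.0, 1.0)
        return np.concatenate([s, 1.0 - s])

    def select_category(self, s):
        x = self._code(s)
        if not self.weights:
            self.weights.append(x.copy())
            self.q_values.append(np.zeros(self.n_actions))
            return 0
        # Choice function T_j = |x ^ w_j| / (alpha + |w_j|); resonance if match >= rho.
        scores = [np.minimum(x, w).sum() / (self.alpha + w.sum()) for w in self.weights]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(x, self.weights[j]).sum() / x.sum()
            if match >= self.rho:
                # Resonance: move the template toward the fuzzy AND of input and template.
                self.weights[j] = self.beta * np.minimum(x, self.weights[j]) \
                                  + (1 - self.beta) * self.weights[j]
                return int(j)
        # No resonance: create a new category node for this region of the state space.
        self.weights.append(x.copy())
        self.q_values.append(np.zeros(self.n_actions))
        return len(self.weights) - 1

    def update_q(self, s, a, reward, s_next, gamma=0.95, lr=0.1):
        # Tabular Q-learning update, with category nodes playing the role of states.
        j, j_next = self.select_category(s), self.select_category(s_next)
        target = reward + gamma * self.q_values[j_next].max()
        self.q_values[j][a] += lr * (target - self.q_values[j][a])
```

In this sketch, the vigilance parameter rho controls how finely the state space is segmented; the hierarchical refinement described in the abstract would attach a higher-vigilance sub-network under a category node instead of relying on a single fixed rho.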