Hierarchical extreme learning machine based reinforcement learning for goal localization

Cited by: 1
Authors
AlDahoul, Nouar [1 ]
Htike, Zaw Zaw [1 ]
Akmeliawati, Rini [1 ]
Affiliations
[1] Int Islamic Univ Malaysia, Dept Mechatron Engn, Kuala Lumpur, Malaysia
DOI
10.1088/1757-899X/184/1/012055
Chinese Library Classification (CLC): V [Aeronautics, Astronautics]
Discipline classification codes: 08; 0825
Abstract
The objective of goal localization is to find the location of goals in noisy environments. Simple actions are performed to move the agent towards the goal. The goal detector should minimize the error between the predicted locations and the true ones, and only a few regions should be processed by the agent to reduce computational effort and speed up convergence. In this paper, a reinforcement learning (RL) method is used to find an optimal sequence of actions that localizes the goal region. The visual input, a set of images, is high-dimensional unstructured data and must be represented efficiently to obtain a robust detector. Various deep reinforcement learning models have already been used to localize a goal, but most of them take a long time to learn because the weights are fine-tuned iteratively to reach an accurate model. The Hierarchical Extreme Learning Machine (H-ELM) is a fast deep model that does not fine-tune its weights: the hidden weights are generated randomly and the output weights are computed analytically. In this work, the H-ELM algorithm is used to learn features that represent the visual input effectively. This paper proposes a combination of the Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. This combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
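A minimal sketch of the core idea is given below. It assumes a single random hidden layer rather than the paper's full multi-layer H-ELM, and a plain Q-learning update over the resulting features; all sizes, names, and hyperparameters are illustrative (the paper's own simulations were implemented in MATLAB, not Python).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened image patch, the hidden layer, and the
# number of movement actions. These are illustrative, not from the paper.
n_pixels, n_hidden, n_actions = 64, 128, 4

# ELM-style hidden layer: input weights and biases are drawn randomly once
# and never fine-tuned afterwards.
W_hid = rng.normal(scale=0.1, size=(n_pixels, n_hidden))
b_hid = rng.normal(scale=0.1, size=n_hidden)

def features(obs):
    """Sigmoid activations of the randomly weighted hidden layer."""
    return 1.0 / (1.0 + np.exp(-(obs @ W_hid + b_hid)))

def analytic_output_weights(H, T, reg=1e-3):
    """Batch ELM solve: output weights from hidden activations H and
    targets T via regularized least squares, with no iterative tuning."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

# Linear Q-function over the random features; only these output weights
# are adapted while the agent interacts with the environment.
W_out = np.zeros((n_hidden, n_actions))
alpha, gamma, epsilon = 0.05, 0.95, 0.1

def q_values(obs):
    return features(obs) @ W_out

def select_action(obs):
    """Epsilon-greedy choice among the movement actions."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(obs)))

def q_learning_step(obs, action, reward, next_obs, done):
    """One temporal-difference update of the output weights only."""
    target = reward if done else reward + gamma * np.max(q_values(next_obs))
    phi = features(obs)
    td_error = target - phi @ W_out[:, action]
    W_out[:, action] += alpha * td_error * phi
```

In a full H-ELM, several such randomly weighted layers would typically be stacked, with each layer's output mapping obtained by a batch solve like `analytic_output_weights`; the single-layer sketch is only meant to show why no iterative fine-tuning of hidden weights is needed.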
Pages: 7
Related papers (50 records in total)
• [41] Zadem, Mehdi; Mover, Sergio; Nguyen, Sao Mai. Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis. 2023 IEEE International Conference on Development and Learning (ICDL), 2023: 423-428.
• [42] Jiang, Hao; Shi, Dianxi; Xue, Chao; Wang, Yajie; Wang, Gongju; Zhang, Yongjun. GHGC: Goal-based Hierarchical Group Communication in Multi-Agent Reinforcement Learning. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020: 3507-3514.
• [43] Yan, Jun; Ma, Chuanhui; Kang, Bin; Wu, Xiaohuan; Liu, Huaping. Extreme Learning Machine and AdaBoost-Based Localization Using CSI and RSSI. IEEE Communications Letters, 2021, 25(6): 1906-1910.
• [44] Shukla, Sanyam; Yadav, R. N.; Naktode, Lokesh. Correlation Based Extreme Learning Machine. 2016 9th International Conference on Developments in eSystems Engineering (DeSE 2016), 2016: 268-272.
• [45] Cao, Jiuwen; Lin, Zhiping; Huang, Guang-Bin; Liu, Nan. Voting Based Extreme Learning Machine. Information Sciences, 2012, 185(1): 66-77.
• [46] Liu, Nan; Wang, Han. Ensemble Based Extreme Learning Machine. IEEE Signal Processing Letters, 2010, 17(8): 754-757.
• [47] Amado, Leonardo; Mirsky, Reuth; Meneguzzi, Felipe. Goal Recognition as Reinforcement Learning. Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 9644-9651.
• [48] Zhou, W. D.; Coggins, R. Emotion-Based Hierarchical Reinforcement Learning. Design and Application of Hybrid Intelligent Systems, 2003, 104: 951-960.
• [49] Hernandez-Gardiol, N.; Mahadevan, S. Hierarchical Memory-Based Reinforcement Learning. Advances in Neural Information Processing Systems 13, 2001, 13: 1047-1053.
• [50] Fang, Fen; Xu, Qianli; Lim, Joo-Hwee. Hierarchical Defect Detection Based on Reinforcement Learning. 2022 IEEE International Conference on Image Processing (ICIP), 2022: 791-795.