Hierarchical extreme learning machine based reinforcement learning for goal localization

Cited by: 1
Authors
AlDahoul, Nouar [1 ]
Htike, Zaw Zaw [1 ]
Akmeliawati, Rini [1 ]
Affiliations
[1] Int Islamic Univ Malaysia, Dept Mechatron Engn, Kuala Lumpur, Malaysia
DOI
10.1088/1757-899X/184/1/012055
Chinese Library Classification (CLC)
V [Aeronautics, Astronautics];
Discipline code
08 ; 0825 ;
Abstract
The objective of goal localization is to find the location of a goal in a noisy environment. Simple actions move the agent towards the goal, and the goal detector should minimize the error between the predicted locations and the true ones. Only a few regions need to be processed by the agent, which reduces the computational effort and increases the speed of convergence. In this paper, a reinforcement learning (RL) method is used to find an optimal sequence of actions that localizes the goal region. The visual input, a set of images, is high-dimensional unstructured data and must be represented efficiently to obtain a robust detector. Various deep reinforcement learning models have been used to localize a goal, but most of them take a long time to train because of the iterative weight fine-tuning required to reach an accurate model. The Hierarchical Extreme Learning Machine (H-ELM) is used here as a fast deep model that does not fine-tune its weights: hidden weights are generated randomly and output weights are computed analytically. In this work, the H-ELM algorithm is used to learn features that give an effective representation. This paper proposes a combination of the Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. The combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
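The paper gives no implementation details beyond the MATLAB simulations, so the following is a minimal Python sketch of the general idea the abstract describes: a feature layer with randomly generated, untrained hidden weights (a single-layer stand-in for the stacked H-ELM autoencoders) whose Q-value output weights are computed analytically by ridge regression inside a fitted Q-iteration loop, instead of by iterative gradient fine-tuning. The 1-D localization task, network sizes, hyperparameters, and all function names below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: localize a goal on a 1-D track of N cells.
# The "visual" observation is a noisy one-hot vector marking the agent's cell.
N, GOAL, N_ACTIONS = 20, 15, 2          # track length, goal cell, {left, right}

def observe(pos):
    obs = np.zeros(N)
    obs[pos] = 1.0
    return obs + 0.05 * rng.standard_normal(N)      # noisy visual input

def step(pos, action):
    nxt = int(np.clip(pos + (1 if action == 1 else -1), 0, N - 1))
    done = nxt == GOAL
    return nxt, (1.0 if done else -0.01), done

# ELM-style feature layer: hidden weights are random and never trained.
H_DIM = 64
W_hid = rng.standard_normal((N, H_DIM))
b_hid = rng.standard_normal(H_DIM)

def features(obs):
    return np.tanh(obs @ W_hid + b_hid)

# Output (Q-value) weights are obtained analytically by ridge regression,
# as in ELM: beta = (H^T H + I/C)^-1 H^T T, i.e. no gradient fine-tuning.
def fit_output_weights(H, T, c=1e-2):
    return np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / c, H.T @ T)

beta = np.zeros((H_DIM, N_ACTIONS))
gamma, eps = 0.95, 0.2

for it in range(30):                                 # fitted Q-iteration rounds
    S, A, R, S2, D = [], [], [], [], []
    pos = int(rng.integers(N))
    for t in range(500):                             # collect transitions
        phi = features(observe(pos))
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps \
            else int(np.argmax(phi @ beta))
        nxt, r, done = step(pos, a)
        S.append(phi); A.append(a); R.append(r)
        S2.append(features(observe(nxt))); D.append(done)
        pos = int(rng.integers(N)) if done else nxt
    S, S2 = np.array(S), np.array(S2)
    A, R, D = np.array(A), np.array(R), np.array(D, dtype=float)
    targets = S @ beta                               # keep untouched actions as-is
    targets[np.arange(len(A)), A] = R + gamma * (1 - D) * (S2 @ beta).max(axis=1)
    beta = fit_output_weights(S, targets)            # single linear solve per round

# Greedy rollout with the learned policy.
pos, steps = 0, 0
while pos != GOAL and steps < 100:
    pos, _, _ = step(pos, int(np.argmax(features(observe(pos)) @ beta)))
    steps += 1
print(f"Greedy rollout finished after {steps} steps (goal reached: {pos == GOAL})")
```

Because the output weights have a closed-form solution, each policy-improvement round is a single linear solve rather than many gradient epochs, which illustrates the source of the learning-speed advantage the abstract claims.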
Pages: 7
Related papers (50 in total)
  • [1] Utilizing hierarchical extreme learning machine based reinforcement learning for object sorting
    AlDahoul, Nouar
    Htike, Zaw Zaw
    [J]. INTERNATIONAL JOURNAL OF ADVANCED AND APPLIED SCIENCES, 2019, 6 (01) : 106 - 113
  • [2] Reinforcement Learning Based on Extreme Learning Machine
    Pan, Jie
    Wang, Xuesong
    Cheng, Yuhu
    Cao, Ge
    [J]. EMERGING INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, 2012, 304 : 80 - 86
  • [3] Receding Horizon Cache and Extreme Learning Machine Based Reinforcement Learning
    Shao, Zhifei
    Er, Meng Joo
    Huang, Guang-Bin
    [J]. 2012 12TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS & VISION (ICARCV), 2012, : 1591 - 1596
  • [4] Affine Transformation Based Hierarchical Extreme Learning Machine
    Ma, Rongzhi
    Cao, Jiuwen
    Wang, Tianlei
    Lai, Xiaoping
    [J]. 2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020,
  • [5] Hierarchical Extreme Learning Machine for Unsupervised Representation Learning
    Zhu, Wentao
    Miao, Jun
    Qing, Laiyun
    Huang, Guang-Bin
    [J]. 2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [6] Hybrid Hierarchical Extreme Learning Machine
    Li, Meiyi
    Wang, Changfei
    Sun, Qingshuai
    [J]. PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON MATHEMATICS AND ARTIFICIAL INTELLIGENCE (ICMAI 2018), 2018, : 37 - 41
  • [7] Hierarchical ensemble of Extreme Learning Machine
    Cai, Yaoming
    Liu, Xiaobo
    Zhang, Yongshan
    Cai, Zhihua
    [J]. PATTERN RECOGNITION LETTERS, 2018, 116 : 101 - 106
  • [8] Sample selection-based hierarchical extreme learning machine
    Xu, Xinzheng
    Li, Shan
    Liang, Tianming
    Sun, Tongfeng
    [J]. NEUROCOMPUTING, 2020, 377 : 95 - 102
  • [9] Hierarchical Pruning Discriminative Extreme Learning Machine
    Guo, Tan
    Tan, Xiaoheng
    Zhang, Lei
    [J]. PROCEEDINGS OF ELM-2017, 2019, 10 : 230 - 239
  • [10] Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine
    Fang, Qiang
    Zhang, Wenzhuo
    Wang, Xitong
    [J]. ELECTRONICS, 2021, 10 (16)