Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine

Cited by: 1
Authors
Fang, Qiang [1 ]
Zhang, Wenzhuo [1 ]
Wang, Xitong [1 ]
Affiliations
[1] National University of Defense Technology, College of Intelligence Science and Technology, Changsha 410073, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
visual navigation; inverse reinforcement learning (IRL); extreme learning machine (ELM); deep learning; A3C;
DOI
10.3390/electronics10161997
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, we address the challenges of training efficiency, reward-function design, and generalization in reinforcement learning for visual navigation, and we propose a regularized extreme learning machine-based inverse reinforcement learning approach (RELM-IRL) to improve navigation performance. Our contributions are mainly three-fold. First, we present a framework that combines an extreme learning machine with inverse reinforcement learning; it improves sample efficiency, obtains the reward function directly from the images observed by the agent, and generalizes better to new targets and new environments. Second, the extreme learning machine is regularized by multi-response sparse regression and the leave-one-out method, which further improves its generalization ability. Third, simulation experiments in the AI2-THOR environment showed that the proposed approach outperforms previous end-to-end approaches, demonstrating its effectiveness and efficiency.
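To make the reward-learning step concrete, the sketch below (an illustration, not the authors' code) shows the core extreme learning machine idea used as a reward-function approximator: the hidden-layer weights are drawn at random and frozen, so fitting reduces to a closed-form linear solve over the output weights. A plain L2 (ridge) regularizer stands in for the multi-response sparse regression and leave-one-out pruning described in the abstract, and all names (ELMRewardModel, n_hidden, reg) and the toy data are illustrative assumptions.

    # A minimal sketch (assumed, not the authors' code) of an extreme learning
    # machine used as a reward model: random, frozen hidden weights plus a
    # ridge-regularized closed-form solve for the output weights.
    import numpy as np

    class ELMRewardModel:
        def __init__(self, n_features, n_hidden=256, reg=1e-2, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(size=(n_features, n_hidden))  # fixed random input weights
            self.b = rng.normal(size=n_hidden)                # fixed random biases
            self.reg = reg                                    # ridge strength (stand-in for MRSR/LOO)
            self.beta = None                                  # output weights, solved in fit()

        def _hidden(self, X):
            # Sigmoid activation of the random projection of the input features.
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

        def fit(self, X, r):
            # Closed-form ridge regression: beta = (H^T H + reg*I)^-1 H^T r.
            H = self._hidden(X)
            A = H.T @ H + self.reg * np.eye(H.shape[1])
            self.beta = np.linalg.solve(A, H.T @ r)
            return self

        def predict(self, X):
            # Estimated reward for each observation-feature vector.
            return self._hidden(X) @ self.beta

    # Toy usage with made-up data; in the paper's setting X would be image
    # features of states observed by the navigating agent in AI2-THOR.
    X = np.random.rand(500, 64)
    r = np.sin(X.sum(axis=1))
    model = ELMRewardModel(n_features=64).fit(X, r)
    print("train MSE:", float(np.mean((model.predict(X) - r) ** 2)))

Because only the output weights are trained, fitting is a single linear-algebra step rather than an iterative gradient-descent loop, which is the property the abstract leans on for its sample-efficiency claim.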
Pages: 21
Related Papers
50 records in total
  • [21] Liu, Huaping; Sun, Fuchun; Yu, Yuanlong. Multitask Extreme Learning Machine for Visual Tracking. Cognitive Computation, 2014, 6(3): 391-404.
  • [23] Brown, Daniel S.; Niekum, Scott. Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications. Thirty-Third AAAI Conference on Artificial Intelligence / Thirty-First Innovative Applications of Artificial Intelligence Conference / Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 2019: 7749-7758.
  • [24] AlDahoul, Nouar; Htike, ZawZaw. Utilizing hierarchical extreme learning machine based reinforcement learning for object sorting. International Journal of Advanced and Applied Sciences, 2019, 6(1): 106-113.
  • [25] Muelling, Katharina; Boularias, Abdeslam; Mohler, Betty; Schölkopf, Bernhard; Peters, Jan. Learning strategies in table tennis using inverse reinforcement learning. Biological Cybernetics, 2014, 108(5): 603-619.
  • [27] Wu, Ji-Jie; Tseng, Kuo-Shih. Learning Spatial Search using Submodular Inverse Reinforcement Learning. 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2020), 2020: 7-14.
  • [28] Rao, Zhenhuan; Wu, Yuechen; Yang, Zifei; Zhang, Wei; Lu, Shijian; Lu, Weizhi; Zha, ZhengJun. Visual Navigation With Multiple Goals Based on Deep Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(12): 5445-5455.
  • [29] Gutierrez-Maestro, Eduardo; Lopez-Sastre, Roberto J.; Maldonado-Bascon, Saturnino. Collision Anticipation via Deep Reinforcement Learning for Visual Navigation. Pattern Recognition and Image Analysis, Part I, 2020, 11867: 386-397.
  • [30] Huang, Changxin; Zhang, Ronghui; Ouyang, Meizi; Wei, Pengxu; Lin, Junfan; Su, Jiang; Lin, Liang. Deductive Reinforcement Learning for Visual Autonomous Urban Driving Navigation. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(12): 5379-5391.