Robot behavioral selection using Q-learning

Cited by: 0
Authors
Martinson, E [1 ]
Stoytchev, A [1 ]
Arkin, R [1 ]
Affiliation
[1] Georgia Inst Technol, Coll Comp, Mobile Robot Lab, Atlanta, GA 30332 USA
Keywords
DOI
Not available
CLC Number (Chinese Library Classification)
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Q-learning has often been used in robotics to learn primitive behaviors. However, the cost of learning grows quickly with the number of states the robot can occupy and the number of actions it can take, so it is natural to reduce the state and action sets in order to improve the algorithm's efficiency. Robot behaviors and behavioral assemblages provide a good level of abstraction for speeding up robot learning. Instead of coordinating a set of primitive actions, we use Q-learning to coordinate a set of well-tested behavioral assemblages to accomplish a robotic target-intercept mission.
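In implementation terms, the abstraction described in the abstract amounts to ordinary tabular Q-learning in which each action is an entire behavioral assemblage rather than a primitive motor command, which is what keeps the state-action table small. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the assemblage names, coarse perceptual states, reward value, and learning parameters are assumptions chosen for the example.

    import random
    from collections import defaultdict

    # Minimal sketch of Q-learning over behavioral assemblages (illustrative only).
    # Each "action" is a whole, pre-built assemblage instead of a primitive command,
    # so the Q-table has one entry per (coarse state, assemblage) pair.
    ASSEMBLAGES = ["explore", "pursue_target", "intercept", "return_home"]  # hypothetical names

    ALPHA = 0.1    # learning rate (assumed value)
    GAMMA = 0.9    # discount factor (assumed value)
    EPSILON = 0.1  # exploration rate (assumed value)

    Q = defaultdict(float)  # Q[(state, assemblage)] -> estimated value, default 0.0

    def select_assemblage(state):
        """Epsilon-greedy choice among behavioral assemblages."""
        if random.random() < EPSILON:
            return random.choice(ASSEMBLAGES)
        return max(ASSEMBLAGES, key=lambda a: Q[(state, a)])

    def update(state, assemblage, reward, next_state):
        """One-step Q-learning update after the chosen assemblage has run."""
        best_next = max(Q[(next_state, a)] for a in ASSEMBLAGES)
        Q[(state, assemblage)] += ALPHA * (reward + GAMMA * best_next - Q[(state, assemblage)])

    # One interaction step with a coarse, hypothetical perceptual state:
    state = "target_visible_far"
    chosen = select_assemblage(state)
    # ... the chosen assemblage would run on the robot until a transition condition fires ...
    reward, next_state = 1.0, "target_visible_near"  # placeholder outcome
    update(state, chosen, reward, next_state)

Because selection happens at the level of assemblages, the table holds only |coarse states| x |assemblages| entries, which is the efficiency gain the abstract refers to.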
Pages: 970-977
Page count: 8
Related papers
50 records in total (items [31]-[40] shown below)
  • [31] Balance Control of Robot With CMAC Based Q-learning. Li Ming-ai; Jiao Li-fang; Qiao Jun-fei; Ruan Xiao-gang. 2008 Chinese Control and Decision Conference, Vols 1-11, 2008: 2668-2672.
  • [32] Optimization of industrial robot grasping processes with Q-learning. Belke, Manuel; Joeressen, Till; Petrovic, Oliver; Brecher, Christian. 2023 5th International Conference on Control and Robotics (ICCR), 2023: 113-119.
  • [33] Based on A* and q-learning search and rescue robot navigation. Pang, Tao; Ruan, Xiaogang; Wang, Ershen; Fan, Ruiyuan. Telkomnika - Indonesian Journal of Electrical Engineering, 2012, 10 (07): 1889-1896.
  • [34] Q-Learning for Autonomous Mobile Robot Obstacle Avoidance. Ribeiro, Tiago; Goncalves, Fernando; Garcia, Ines; Lopes, Gil; Fernando Ribeiro, A. 2019 19th IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC 2019), 2019: 243-249.
  • [35] Neural Q-Learning Based Mobile Robot Navigation. Yun, Soh Chin; Parasuraman, S.; Ganapathy, V.; Joe, Halim Kusuma. Materials Science and Information Technology, Pts 1-8, 2012, 433-440: 721+.
  • [36] An Online Home Energy Management System using Q-Learning and Deep Q-Learning. İzmitligil, H.; Karamancıoğlu, A. Sustainable Computing: Informatics and Systems, 2024, 43.
  • [37] Emergency-Response Locomotion of Hexapod Robot with Heuristic Reinforcement Learning Using Q-Learning. Yang, Ming-Chieh; Samani, Hooman; Zhu, Kening. Interactive Collaborative Robotics (ICR 2019), 2019, 11659: 320-329.
  • [38] Path planning for autonomous mobile robot using transfer learning-based Q-learning. Wu, Shengshuai; Hu, Jinwen; Zhao, Chunhui; Pan, Quan. Proceedings of 2020 3rd International Conference on Unmanned Systems (ICUS), 2020: 88-93.
  • [39] Learning Robot Grasping from a Random Pile with Deep Q-Learning. Chen, Bin; Su, Jianhua; Wang, Lili; Gu, Qipeng. Intelligent Robotics and Applications, ICIRA 2021, Pt II, 2021, 13014: 142-152.
  • [40] Antenna Selection in Energy Harvesting Relaying Networks Using Q-Learning Algorithm. Daliang Ouyang; Rui Zhao; Yuanjian Li; Rongxin Guo; Yi Wang. China Communications, 2021, 18 (04): 64-75.