Path planning for a statically stable biped robot using PRM and reinforcement learning

Cited: 12
Authors
Kulkarni, Prasad
Goswami, Dip
Guha, Prithwijit
Dutta, Ashish
Affiliations
[1] Nagoya Univ, Dept Engn Sci & Mech, Chikusa Ku, Nagoya, Aichi, Japan
[2] Tata Motors, Tata Motors Res Div, Pune, Maharashtra, India
[3] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117576, Singapore
[4] Indian Inst Technol, Dept Elect Engn, Kanpur 208016, Uttar Pradesh, India
[5] Indian Inst Technol, Dept Mech Engn, Kanpur 208016, Uttar Pradesh, India
Keywords
potential function; PRM; reinforcement learning; statically stable biped robot;
DOI
10.1007/s10846-006-9071-3
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
This paper discusses path planning and obstacle avoidance for a statically stable biped robot using a probabilistic roadmap (PRM) and reinforcement learning. The main objective is to compare these two path-planning methods for applications involving a biped robot. The statically stable biped robot under consideration is a 4-degree-of-freedom walking robot that can follow any given trajectory on flat ground and has a fixed step length of 200 mm. It is shown that the path generated by the first method is the shortest smooth path, but it also increases the computational burden on the controller, as the robot has to turn at almost every step. The second method, however, produces paths composed of straight-line segments and hence requires less computation for trajectory following. Experiments were also conducted to demonstrate the effectiveness of the reinforcement-learning-based path-planning method.
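The abstract contrasts a roadmap-based planner with a reinforcement-learning planner. The two sketches below are illustrative only and are not taken from the paper: the first builds a small probabilistic roadmap over a 2-D workspace with circular obstacles and extracts the shortest roadmap path with Dijkstra's algorithm; the second learns a path on a coarse occupancy grid with tabular Q-learning, whose greedy rollout yields the kind of straight-line (axis-aligned) segments the abstract attributes to the second method. All workspace bounds, obstacle geometries, rewards, and hyperparameters are assumed values, not the paper's.

```python
import math
import random
import heapq

def collision_free(p, q, obstacles, step=0.05):
    """Check the straight segment p->q against circular obstacles (x, y, r)."""
    n = max(int(math.dist(p, q) / step), 1)
    for i in range(n + 1):
        t = i / n
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.dist((x, y), (ox, oy)) <= r for ox, oy, r in obstacles):
            return False
    return True

def build_prm(start, goal, obstacles, n_samples=200, k=8, bounds=(0.0, 10.0)):
    """Sample collision-free nodes and connect each to its k nearest neighbours."""
    lo, hi = bounds
    nodes = [start, goal]                                   # node 0 = start, node 1 = goal
    while len(nodes) < n_samples + 2:
        p = (random.uniform(lo, hi), random.uniform(lo, hi))
        if all(math.dist(p, (ox, oy)) > r for ox, oy, r in obstacles):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        nbrs = sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in nbrs:
            if collision_free(p, nodes[j], obstacles):
                d = math.dist(p, nodes[j])
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def shortest_path(nodes, edges, src=0, dst=1):
    """Dijkstra over the roadmap graph."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return [nodes[i] for i in reversed(path)]

obstacles = [(4.0, 4.0, 1.0), (6.5, 7.0, 1.5)]              # (x, y, radius), illustrative values
nodes, edges = build_prm((1.0, 1.0), (9.0, 9.0), obstacles)
print(shortest_path(nodes, edges))
```

A roadmap path threads through randomly sampled waypoints, so the robot generally has to change heading at nearly every node; this is consistent with the turning burden the abstract mentions. A learned grid policy, by contrast, changes heading only where the greedy action changes, giving straight-line segments:

```python
import random

def q_learning_path(grid, start, goal, episodes=500, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning on a coarse occupancy grid (1 = obstacle cell)."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]              # up, down, left, right
    Q = {(r, c): [0.0] * 4 for r in range(rows) for c in range(cols)}

    def step(state, a):
        r, c = state[0] + moves[a][0], state[1] + moves[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return state, -5.0                              # bump penalty, stay put
        return (r, c), (100.0 if (r, c) == goal else -1.0)  # step cost drives short paths

    for _ in range(episodes):
        s = start
        for _ in range(rows * cols * 4):                    # cap episode length
            a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda i: Q[s][i])
            s2, reward = step(s, a)
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break

    path, s = [start], start                                # greedy rollout of learned policy
    while s != goal and len(path) < rows * cols:
        s, _ = step(s, max(range(4), key=lambda i: Q[s][i]))
        path.append(s)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(q_learning_path(grid, start=(0, 0), goal=(2, 3)))
```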
Pages: 197-214
Number of pages: 18
Related articles (50 records in total)
  • [1] Path Planning for a Statically Stable Biped Robot Using PRM and Reinforcement Learning
    Prasad Kulkarni
    Dip Goswami
    Prithwijit Guha
    Ashish Dutta
    [J]. Journal of Intelligent and Robotic Systems, 2006, 47 : 197 - 214
  • [2] Navigation and Path Planning Using Reinforcement Learning for a Roomba Robot
    Romero-Marti, Daniel Paul
    Nunez-Varela, Jose Ignacio
    Soubervielle-Montalvo, Carlos
    Orozco-de-la-Paz, Alfredo
    [J]. 2016 XVIII CONGRESO MEXICANO DE ROBOTICA (COMROB 2016), 2016,
  • [3] Path Planning of Cleaning Robot with Reinforcement Learning
    Moon, Woohyeon
    Park, Bumgeun
    Nengroo, Sarvar Hussain
    Kim, Taeyoung
    Har, Dongsoo
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE), 2022,
  • [4] The motion control of a statically stable biped robot on an uneven floor
    Shih, CL
    Chiou, CJ
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 1998, 28 (02): : 244 - 249
  • [5] Multi-Robot Path Planning Method Using Reinforcement Learning
    Bae, Hyansu
    Kim, Gidong
    Kim, Jonguk
    Qian, Dianwei
    Lee, Sukgyu
    [J]. APPLIED SCIENCES-BASEL, 2019, 9 (15):
  • [6] Mobile Service Robot Path Planning Using Deep Reinforcement Learning
    Kumaar, A. A. Nippun
    Kochuvila, Sreeja
    [J]. IEEE ACCESS, 2023, 11 : 100083 - 100096
  • [7] Real Time Path Planning of Robot using Deep Reinforcement Learning
    Raajan, Jeevan
    Srihari, P. V.
    Satya, Jayadev P.
    Bhikkaji, B.
    Pasumarthy, Ramkrishna
    [J]. IFAC PAPERSONLINE, 2020, 53 (02): : 15602 - 15607
  • [8] Path planning of a mobile robot by optimization and reinforcement learning
    Harukazu Igarashi
    [J]. Artificial Life and Robotics, 2002, 6 (1-2) : 59 - 65
  • [9] Robot path planning based on deep reinforcement learning
    Long, Yinxin
    He, Huajin
    [J]. 2020 IEEE CONFERENCE ON TELECOMMUNICATIONS, OPTICS AND COMPUTER SCIENCE (TOCS), 2020, : 151 - 154
  • [10] Robot path planning algorithm based on reinforcement learning
    Zhang F.
    Li N.
    Yuan R.
    Fu Y.
    [J]. Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2018, 46 (12): : 65 - 70