A SHOOTING STRATEGY WHEN MOVING ON HUMANOID ROBOTS USING INVERSE KINEMATICS AND Q-LEARNING

Cited by: 6
Authors
Rezaeipanah, Amin [1 ]
Jamshidi, Zahra [2 ]
Jafari, Shahram [3 ]
Affiliations
[1] Univ Rahjuyan Danesh Borazjan, Dept Comp Engn, Bushehr, Iran
[2] Islamic Azad Univ, Dept Comp Engn, Bushehr Branch, Bushehr, Iran
[3] Shiraz Univ, Sch Elect & Comp Engn, Shiraz, Iran
Source
INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION | 2021, Vol. 36, Iss. 03
Keywords
RoboCup3D; shooting strategy; humanoid robots; Q-learning; inverse kinematics;
DOI
10.2316/J.2021.206-0393
CLC Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Artificial intelligence and robotics are two fields of study currently experiencing formidable growth. The RoboCup initiative fosters these developments through a series of experiments with new ideas and approaches. Increasing the number of successful shots a soccer robot can take is now a common goal of all teams in the RoboCup3D soccer league. This article develops a shooting strategy for a humanoid soccer robot in RoboCup3D using inverse kinematics (IK) and Q-learning while the robot is walking. The vision perceptor in the RoboCup3D soccer simulation is noisy and has a small calibration error; accordingly, the robot's movement parameters, such as angle and speed, are dynamically optimized by Q-learning. Finally, once the robot is suitably positioned relative to the ball and the goal, it invokes IK to execute the shooting strategy. The simulation results show the superiority of the proposed algorithm over most competitors in Iran's Open RoboCup3D and RoboCup soccer leagues.
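The Q-learning component described in the abstract can be illustrated with a minimal tabular sketch. The state labels, the discretization of the (angle, speed) action space, and the reward signal below are illustrative assumptions, not the paper's actual formulation:

```python
import random

# Hypothetical discretization of the walk parameters the abstract says
# Q-learning tunes: a turn-angle adjustment (degrees) and a walk speed (m/s).
ANGLES = [-10, 0, 10]
SPEEDS = [0.4, 0.6, 0.8]
ACTIONS = [(a, s) for a in ANGLES for s in SPEEDS]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed hyperparameters

Q = {}  # Q[(state, action)] -> estimated value; missing entries default to 0

def choose_action(state):
    """Epsilon-greedy selection over the discrete (angle, speed) actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example transition: walking near the ball, a shot-ready pose earns reward 1.
update("near_ball", (0, 0.6), 1.0, "shot_ready")
```

In the paper's setting, the reward would come from the simulation (e.g. whether the robot reached a position from which the IK shooting motion could be triggered); the update rule itself is the standard one.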
Pages: 133-139
Page count: 7