Multi-agent Q-learning Based Navigation in an Unknown Environment

Cited by: 0
Authors
Nath, Amar [1 ]
Niyogi, Rajdeep [2 ]
Singh, Tajinder [1 ]
Kumar, Virendra [3 ]
Affiliations
[1] St Longowal Inst Engn & Technol Deemed Univ, Longowal, India
[2] Indian Inst Technol Roorkee, Roorkee, Uttar Pradesh, India
[3] Cent Univ Tamil Nadu, Thiruvarur, India
DOI
10.1007/978-3-030-99584-3_29
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Collaborative task execution in an unknown, dynamic environment is an important and challenging research problem in autonomous robotic systems. In situations such as search and rescue, task execution must begin as early as possible, yet the interval between the announcement of a team and the arrival of its members at the task location delays execution. Existing distributed approaches to task execution assume that the path to the task is known. In an environment such as a building, however, the positions of doors may not be known in advance, and some doors may close during execution. An agent should therefore first learn a map of the environment, and this learning can be accelerated by using multiple agents. This paper proposes a distributed multi-agent Q-learning-based approach for navigation in an unknown environment. The proposed approach is implemented in ARGoS, a realistic multi-robot simulator.
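As a rough illustration of the Q-learning component named in the abstract, the sketch below shows generic single-agent tabular Q-learning on a small grid in which blocked cells stand in for closed doors. It is not the authors' distributed multi-agent algorithm; the grid size, blocked cells, reward values, and hyperparameters are illustrative assumptions.

# Minimal tabular Q-learning sketch for grid navigation (illustration only,
# not the paper's algorithm). One agent learns action values on a small grid;
# blocked cells stand in for closed doors in an unknown building.
import random

ROWS, COLS = 5, 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL = (4, 4)
BLOCKED = {(2, 2), (2, 3)}                     # assumed "closed door" cells

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; hitting a wall or a blocked cell leaves the agent in place."""
    dr, dc = ACTIONS[action]
    nxt = (state[0] + dr, state[1] + dc)
    if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS) or nxt in BLOCKED:
        nxt = state
    reward = 10.0 if nxt == GOAL else -1.0     # -1 per move encourages short paths
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Greedy rollout of the learned policy, for inspection
state, path = (0, 0), [(0, 0)]
for _ in range(ROWS * COLS):
    action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
    state, _, done = step(state, action)
    path.append(state)
    if done:
        break
print(path)

In the multi-agent setting the abstract describes, several agents would explore in parallel and share what they learn about the environment to accelerate mapping; the sketch above shows only the per-agent value update that such an approach builds on.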
Pages: 330-340
Number of pages: 11