Multi-agent Q-learning Based Navigation in an Unknown Environment

Cited by: 0
Authors
Nath, Amar [1 ]
Niyogi, Rajdeep [2 ]
Singh, Tajinder [1 ]
Kumar, Virendra [3 ]
Affiliations
[1] St Longowal Inst Engn & Technol Deemed Univ, Longowal, India
[2] Indian Inst Technol Roorkee, Roorkee, Uttar Pradesh, India
[3] Cent Univ Tamil Nadu, Thiruvarur, India
Keywords
DOI
10.1007/978-3-030-99584-3_29
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Collaborative task execution in an unknown and dynamic environment is an important and challenging research area in autonomous robotic systems. In situations such as search and rescue, it is essential to begin task execution as early as possible; however, the delay between the announcement of a team and the arrival of its members at the task location postpones execution. Distributed approaches to task execution typically assume that the path is known. In a real environment, say a building, the positions of the doors may not be known, and some doors may close during task execution. Hence, an agent must first learn a map of the environment, and this learning can be accelerated by using multiple agents. This paper proposes a distributed multi-agent Q-learning-based approach for navigation in an unknown environment. The proposed approach is implemented in ARGoS, a realistic multi-robot simulator.
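To make the idea of Q-learning-based navigation concrete, the following is a minimal illustrative sketch, not the authors' algorithm: several agents share one Q-table while exploring a toy grid in which walls stand in for unknown or closed doors. The grid, reward values, and hyperparameters are all hypothetical choices for illustration.

```python
import random

# Hypothetical 4x4 grid: 0 = free cell, 1 = wall (e.g. a closed door).
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
]
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; bumping into a wall or the boundary leaves the agent in place."""
    r, c = state[0] + action[0], state[1] + action[1]
    if 0 <= r < 4 and 0 <= c < 4 and GRID[r][c] == 0:
        state = (r, c)
    reward = 10.0 if state == GOAL else -1.0  # step cost encourages short paths
    return state, reward, state == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, n_agents=2, seed=0):
    """Tabular Q-learning with a Q-table shared by all agents."""
    rng = random.Random(seed)
    Q = {}  # (state, action_index) -> value
    for _ in range(episodes):
        states = [(0, 0)] * n_agents  # all agents start at the entrance
        for _ in range(50):  # episode length cap
            for i in range(n_agents):
                s = states[i]
                if rng.random() < eps:  # epsilon-greedy exploration
                    a = rng.randrange(4)
                else:
                    a = max(range(4), key=lambda k: Q.get((s, k), 0.0))
                s2, rwd, done = step(s, ACTIONS[a])
                best_next = max(Q.get((s2, k), 0.0) for k in range(4))
                # Standard Q-learning update
                Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                    rwd + gamma * best_next - Q.get((s, a), 0.0)
                )
                states[i] = (0, 0) if done else s2
    return Q

def greedy_path(Q, start=(0, 0), limit=20):
    """Follow the learned greedy policy from start toward the goal."""
    path, s = [start], start
    for _ in range(limit):
        a = max(range(4), key=lambda k: Q.get((s, k), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

Because every agent writes into the same table, updates made by one agent immediately benefit the others, which is the intuition behind accelerating map learning with multiple agents; a distributed implementation would instead exchange Q-values or experiences over the network.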
Pages: 330-340
Page count: 11