Learning to Communicate for Mobile Sensing with Multi-agent Reinforcement Learning

Cited: 0
Authors
Zhang, Bolei [1 ]
Liu, Junliang [2 ]
Xiao, Fu [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp, Nanjing, Peoples R China
[2] JD Com, Beijing, Peoples R China
Keywords
Mobile sensing; Reinforcement learning; Learning to communicate
DOI
10.1007/978-3-030-86130-8_48
Chinese Library Classification
TP [Automation and computer technology]
Discipline Code
0812
Abstract
Mobile sensing has become a promising paradigm for monitoring the state of the environment. Equipped with sensors, a group of unmanned vehicles can move around autonomously to perform distributed sensing. To maximize sensing coverage, a critical challenge is coordinating the decentralized vehicles so that they cooperate. In this work, we propose a novel algorithm, Comm-Q, in which the vehicles learn to communicate for cooperation via multi-agent reinforcement learning. At each step, each vehicle broadcasts a message to the others and conditions on the aggregated received messages to update its sensing policy. The messages themselves are also learned via reinforcement learning. In addition, we decompose and reshape the reward function for more efficient policy training. Experimental results show that our algorithm is scalable and converges quickly during training, and that it significantly outperforms other baselines during execution. The results validate that the communication messages play an important role in coordinating the behaviors of the different vehicles.
Pages: 612-623
Page count: 12
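The decision loop sketched in the abstract — each vehicle broadcasts a message, aggregates the messages received from the others, and conditions its Q-values on observation plus aggregated message — can be illustrated with a minimal sketch. All shapes, names, and the linear Q/message heads below are illustrative assumptions; the paper's actual Comm-Q uses learned neural policies and reward decomposition/shaping not reproduced here.

```python
# Hypothetical sketch of one message-conditioned decision step
# (not the paper's implementation; shapes and heads are assumptions).
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, MSG_DIM, N_ACTIONS = 3, 4, 2, 5

# Per-agent linear stand-ins for the learned networks:
#   message head: observation -> broadcast message
#   Q head: [observation, aggregated message] -> action values
msg_weights = [rng.normal(size=(OBS_DIM, MSG_DIM)) for _ in range(N_AGENTS)]
q_weights = [rng.normal(size=(OBS_DIM + MSG_DIM, N_ACTIONS))
             for _ in range(N_AGENTS)]

def step(observations):
    """One decision step: broadcast, aggregate, act greedily."""
    # 1) Each agent broadcasts a message computed from its own observation.
    messages = [obs @ w for obs, w in zip(observations, msg_weights)]
    actions = []
    for i in range(N_AGENTS):
        # 2) Aggregate the messages received from the *other* agents
        #    (mean aggregation is one simple, permutation-invariant choice).
        received = [m for j, m in enumerate(messages) if j != i]
        agg = np.mean(received, axis=0)
        # 3) Condition the Q-values on observation + aggregated message
        #    and pick the greedy action.
        q_vals = np.concatenate([observations[i], agg]) @ q_weights[i]
        actions.append(int(np.argmax(q_vals)))
    return actions

observations = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = step(observations)
```

In training, the message heads would be updated by the same reinforcement-learning gradient as the Q-heads, which is what lets the message content itself be learned rather than hand-designed.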