A Fairness-Aware Cooperation Strategy for Multi-Agent Systems Driven by Deep Reinforcement Learning

Cited by: 0
Authors
Liu, Zhixiang [1 ,2 ]
Shi, Huaguang [1 ,2 ]
Yan, Wenhao [1 ,2 ]
Jin, Zhanqi [1 ,2 ]
Zhou, Yi [1 ,2 ]
Affiliations
[1] Henan Univ, Sch Artificial Intelligence, Zhengzhou 450046, Peoples R China
[2] Int Joint Res Lab Cooperat Vehicular Networks Hen, Zhengzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-agent collaboration; Fairness; MADDPG; Gini coefficient;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Research on multi-agent cooperation strategies has attracted widespread attention in recent years. However, current deep reinforcement learning algorithms mainly focus on improving cooperation efficiency while ignoring fairness. Taking both collaboration efficiency and fairness into account is a complex multi-objective optimization problem. To address this concern, we design a Fair-Efficiency Multi-Agent Deep Deterministic Policy Gradient (FE-MADDPG) algorithm. First, we design a fair and efficient reward function that defines each agent's resource occupancy rate as the ratio of its average reward to the total reward, ensuring fairness for each agent. Then, we improve the MADDPG algorithm by incorporating this reward function and compare the efficiency of the agents. Finally, we employ the Gini coefficient and the time consumed to complete the task as evaluation indicators of fairness and efficiency, respectively. Simulation results show that the FE-MADDPG algorithm significantly improves the efficiency of the system while ensuring fairness for each agent.
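As a concrete illustration of the two fairness quantities mentioned in the abstract, the short Python sketch below computes a per-agent resource occupancy rate (each agent's average reward divided by the total reward) and the Gini coefficient used as the fairness indicator. The function names and the exact occupancy formula are assumptions inferred from the abstract, not the authors' released code.

    import numpy as np

    def occupancy_rates(avg_rewards):
        # Hypothetical reading of the abstract: each agent's share is its
        # average reward divided by the total reward across all agents.
        r = np.asarray(avg_rewards, dtype=float)
        total = r.sum()
        return r / total if total > 0 else np.zeros_like(r)

    def gini_coefficient(rewards):
        # Standard Gini coefficient over per-agent rewards: 0 means a
        # perfectly equal allocation, values near 1 mean a few agents
        # capture most of the reward.
        r = np.sort(np.asarray(rewards, dtype=float))
        n = r.size
        if n == 0 or r.sum() == 0:
            return 0.0
        index = np.arange(1, n + 1)
        return 2.0 * np.sum(index * r) / (n * r.sum()) - (n + 1.0) / n

    # Example: a nearly even reward split yields a small Gini coefficient.
    avg_rewards = [4.0, 5.0, 3.0, 4.5]
    print(occupancy_rates(avg_rewards))   # approx. [0.24, 0.30, 0.18, 0.27]
    print(gini_coefficient(avg_rewards))  # approx. 0.10, i.e. close to fair

In the paper's evaluation, a lower Gini coefficient over the agents' rewards indicates a fairer allocation, while the task-completion time captures efficiency; the sketch only shows how such indicators can be computed.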
Pages: 4943 - 4948
Number of pages: 6