Deep Multi-Agent Reinforcement Learning With Minimal Cross-Agent Communication for SFC Partitioning

Cited by: 1
Authors
Pentelas, Angelos [1 ,2 ]
De Vleeschauwer, Danny [1 ]
Chang, Chia-Yu [1 ]
De Schepper, Koen [1 ]
Papadimitriou, Panagiotis [2 ]
Affiliations
[1] Nokia Bell Labs, B-2018 Antwerp, Belgium
[2] Univ Macedonia, Dept Appl Informat, Thessaloniki 54636, Greece
Funding
European Union Horizon 2020;
Keywords
Topology; Servers; Resource management; Virtual links; Network topology; Network function virtualization; Multi-agent systems; Reinforcement learning; Multi-agent reinforcement learning; self-learning orchestration
DOI
10.1109/ACCESS.2023.3269576
Chinese Library Classification
TP [automation and computer technology];
Discipline Code
0812;
Abstract
Network Function Virtualization (NFV) decouples network functions from the underlying specialized devices, enabling network processing with higher flexibility and resource efficiency. This promotes the use of virtual network functions (VNFs), which can be grouped to form a service function chain (SFC). A critical challenge in NFV is SFC partitioning (SFCP), which is mathematically expressed as a graph-to-graph mapping problem. Given its NP-hardness, SFCP is commonly solved by approximation methods. Yet, the relevant literature exhibits a gradual shift towards data-driven SFCP frameworks, such as (deep) reinforcement learning (RL). In this article, we initially identify crucial limitations of existing RL-based SFCP approaches. In particular, we argue that most of them stem from the centralized implementation of RL schemes. Therefore, we devise a cooperative deep multi-agent reinforcement learning (DMARL) scheme for decentralized SFCP, which fosters efficient communication between neighboring agents. Our simulation results (i) demonstrate that DMARL outperforms a state-of-the-art centralized double deep Q-learning algorithm, (ii) unfold the fundamental behaviors learned by the team of agents, (iii) highlight the importance of information exchange between agents, and (iv) showcase the impact of various network topologies on DMARL efficiency.
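The abstract describes a cooperative scheme in which each agent manages one server and exchanges only minimal information with its neighbors when deciding where the VNFs of a chain are placed. As a rough, illustrative sketch only (not the authors' implementation, which relies on deep Q-networks with inter-agent communication), the Python snippet below mimics this setup with a toy linear Q-approximation: the names NodeAgent and local_state, the two-action choice (reject or host a VNF), the reward values, and the three-server topology are all hypothetical assumptions introduced for illustration.

import numpy as np

# Toy sketch of decentralized SFC partitioning: one agent per server decides
# whether to host the next VNF of a chain, observing its own residual capacity,
# the VNF demand, and a minimal message from its neighbors (their capacities).
# Linear Q-learning stands in here for the deep Q-networks used in the paper.

class NodeAgent:
    def __init__(self, n_features, n_actions=2, lr=0.01, gamma=0.95, eps=0.1):
        self.w = np.zeros((n_actions, n_features))  # linear Q(s, a) = w_a . s
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def q_values(self, state):
        return self.w @ state

    def act(self, state):
        # epsilon-greedy over {0: reject the VNF, 1: host the VNF}
        if np.random.rand() < self.eps:
            return np.random.randint(self.w.shape[0])
        return int(np.argmax(self.q_values(state)))

    def update(self, state, action, reward, next_state):
        # one-step TD update of the linear Q-approximation
        target = reward + self.gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.w[action] += self.lr * td_error * state

def local_state(own_capacity, vnf_demand, neighbor_msgs):
    # local observation plus the minimal messages received from neighbors
    return np.concatenate(([own_capacity, vnf_demand], neighbor_msgs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_servers = 3
    # features: own residual capacity, VNF demand, two neighbor messages
    agents = [NodeAgent(n_features=2 + (n_servers - 1)) for _ in range(n_servers)]
    for episode in range(500):
        caps = np.ones(n_servers)                     # reset residual CPU per server
        for demand in rng.uniform(0.1, 0.6, size=4):  # one SFC of four VNFs
            for i, agent in enumerate(agents):
                msgs = np.delete(caps, i)             # neighbors' residual capacities
                s = local_state(caps[i], demand, msgs)
                a = agent.act(s)
                # +1 for a feasible placement, -1 for overcommitting,
                # 0 for rejecting and passing the VNF to the next server
                if a == 1 and caps[i] >= demand:
                    reward, caps[i] = 1.0, caps[i] - demand
                elif a == 1:
                    reward = -1.0
                else:
                    reward = 0.0
                next_s = local_state(caps[i], demand, np.delete(caps, i))
                agent.update(s, a, reward, next_s)
                if reward > 0:                        # VNF placed; move to the next VNF
                    break

The point of the sketch is the construction of the agent's state: each server conditions its decision only on its own residual capacity, the incoming VNF demand, and a short message carrying the neighbors' residual capacities, which mirrors the minimal cross-agent communication emphasized in the title.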
Pages: 40384-40398
Number of pages: 15
Related Papers
50 records in total
  • [21] When Does Communication Learning Need Hierarchical Multi-Agent Deep Reinforcement Learning
    Ossenkopf, Marie
    Jorgensen, Mackenzie
    Geihs, Kurt
    CYBERNETICS AND SYSTEMS, 2019, 50 (08) : 672 - 692
  • [22] Learning to Schedule Joint Radar-Communication With Deep Multi-Agent Reinforcement Learning
    Lee, Joash
    Niyato, Dusit
    Guan, Yong Liang
    Kim, Dong In
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (01) : 406 - 422
  • [23] Air-Ground Coordination Communication by Multi-Agent Deep Reinforcement Learning
    Ding, Ruijin
    Gao, Feifei
    Yang, Guanghua
    Shen, Xuemin Sherman
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [24] Multi-agent communication cooperation based on deep reinforcement learning and information theory
    Gao, Bing
    Zhang, Zhejie
    Zou, Qijie
    Liu, Zhiguo
    Zhao, Xiling
    Hangkong Xuebao/Acta Aeronautica et Astronautica Sinica, 2024, 45 (18):
  • [25] Multi-agent reinforcement learning based on local communication
    Zhang, Wenxu
    Ma, Lei
    Li, Xiaonan
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2019, 22 (Suppl 6): 15357 - 15366
  • [26] Improving coordination with communication in multi-agent reinforcement learning
    Szer, D
    Charpillet, F
ICTAI 2004: 16TH IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004: 436 - 440
  • [27] Multi-Agent Reinforcement Learning for Coordinating Communication and Control
    Mason, Federico
    Chiariotti, Federico
    Zanella, Andrea
    Popovski, Petar
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (04) : 1566 - 1581
  • [28] Biases for Emergent Communication in Multi-agent Reinforcement Learning
    Eccles, Tom
    Bachrach, Yoram
    Lever, Guy
    Lazaridou, Angeliki
    Graepel, Thore
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [29] Low Entropy Communication in Multi-Agent Reinforcement Learning
    Yu, Lebin
    Qiu, Yunbo
    Wang, Qiexiang
    Zhang, Xudong
    Wang, Jian
ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 5173 - 5178