SmartFCT: Improving power-efficiency for data center networks with deep reinforcement learning

Cited: 20
Authors
Sun, Penghao [1 ]
Guo, Zehua [2 ]
Liu, Sen [3 ]
Lan, Julong [1 ]
Wang, Junchao [1 ]
Hu, Yuxiang [1 ]
Affiliations
[1] Natl Digital Switching Syst Engn & Technol R&D, Zhengzhou, Peoples R China
[2] Beijing Inst Technol, Beijing, Peoples R China
[3] Fudan Univ, Shanghai, Peoples R China
Keywords
Data center networks; Software-defined networking; Power efficiency; Flow completion time; Deep reinforcement learning; Flow-completion-time; Optimization
DOI
10.1016/j.comnet.2020.107255
CLC Classification
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
Reducing the power consumption of Data Center Networks (DCNs) and guaranteeing the Flow Completion Time (FCT) of applications in DCNs are two major concerns for data center operators. However, existing works cannot achieve both goals together because of two issues: (1) the dynamic traffic patterns in DCNs are hard to model accurately; (2) an optimal flow scheduling scheme is computationally expensive. In this paper, we propose SmartFCT, which couples Deep Reinforcement Learning (DRL) with Software-Defined Networking (SDN) to improve the power efficiency of DCNs while guaranteeing FCT. SmartFCT dynamically collects traffic distributions from switches to train its DRL model. The well-trained DRL agent of SmartFCT can quickly analyze complicated traffic characteristics using neural networks and adaptively generate an action that schedules flows and deliberately configures margins for different links. Following the generated action, flows are consolidated onto a few active links and switches to save power, while fine-grained margin configuration on active links avoids FCT violations under unexpected flow bursts. Simulation results show that SmartFCT guarantees FCT and saves up to 12.2% of power consumption compared with state-of-the-art solutions.
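The control loop the abstract describes (observe the traffic distribution, pick an action that consolidates flows onto few active links, reward power savings while heavily penalizing FCT violations) can be sketched with a toy tabular stand-in. The paper's agent uses deep neural networks and a real SDN controller; the state discretization, link capacities, margin value, and reward weighting below are all illustrative assumptions, not the authors' implementation.

```python
import random

# Toy stand-in for the DRL loop: state = discretized traffic load,
# action = how many links to keep active, reward = -(power + FCT penalty).
N_LINKS = 4
LINK_CAPACITY = 10.0   # hypothetical capacity units per link
MARGIN = 0.1           # headroom reserved on each active link for bursts

def reward(load, active_links):
    usable = active_links * LINK_CAPACITY * (1.0 - MARGIN)
    power_cost = active_links / N_LINKS                       # normalized power draw
    fct_penalty = max(0.0, (load - usable) / LINK_CAPACITY)   # overload stretches FCT
    return -(power_cost + 10.0 * fct_penalty)                 # weight FCT violations heavily

def _state(load):
    # Discretize load in [0, 4 * LINK_CAPACITY) into 10 buckets.
    return min(9, int(load // (4 * LINK_CAPACITY / 10)))

def train(episodes=5000, eps=0.2, alpha=0.5, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(10) for a in range(1, N_LINKS + 1)}
    for _ in range(episodes):
        load = rng.uniform(0, 4 * LINK_CAPACITY)
        s = _state(load)
        if rng.random() < eps:                                # epsilon-greedy exploration
            a = rng.randint(1, N_LINKS)
        else:
            a = max(range(1, N_LINKS + 1), key=lambda x: q[(s, x)])
        # One-step (bandit-style) update; no next-state bootstrap needed here.
        q[(s, a)] += alpha * (reward(load, a) - q[(s, a)])
    return q

def schedule(q, load):
    """Return how many links the trained policy keeps active for this load."""
    s = _state(load)
    return max(range(1, N_LINKS + 1), key=lambda a: q[(s, a)])
```

A light load should be consolidated onto a single active link, while a near-saturating load keeps all links up to avoid FCT violations; the real agent additionally assigns individual flows to links and tunes per-link margins.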
Pages: 9
Related Papers
50 records in total (items 31-40 shown)
  • [31] MetisRL: A Reinforcement Learning Approach for Dynamic Routing in Data Center Networks
    Gao, Yuanning
    Gao, Xiaofeng
    Chen, Guihai
    [J]. DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT II, 2022, : 615 - 622
  • [32] Deep Reinforcement Learning for Trajectory Design and Power Allocation in UAV Networks
    Zhao, Nan
    Cheng, Yiqiang
    Pei, Yiyang
    Liang, Ying-Chang
    Niyato, Dusit
    [J]. ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020
  • [33] Reconfigurable Network Topology Based on Deep Reinforcement Learning in Software-Defined Data-Center Networks
    Yang, Wen
    Guo, Bingli
    Shang, Yu
    Huang, Shanguo
    [J]. 2020 ASIA COMMUNICATIONS AND PHOTONICS CONFERENCE (ACP) AND INTERNATIONAL CONFERENCE ON INFORMATION PHOTONICS AND OPTICAL COMMUNICATIONS (IPOC), 2020
  • [34] A deep reinforcement learning for user association and power control in heterogeneous networks
    Ding, Hui
    Zhao, Feng
    Tian, Jie
    Li, Dongyang
    Zhang, Haixia
    [J]. AD HOC NETWORKS, 2020, 102
  • [35] Improving Energy Efficiency Fairness of Wireless Networks: A Deep Learning Approach
    Lee, Hoon
    Jang, Han Seung
    Jung, Bang Chul
    [J]. ENERGIES, 2019, 12 (22)
  • [36] A Data Forwarding Mechanism based on Deep Reinforcement Learning for Deterministic Networks
    Li, Yuhong
    Zhang, Peng
    Zhou, Yingchao
    Jin, Di
    [J]. IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2020, : 285 - 290
  • [37] DeepEE: Joint Optimization of Job Scheduling and Cooling Control for Data Center Energy Efficiency Using Deep Reinforcement Learning
    Ran, Yongyi
    Hu, Han
    Zhou, Xin
    Wen, Yonggang
    [J]. 2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019), 2019, : 645 - 655
  • [38] A Novel Method for Improving the Training Efficiency of Deep Multi-Agent Reinforcement Learning
    Pan, Yaozong
    Jiang, Haiyang
    Yang, Haitao
    Zhang, Jian
    [J]. IEEE ACCESS, 2019, 7 : 137992 - 137999
  • [39] Improving Sample Efficiency of Example-Guided Deep Reinforcement Learning for Bipedal Walking
    Galljamov, Rustam
    Zhao, Guoping
    Belousov, Boris
    Seyfarth, Andre
    Peters, Jan
    [J]. 2022 IEEE-RAS 21ST INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2022, : 587 - 593
  • [40] Improving exploration efficiency of deep reinforcement learning through samples produced by generative model
    Xu, Dayong
    Zhu, Fei
    Liu, Quan
    Zhao, Peiyao
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2021, 185