Residual Capacity-Aware Virtual Machine Assignment for Reducing Network Loads in Multi-tenant Data Center Networks

Cited by: 2
Authors
Kimura, Tomotaka [1 ]
Suzuki, Takaya [2 ]
Hirata, Kouji [3 ]
Muraguchi, Masahiro [2 ]
Affiliations
[1] Doshisha Univ, Fac Sci & Engn, Kyoto, Japan
[2] Tokyo Univ Sci, Fac Engn, Tokyo, Japan
[3] Kansai Univ, Fac Engn Sci, Osaka, Japan
Funding
Japan Society for the Promotion of Science
Keywords
Data center networks; Multi-tenant data center; Virtual machine assignment; Traffic management; PLACEMENT; ALLOCATION;
DOI
10.1007/s10922-019-09492-1
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper proposes a residual capacity-aware virtual machine (VM) assignment scheme for multi-tenant data center networks. In multi-tenant data centers, tenants submit their resource requirements and the data centers provide VMs that are assigned to physical servers according to those requirements. These VMs communicate with each other to execute distributed processing. The performance of such distributed processing depends on the volume of traffic exchanged among the VMs, because increased traffic causes network congestion, which leads to packet losses and high transmission delays. Therefore, we need an appropriate VM assignment strategy that avoids network congestion in order to satisfy the tenants' requirements. The proposed scheme assigns VMs so as to reduce the network loads caused by traffic injected into the data center network, taking into account the traffic volume among VMs and the residual capacities of the physical servers. Through simulation experiments, we demonstrate that the proposed scheme reduces network loads efficiently.
Source
Journal of Network and Systems Management, 2019, 27: 949-971 (23 pages)
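
The abstract describes the assignment policy only at a high level. As a rough illustration of the general idea, the sketch below greedily places each VM on a physical server that still has residual capacity, preferring the server that minimizes the traffic the VM would exchange with already-placed VMs on other servers. This is an editor's hypothetical example, not the scheme proposed in the paper; the names (Server, assign_vms, traffic) and the slot-based capacity model are assumptions.

# Hypothetical sketch, not the authors' algorithm: greedy residual
# capacity-aware VM placement that tries to co-locate heavily
# communicating VMs while respecting each server's remaining capacity.
from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    capacity: int                      # total VM slots (assumed model)
    vms: list = field(default_factory=list)

    @property
    def residual(self) -> int:
        # Remaining capacity on this physical server.
        return self.capacity - len(self.vms)


def assign_vms(vms, traffic, servers):
    """Place each VM on a feasible server, minimizing the traffic it
    would send across the network to already-placed VMs.

    traffic[(a, b)] is the traffic volume between VMs a and b.
    """
    placement = {}
    # Place VMs with the largest total traffic first, so heavy
    # communicators are more likely to be co-located.
    order = sorted(vms, key=lambda v: -sum(
        traffic.get((v, u), 0) + traffic.get((u, v), 0) for u in vms))
    for vm in order:
        best, best_key = None, None
        for srv in servers:
            if srv.residual <= 0:
                continue
            # Network cost: traffic between this VM and placed VMs
            # that live on *other* servers.
            cost = sum(traffic.get((vm, u), 0) + traffic.get((u, vm), 0)
                       for u, s in placement.items() if s is not srv)
            # Tie-break toward servers with more residual capacity.
            key = (cost, -srv.residual)
            if best is None or key < best_key:
                best, best_key = srv, key
        if best is None:
            raise RuntimeError(f"no server has residual capacity for {vm}")
        best.vms.append(vm)
        placement[vm] = best
    return placement


if __name__ == "__main__":
    servers = [Server("s1", 2), Server("s2", 2)]
    vms = ["vm1", "vm2", "vm3"]
    traffic = {("vm1", "vm2"): 10, ("vm2", "vm3"): 1}
    for vm, srv in assign_vms(vms, traffic, servers).items():
        print(vm, "->", srv.name)

In this toy run, vm1 and vm2 (which exchange the most traffic) end up on the same server, while vm3 is placed on the other server, which is the intuition behind reducing inter-server traffic loads.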