DxPU: Large-scale Disaggregated GPU Pools in the Datacenter

Cited: 0
|
Authors
He, Bowen [1 ,2 ]
Zheng, Xiao [2 ]
Chen, Yuan [1 ,2 ]
Li, Weinan [2 ]
Zhou, Yajin [1 ]
Long, Xin [2 ]
Zhang, Pengcheng [2 ]
Lu, Xiaowei [2 ]
Jiang, Linquan [2 ]
Liu, Qiang [2 ]
Cai, Dennis [2 ]
Zhang, Xiantao [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Clouds; clusters; data centers;
DOI
10.1145/3617995
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
The rapid adoption of AI and the convenience offered by cloud services have resulted in growing demand for GPUs in the cloud. Generally, GPUs are physically attached to host servers as PCIe devices. However, this fixed pairing of host servers and GPUs is highly inefficient for resource utilization, upgrades, and maintenance. To address these issues, the GPU disaggregation technique has been proposed to decouple GPUs from host servers: it aggregates GPUs into a pool and allocates GPU node(s) according to user demand. However, existing GPU disaggregation systems have flaws in software-hardware compatibility, disaggregation scope, and capacity. In this article, we present a new implementation of datacenter-scale GPU disaggregation, named DxPU. DxPU efficiently solves the above problems and can flexibly allocate as many GPU nodes as users demand. To understand the performance overhead incurred by DxPU, we build a performance model for AI-specific workloads. Guided by the modeling results, we develop a prototype system, which has been deployed in the datacenter of a leading cloud provider for a test run. We also conduct detailed experiments to evaluate the performance overhead caused by our system. The results show that the overhead of DxPU is less than 10%, compared with native GPU servers, in most user scenarios.
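To make the pooling idea in the abstract concrete, the following is a minimal, purely illustrative sketch of a disaggregated GPU pool that attaches GPU nodes to host servers on demand and returns them to the pool on release. The class and method names (`GpuPool`, `allocate`, `release`) are hypothetical and are not part of the DxPU system or its API; this only models the allocation bookkeeping, not the PCIe-level decoupling the paper describes.

```python
class GpuPool:
    """Toy model of a datacenter-wide pool of GPU nodes decoupled from hosts."""

    def __init__(self, num_gpus: int):
        self.free = set(range(num_gpus))  # GPU node IDs currently unassigned
        self.assigned = {}                # host name -> set of GPU node IDs

    def allocate(self, host: str, count: int) -> set:
        """Attach `count` GPU nodes from the pool to `host`; return their IDs."""
        if count > len(self.free):
            raise RuntimeError("GPU pool exhausted")
        gpus = {self.free.pop() for _ in range(count)}
        self.assigned.setdefault(host, set()).update(gpus)
        return gpus

    def release(self, host: str) -> None:
        """Detach all GPU nodes from `host` and return them to the pool."""
        self.free |= self.assigned.pop(host, set())
```

A usage example: a pool of 8 GPU nodes can serve a 3-GPU request from one host and, once that host releases them, reuse the same nodes for another host — the flexibility that a fixed host-GPU assembly cannot offer.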
Pages: 23
Related Papers
50 items in total
  • [31] Large-Scale Graph Processing on Multi-GPU Platforms
    Zhang H.
    Zhang L.
    Wu Y.
Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2018, 55 (02): 273 - 288
  • [32] A multi-GPU algorithm for large-scale neuronal networks
    de Camargo, Raphael Y.
    Rozante, Luiz
    Song, Siang W.
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2011, 23 (06): 556 - 572
  • [33] Power Control for GPU Clusters in Processing Large-scale Streams
    Chen, Qingkui
    Wang, Haifeng
    Liu, Bocheng
    JOURNAL OF COMPUTERS, 2013, 8 (10) : 2489 - 2496
  • [34] Efficient and Large-Scale Dissipative Particle Dynamics Simulations on GPU
    Yang, Keda
    Bai, Zhiqiang
    Su, Jiaye
    Guo, Hongxia
    SOFT MATERIALS, 2014, 12 (02) : 185 - 196
  • [35] A New GPU Bundle Adjustment Method for Large-Scale Data
    Zheng Maoteng
    Zhou Shunping
    Xiong Xiaodong
    Zhu Junfeng
PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, 2017, 83 (09): 633 - 641
  • [36] Collective behavior of large-scale neural networks with GPU acceleration
    Qu, Jingyi
    Wang, Rubin
    COGNITIVE NEURODYNAMICS, 2017, 11 (06) : 553 - 563
  • [37] LARGE-SCALE PARALLEL MULTIBODY DYNAMICS WITH FRICTIONAL CONTACT ON THE GPU
    Negrut, Dan
    Tasora, Alessandro
    Anitescu, Mihai
    PROCEEDINGS OF THE ASME DYNAMIC SYSTEMS AND CONTROL CONFERENCE 2008, PTS A AND B, 2009, : 347 - 354
  • [38] Large-Scale Welding Process Simulation by GPU Parallelized Computing
    Huang, H.
    Chen, J.
    Feng, Z.
    Wang, H-P
    Cal, W.
    Carlson, B. E.
    WELDING JOURNAL, 2021, 100 (11) : 359S - 370S
  • [39] PARALLEL SIMULATION OF LARGE-SCALE ARTIFICIAL SOCIETY WITH GPU AS COPROCESSOR
    Guo, Gang
    Chen, Bin
    Qiu, Xiaogang
    INTERNATIONAL JOURNAL OF MODELING SIMULATION AND SCIENTIFIC COMPUTING, 2013, 4 (02)
  • [40] Efficient Large-scale Approximate Nearest Neighbor Search on the GPU
    Wieschollek, Patrick
    Wang, Oliver
    Sorkine-Hornung, Alexander
    Lensch, Hendrik P. A.
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 2027 - 2035