Sockets Direct Protocol over InfiniBand in clusters: Is it beneficial?

Cited by: 12
Authors
Balaji, P [1]
Narravula, S [1]
Vaidyanathan, K [1]
Krishnamoorthy, S [1]
Wu, J [1]
Panda, DK [1]
Affiliation
[1] Ohio State Univ, Columbus, OH 43210 USA
Keywords
DOI
10.1109/ISPASS.2004.1291353
CLC number (Chinese Library Classification)
TP3 [Computing technology; computer technology];
Subject classification code
0812;
Abstract
The Sockets Direct Protocol (SDP) has recently been proposed to enable sockets-based applications to take advantage of the enhanced features provided by the InfiniBand Architecture. In this paper we study the benefits and limitations of an implementation of SDP. We first analyze the performance of SDP using a detailed suite of micro-benchmarks. Next, we evaluate it in two real application domains: (1) a multi-tier data-center environment and (2) the Parallel Virtual File System (PVFS). Our micro-benchmark results show that SDP provides up to 2.7 times better bandwidth than the native sockets implementation over InfiniBand (IPoIB) and significantly better latency for large message sizes. Our experimental results also show that SDP achieves considerably higher performance (an improvement of up to 2.4 times) than IPoIB in the PVFS environment. In the data-center environment, SDP outperforms IPoIB for large file transfers in spite of currently being limited by a high connection setup time. However, this limitation is entirely implementation specific, and as InfiniBand software and hardware products rapidly mature, we expect it to be overcome soon. Based on this, we show that the projected performance of SDP, without the connection setup time, can outperform IPoIB for small message transfers as well.
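To make the transparency claim concrete, below is a minimal sketch (not from the paper) of a plain Berkeley-sockets client in C. The point of SDP is that such an unmodified binary can be redirected over InfiniBand; on an OFED-style stack this is commonly done by preloading an SDP shim library (e.g. libsdp via LD_PRELOAD). The shim name, preload mechanism, and the address/port used here are assumptions about a typical deployment, not details from this paper.

```c
/* Minimal TCP client: plain Berkeley sockets, no SDP-specific code.
 * Assumption (hypothetical deployment, not from the paper): on an
 * OFED-style SDP stack the same binary can be run over SDP with, e.g.,
 *   LD_PRELOAD=libsdp.so ./client 192.168.0.1 5000
 * where the shim rewrites the socket's address family to SDP's. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <server-ip> <port>\n", argv[0]);
        return 1;
    }

    /* AF_INET here; an SDP shim would transparently substitute its own
     * address family, leaving application code untouched. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons((uint16_t)atoi(argv[2]));
    if (inet_pton(AF_INET, argv[1], &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        close(fd);
        return 1;
    }

    /* Connection setup: the step the paper identifies as SDP's current
     * bottleneck for small transfers. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* Data transfer: the phase where the paper measures up to 2.7x the
     * bandwidth of IPoIB when the same calls go over SDP. */
    const char msg[] = "hello over TCP or SDP\n";
    if (write(fd, msg, sizeof(msg) - 1) < 0) perror("write");

    close(fd);
    return 0;
}
```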
Pages: 28-35
Number of pages: 8
Related papers
50 records in total
  • [1] Advanced Flow-control Mechanisms for the Sockets Direct Protocol over InfiniBand
    Balaji, P.
    Bhagvat, S.
    Panda, D. K.
    Thakur, R.
    Gropp, W.
    [J]. 2007 INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING WORKSHOPS (ICPP), 2007: 602+
  • [2] Zero copy sockets direct protocol over InfiniBand - Preliminary implementation and performance analysis
    Goldenberg, D
    Kagan, M
    Ravid, R
    Tsirkin, MS
    [J]. HOT INTERCONNECTS 13, 2005: 128-137
  • [3] A performance analysis of the Sockets Direct Protocol (SDP) with asynchronous I/O over 4X InfiniBand
    Cohen, A
    [J]. CONFERENCE PROCEEDINGS OF THE 2004 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, 2004: 241-246
  • [4] Codesign for InfiniBand Clusters
    Sur, Sayantan
    Potluri, Sreeram
    Kandalla, Krishna
    Subramoni, Hari
    Panda, Dhabaleswar K.
    Tomko, Karen
    [J]. COMPUTER, 2011, 44 (11): 31-36
  • [5] Operating two InfiniBand grid clusters over 28 km distance
    Richling, Sabine
    Hau, Steffen
    Kredel, Heinz
    Kruse, Hans-Guenther
    [J]. INTERNATIONAL JOURNAL OF GRID AND UTILITY COMPUTING, 2011, 2 (04): 303-312
  • [6] High Performance MPI Library over SR-IOV Enabled InfiniBand Clusters
    Zhang, Jie
    Lu, Xiaoyi
    Jose, Jithin
    Li, Mingzhe
    Shi, Rong
    Panda, Dhabaleswar K.
    [J]. 2014 21ST INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING (HIPC), 2014
  • [7] Offloaded GPU Collectives using CORE-Direct and CUDA Capabilities on InfiniBand Clusters
    Venkatesh, A.
    Hamidouche, K.
    Subramoni, H.
    Panda, Dhabaleswar K.
    [J]. 2015 IEEE 22ND INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING (HIPC), 2015: 234-243
  • [8] Software suites direct InfiniBand connections
    Wong, W
    [J]. ELECTRONIC DESIGN, 2001, 49 (15): 50-51
  • [10] Implementing an OpenMP execution environment on InfiniBand clusters
    Tao, Jie
    Karl, Wolfgang
    Trinitis, Carsten
    [J]. OPENMP SHARED MEMORY PARALLEL PROGRAMMING, PROCEEDINGS, 2008, 4315: 65+