High performance MPI-2 one-sided communication over InfiniBand

Cited by: 0
Authors
Jiang, WH [1 ]
Liu, JX [1 ]
Jin, HW [1 ]
Panda, DK [1 ]
Gropp, W [1 ]
Thakur, R [1 ]
Affiliations
[1] Ohio State Univ, Columbus, OH 43210 USA
Keywords
DOI
Not available
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Many existing MPI-2 one-sided communication implementations are built on top of MPI send/receive operations. Although this approach can achieve good portability, it suffers from high communication overhead and depends on the remote process for communication progress. To address these problems, we propose a high performance MPI-2 one-sided communication design over the InfiniBand Architecture. In our design, MPI-2 one-sided communication operations such as MPI_Put, MPI_Get and MPI_Accumulate are directly mapped to InfiniBand Remote Direct Memory Access (RDMA) operations. Our design has been implemented based on MPICH2 over InfiniBand. We present detailed design issues for this approach and run a set of micro-benchmarks to characterize different aspects of its performance. Our performance evaluation shows that, compared with the design based on MPI send/receive, our design improves throughput by up to 77% and reduces latency and synchronization overhead by up to 19% and 13%, respectively. Under certain process skew, the new design reduces the adverse performance impact significantly, from 41% to nearly 0%. It also achieves better overlap of communication and computation.
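For readers unfamiliar with the operations the abstract names, the following is a minimal sketch (not taken from the paper) of an MPI-2 one-sided transfer: an MPI_Put under MPI_Win_fence active-target synchronization, the style of operation the described design maps directly to InfiniBand RDMA writes. The buffer names and size (win_buf, local_buf, COUNT) are illustrative assumptions.

/* Illustrative sketch, not the authors' code: one MPI-2 one-sided
 * transfer using MPI_Put with fence synchronization. */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024  /* illustrative transfer size */

int main(int argc, char **argv)
{
    int rank, nprocs;
    double *win_buf;          /* memory exposed to remote access */
    double local_buf[COUNT];  /* origin-side source data */
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Allocate the window memory and expose it on every process. */
    MPI_Alloc_mem(COUNT * sizeof(double), MPI_INFO_NULL, &win_buf);
    MPI_Win_create(win_buf, COUNT * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    for (int i = 0; i < COUNT; i++)
        local_buf[i] = rank + i;

    /* Active-target synchronization: the access epoch is opened and
     * closed collectively; the target posts no matching receive. */
    MPI_Win_fence(0, win);
    if (rank == 0 && nprocs > 1) {
        /* Write local_buf into process 1's window at offset 0. */
        MPI_Put(local_buf, COUNT, MPI_DOUBLE,
                1, 0, COUNT, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 received win_buf[0] = %f\n", win_buf[0]);

    MPI_Win_free(&win);
    MPI_Free_mem(win_buf);
    MPI_Finalize();
    return 0;
}

Because no receive is posted at the target, a send/receive-based implementation of MPI_Put depends on the target process entering the MPI library to make progress; mapping the operation to an RDMA write, as the paper proposes, removes that dependency.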
Pages: 531-538
Number of pages: 8