Interoperability strategies for GASPI and MPI in large-scale scientific applications

Cited: 0
Authors
Simmendinger, Christian [1 ]
Iakymchuk, Roman [2 ]
Cebamanos, Luis [5 ]
Akhmetova, Dana [2 ]
Bartsch, Valeria [6 ]
Rotaru, Tiberiu [7 ]
Rahn, Mirko [6 ]
Laure, Erwin [3 ,4 ]
Markidis, Stefano [3 ]
Affiliations
[1] T-Systems Solutions for Research, Stuttgart, Germany
[2] KTH Royal Institute of Technology, Lindstedtsvägen 5, S-10044 Stockholm, Sweden
[3] KTH Royal Institute of Technology, High Performance Computing, Stockholm, Sweden
[4] KTH Royal Institute of Technology, PDC Center for High Performance Computing, Stockholm, Sweden
[5] University of Edinburgh, EPCC, Edinburgh, Midlothian, Scotland
[6] Fraunhofer ITWM, HPC Department, Kaiserslautern, Germany
[7] Fraunhofer ITWM, Kaiserslautern, Germany
Funding
EU Horizon 2020
Keywords
Interoperability; GASPI; MPI; iPIC3D; Ludwig; MiniGhost; halo exchange; Allreduce; PHYSICS; PLASMA;
DOI
10.1177/1094342018808359
CLC classification number
TP3 [Computing technology, computer technology]
Discipline classification code
0812
Abstract
One of the main hurdles for partitioned global address space (PGAS) approaches is the dominance of the message passing interface (MPI), which as a de facto standard appears in the code base of many applications. To take advantage of PGAS APIs such as the global address space programming interface (GASPI) without a major change to the code base, interoperability between MPI and PGAS approaches needs to be ensured. In this article, we consider an interoperable GASPI/MPI implementation for the communication- and performance-critical parts of the Ludwig and iPIC3D applications. To address the discovered performance limitations, we develop a novel strategy for significantly improved performance and interoperability between both APIs by leveraging GASPI shared windows and shared notifications. First results with a corresponding implementation in the MiniGhost proxy application and the Allreduce collective operation demonstrate the viability of this approach.
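Illustrative sketch (not from the paper): the abstract describes running GASPI alongside MPI in the same application. The plain C sketch below shows the basic mixed-mode pattern that such interoperability relies on: MPI is initialized first, GASPI is started on top of it, a one-sided gaspi_write_notify models a halo-style exchange, and MPI collectives such as MPI_Allreduce remain available. It assumes a GPI-2 installation built with MPI interoperability so that GASPI and MPI ranks coincide; segment ids, buffer sizes, and the ring neighbour are arbitrary illustrative choices, the paper's shared-window and shared-notification extensions are not shown, and error checking is omitted for brevity.

/* Minimal mixed-mode GASPI/MPI sketch (assumes GPI-2 built with
 * MPI interoperability, so GASPI ranks coincide with MPI ranks). */
#include <mpi.h>
#include <GASPI.h>

#define SEG_ID     0       /* illustrative segment id   */
#define HALO_BYTES 1024    /* illustrative halo size    */
#define NOTIF_ID   0       /* illustrative notification */

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);            /* MPI first (mixed mode)      */
  gaspi_proc_init(GASPI_BLOCK);      /* then GASPI on top of it     */

  gaspi_rank_t rank, nprocs;
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&nprocs);

  /* One RDMA-capable segment: first half send buffer, second half halo. */
  gaspi_segment_create(SEG_ID, 2 * HALO_BYTES, GASPI_GROUP_ALL,
                       GASPI_BLOCK, GASPI_ALLOC_DEFAULT);

  gaspi_rank_t right = (rank + 1) % nprocs;

  /* One-sided write of the halo plus a notification the target waits on. */
  gaspi_write_notify(SEG_ID, 0,            /* local segment, offset  */
                     right,                /* target rank            */
                     SEG_ID, HALO_BYTES,   /* remote segment, offset */
                     HALO_BYTES,           /* size in bytes          */
                     NOTIF_ID, 1,          /* notification id, value */
                     0, GASPI_BLOCK);      /* queue 0, block         */

  /* Wait until the neighbour's halo has arrived, then reset the flag. */
  gaspi_notification_id_t got;
  gaspi_notification_t    val;
  gaspi_notify_waitsome(SEG_ID, NOTIF_ID, 1, &got, GASPI_BLOCK);
  gaspi_notify_reset(SEG_ID, got, &val);

  gaspi_wait(0, GASPI_BLOCK);              /* flush queue 0          */

  /* MPI stays usable for the rest of the application, e.g. a reduction. */
  int local = (int)rank, global = 0;
  MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  gaspi_proc_term(GASPI_BLOCK);
  MPI_Finalize();
  return 0;
}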
Pages: 554-568
Number of pages: 15