Beyond Explicit Transfers: Shared and Managed Memory in OpenMP

Times Cited: 0
Authors
Neth, Brandon [1 ]
Scogland, Thomas R. W. [2 ]
Duran, Alejandro [3 ]
de Supinski, Bronis R. [2 ]
Affiliations
[1] Univ Arizona, Tucson, AZ 85721 USA
[2] Lawrence Livermore Natl Lab, Livermore, CA 94550 USA
[3] Intel Corp, Iberia, Spain
DOI
10.1007/978-3-030-85262-7_13
Chinese Library Classification: TP3 [Computing technology, computer technology]
Discipline Classification Code: 0812
Abstract
OpenMP began supporting offloading in version 4.0, almost 10 years ago. It introduced the offload programming model for GPUs and other accelerators that was common at the time, requiring users to explicitly transfer data between the host and devices. But advances in heterogeneous computing and programming systems have created a new environment. Programmers are no longer required to track and move their data on their own: for those who want it, inter-device address mapping and other runtime systems push these data management tasks behind a veil of abstraction. In the context of this progress, OpenMP's offloading support shows signs of its age. However, because of its ubiquity as a standard for portable, parallel code, OpenMP is well positioned to provide a similar standard for heterogeneous programming. Towards this goal, we review the features available in other programming systems and argue that OpenMP should expand its offloading support to better meet the expectations of modern programmers. The first step, detailed here, augments OpenMP's existing memory space abstraction with device awareness and a concept of shared and managed memory. Thus, users can allocate memory that is accessible to different combinations of devices without requiring explicit memory transfers. We show the potential performance impact of this feature and discuss the possible downsides.
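Illustrative sketch (not taken from the paper): the abstract contrasts today's explicit-transfer offloading with allocation from shared or managed memory spaces. The first part of the sketch below uses the standard OpenMP map clause that the explicit model requires; the commented second part is a hypothetical rendering of the proposed direction, where the memory-space name ompx_shared_mem_space is invented for illustration and is not part of any OpenMP specification.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const int n = 1 << 20;

        /* Explicit-transfer model (current OpenMP): the map(tofrom:) clause
           copies the array to the device before the target region and back
           to the host afterwards. */
        double *a = (double *)malloc(n * sizeof(double));
        for (int i = 0; i < n; ++i) a[i] = 1.0;

        #pragma omp target teams distribute parallel for map(tofrom: a[0:n])
        for (int i = 0; i < n; ++i)
            a[i] *= 2.0;

        printf("a[0] = %g\n", a[0]);
        free(a);

        /* Hypothetical sketch of the direction the abstract describes
           (NOT standard OpenMP): an allocator bound to a memory space that
           both host and device can access, making the map clause unnecessary.
           The name ompx_shared_mem_space is invented for illustration.

           omp_allocator_handle_t shared_alloc =
               omp_init_allocator(ompx_shared_mem_space, 0, NULL);
           double *b = (double *)omp_alloc(n * sizeof(double), shared_alloc);
           #pragma omp target teams distribute parallel for
           for (int i = 0; i < n; ++i) b[i] *= 2.0;
           omp_free(b, shared_alloc);
           omp_destroy_allocator(shared_alloc);                            */

        return 0;
    }

For comparison, OpenMP 5.0 already offers the coarse-grained requires unified_shared_memory directive, which asserts that all host allocations are device-accessible; the proposal summarized in the abstract instead makes accessibility a per-allocation property of memory spaces.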
Pages: 183 - 194
Page count: 12
Related Papers (50 records)
  • [1] Shared Memory OpenMP Parallelization of Explicit MPM and Its Application to Hypervelocity Impact
    Huang, P.
    Zhang, X.
    Ma, S.
    Wang, H. K.
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2008, 38 (02): 119 - 147
  • [2] A shared memory benchmark in OpenMP
    Müller, Matthias S.
    (Springer Verlag)
  • [3] Teaching Shared Memory Parallel Concepts with OpenMP
    Adams, Joel
    Brown, Richard
    Shoop, Elizabeth
    PROCEEDINGS OF THE 45TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION (SIGCSE'14), 2014: 743 - 743
  • [4] OpenMP: shared-memory parallelism from the ashes
    Kuck & Associates Inc
    COMPUTER, 1999, 32 (05): 108 - 109
  • [5] OpenMP: Shared-memory parallelism from the ashes
    Throop, J
    COMPUTER, 1999, 32 (05) : 108 - 109
  • [6] OpenMP vs. MPI on a shared memory multiprocessor
    Behrens, J
    Haan, O
    Kornblueh, L
    PARALLEL COMPUTING: SOFTWARE TECHNOLOGY, ALGORITHMS, ARCHITECTURES AND APPLICATIONS, 2004, 13 : 177 - 183
  • [7] Scientific programming - Shared-memory programming with OpenMP
    Still, CH
    Langer, SH
    Alley, WE
    Zimmerman, GB
    COMPUTERS IN PHYSICS, 1998, 12 (06): 577 - 584
  • [8] Performance comparison of MPI and OpenMP on shared memory multiprocessors
    Krawezik, G
    Cappello, F
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2006, 18 (01): 29 - 61
  • [9] Parallel molecular dynamics using OPENMP on a shared memory machine
    Couturier, R
    Chipot, C
    COMPUTER PHYSICS COMMUNICATIONS, 2000, 124 (01) : 49 - 59
  • [10] Optimizing OpenMP Programs on Software Distributed Shared Memory Systems
    Seung-Jai Min
    Ayon Basumallik
    Rudolf Eigenmann
    International Journal of Parallel Programming, 2003, 31 : 225 - 249