OpenMP-oriented applications for distributed shared memory architectures

Cited by: 6
Authors
Marowka, A [1 ]
Liu, ZY [1 ]
Chapman, B [1 ]
Affiliation
[1] Univ Houston, Dept Comp Sci, Houston, TX 77204 USA
Keywords
OpenMP; data locality; NAS parallel benchmarks; programming model
DOI
10.1002/cpe.752
Chinese Library Classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show that it is capable of becoming the model of choice for large-scale high-performance parallel computing in the coming decade. The main stumbling block to adapting OpenMP for distributed shared memory (DSM) machines, which are based on architectures such as cc-NUMA, is the lack of a mechanism for placing data among processors and threads to achieve data locality. The absence of such a mechanism causes remote memory accesses and inefficient use of cache memory, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside-copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. The technique lets the programmer distribute data manually without giving up control and flexibility, and is thus an alternative to automatic and implicit approaches. Moreover, the CC approach builds on the OpenMP-SPMD style of programming, which makes the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves the performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright (C) 2004 John Wiley & Sons, Ltd.
Pages: 371-384 (14 pages)
Related papers
50 records total
  • [41] Prospects for optical interconnects in distributed, shared-memory organized MIMD architectures
    Frietman, EEE
    Ernst, RJ
    Crosbie, R
    Shimoji, M
    JOURNAL OF SUPERCOMPUTING, 1999, 14 (02): 107-128
  • [42] Binding Nested OpenMP Programs on Hierarchical Memory Architectures
    Schmidl, Dirk
    Terboven, Christian
    an Mey, Dieter
    Buecker, Martin
    BEYOND LOOP LEVEL PARALLELISM IN OPENMP: ACCELERATORS, TASKING AND MORE, PROCEEDINGS, 2010, 6132 : 29 - +
  • [43] Synthesis of heterogeneous distributed architectures for memory-intensive applications
    Huang, C
    Ravi, S
    Raghunathan, A
    Jha, NK
    ICCAD-2003: IEEE/ACM DIGEST OF TECHNICAL PAPERS, 2003: 46-53
  • [44] OpenMP: shared-memory parallelism from the ashes
    Kuck & Associates Inc
    Computer, 5: 108-109
  • [45] Beyond Explicit Transfers: Shared and Managed Memory in OpenMP
    Neth, Brandon
    Scogland, Thomas R. W.
    Duran, Alejandro
    de Supinski, Bronis R.
    OPENMP: ENABLING MASSIVE NODE-LEVEL PARALLELISM, IWOMP 2021, 2021, 12870 : 183 - 194
  • [46] OpenMP: Shared-memory parallelism from the ashes
    Throop, J
    COMPUTER, 1999, 32 (05): 108-109
  • [47] OpenMP vs. MPI on a shared memory multiprocessor
    Behrens, J
    Haan, O
    Kornblueh, L
    PARALLEL COMPUTING: SOFTWARE TECHNOLOGY, ALGORITHMS, ARCHITECTURES AND APPLICATIONS, 2004, 13: 177-183
  • [48] Scientific programming - Shared-memory programming with OpenMP
    Still, CH
    Langer, SH
    Alley, WE
    Zimmerman, GB
    COMPUTERS IN PHYSICS, 1998, 12 (06): 577-584
  • [49] Performance comparison of MPI and OpenMP on shared memory multiprocessors
    Krawezik, G
    Cappello, F
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2006, 18 (01): 29-61
  • [50] Scheduling loop applications in software distributed shared memory systems
    Liang, Tyng-Yeu
    Shieh, Ce-Kuen
    Liu, Deh-Cheng
    IEICE Transactions on Information and Systems, 2000, E83-D (09): 1721-1730