CUDA-NP: Realizing Nested Thread-Level Parallelism in GPGPU Applications

Cited by: 11
Authors
Yang, Yi [1]
Li, Chao [2]
Zhou, Huiyang [2]
Affiliations
[1] NEC Labs Amer, Dept Comp Syst Architecture, Princeton, NJ 08540 USA
[2] N Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27606 USA
Funding
U.S. National Science Foundation
Keywords
GPGPU; nested parallelism; compiler; local memory; OPENMP; PERFORMANCE; COMPILER; OPTIMIZATION; FRAMEWORK; DESIGN
DOI
10.1007/s11390-015-1500-y
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Parallel programs consist of a series of code sections with different degrees of thread-level parallelism (TLP). As a result, it is common for a thread in a parallel program, such as a GPU kernel in a CUDA program, to contain both sequential code and parallel loops. To leverage such parallel loops, the latest NVIDIA Kepler architecture introduces dynamic parallelism, which allows a GPU thread to launch another GPU kernel, thereby reducing the overhead of launching kernels from a CPU. However, with dynamic parallelism, a parent thread can only communicate with its child threads through global memory, and the overhead of launching GPU kernels is non-trivial even within GPUs. In this paper, we first study a set of GPGPU benchmarks that contain parallel loops, and show that these benchmarks do not have a very high loop count or a high degree of TLP. Consequently, the benefits of leveraging such parallel loops using dynamic parallelism are too limited to offset its overhead. We then present our proposed solution to exploit nested parallelism in CUDA, referred to as CUDA-NP. With CUDA-NP, we initially enable a high number of threads when a GPU program starts, and use control flow to activate different numbers of threads for different code sections. We implement our proposed CUDA-NP framework using a directive-based compiler approach. For a GPU kernel, an application developer only needs to add OpenMP-like pragmas to parallelizable code sections; our CUDA-NP compiler then automatically generates the optimized GPU kernels. It supports both the reduction and the scan primitives, explores different ways to distribute parallel loop iterations across threads, and efficiently manages on-chip resources. Our experiments show that for a set of GPGPU benchmarks, which have already been optimized and contain nested parallelism, our proposed CUDA-NP framework further improves performance by up to 6.69 times, and by 2.01 times on average.
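To make the abstract's transformation concrete, the following is a minimal hypothetical sketch of the idea: a kernel with a pragma-annotated parallel loop, and a transformed version in which extra threads are launched up front and control flow activates them per code section. The pragma syntax, the factor NP_FACTOR, and the kernel names are illustrative assumptions, not the paper's actual interface, and the real CUDA-NP compiler additionally handles reductions, scans, synchronization, and on-chip resource management.

```cuda
// Before: one thread per task; the inner loop runs sequentially
// and is marked with an OpenMP-like directive (syntax illustrative).
__global__ void kernel_before(float *data, int n, int len) {
    int task = blockIdx.x * blockDim.x + threadIdx.x;
    if (task >= n) return;
    #pragma np parallel for   // hypothetical CUDA-NP-style annotation
    for (int i = 0; i < len; i++)
        data[task * len + i] *= 2.0f;
}

// After: NP_FACTOR threads are enabled per task when the kernel starts;
// control flow decides which threads are active in each code section.
#define NP_FACTOR 4
__global__ void kernel_after(float *data, int n, int len) {
    int tid   = blockIdx.x * blockDim.x + threadIdx.x;
    int task  = tid / NP_FACTOR;   // which task this thread group serves
    int slave = tid % NP_FACTOR;   // this thread's position in the group
    if (task < n) {
        // Sequential section: only the group's master thread executes it.
        if (slave == 0) { /* per-task sequential code */ }
        // Parallel loop: iterations distributed cyclically across the group.
        for (int i = slave; i < len; i += NP_FACTOR)
            data[task * len + i] *= 2.0f;
    }
}
```

The transformed kernel must be launched with NP_FACTOR times as many threads as the original, which is the trade-off the abstract describes: extra threads sit behind control flow during sequential sections instead of being spawned via dynamic parallelism.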
Pages: 3-19
Number of Pages: 17