Optimizing and Auto-tuning Belief Propagation on the GPU

Cited by: 5
Authors
Grauer-Gray, Scott [1 ]
Cavazos, John [1 ]
Institution
[1] Univ Delaware, Newark, DE 19716 USA
Keywords
DOI
10.1007/978-3-642-19595-2_9
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
A CUDA kernel will utilize high-latency local memory for storage when there are not enough registers to hold the required data, or when the data is an array accessed with a variable index inside a loop. Because accesses to local memory are slower than accesses to registers and shared memory, it is desirable to minimize local memory use. This paper analyzes strategies for reducing local memory use in a CUDA implementation of belief propagation for stereo processing. We experiment with registers and shared memory as alternate locations for data initially placed in local memory, and then develop a hybrid implementation that allows the programmer to store an adjustable amount of data across shared, register, and local memory. We report results from running our optimized implementations on two different stereo sets across three generations of NVIDIA GPUs, and introduce an auto-tuning implementation that generates an optimized belief propagation implementation for any input stereo set on any CUDA-capable GPU.
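The local-memory behavior the abstract describes can be sketched in a pair of toy kernels. This is an illustrative assumption, not the authors' code: the names (`NUM_VALUES`, `messageCosts`, both kernel names) are hypothetical, and the reduction is a stand-in for the per-pixel message computation in belief propagation. The first kernel uses a per-thread array indexed by a loop variable, which the CUDA compiler typically spills to high-latency local memory; the second stages the same data in shared memory so accesses stay on-chip.

```cuda
#include <cuda_runtime.h>

#define NUM_VALUES 16  // illustrative per-thread array size

// Variable-index access within the loop usually prevents register
// promotion, so messageCosts lands in high-latency local memory.
__global__ void spillsToLocal(const float* in, float* out)
{
    float messageCosts[NUM_VALUES];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    for (int i = 0; i < NUM_VALUES; ++i)
        messageCosts[i] = in[tid * NUM_VALUES + i];
    float best = messageCosts[0];
    for (int i = 1; i < NUM_VALUES; ++i)
        best = fminf(best, messageCosts[i]);
    out[tid] = best;
}

// Same computation, but each thread keeps its NUM_VALUES entries in a
// slice of dynamically sized shared memory: accesses stay on-chip, at
// the cost of per-block shared-memory capacity (which bounds block size).
__global__ void usesSharedMemory(const float* in, float* out)
{
    extern __shared__ float smem[];
    float* messageCosts = &smem[threadIdx.x * NUM_VALUES];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    for (int i = 0; i < NUM_VALUES; ++i)
        messageCosts[i] = in[tid * NUM_VALUES + i];
    float best = messageCosts[0];
    for (int i = 1; i < NUM_VALUES; ++i)
        best = fminf(best, messageCosts[i]);
    out[tid] = best;
}
```

The shared-memory variant would be launched with an explicit dynamic shared-memory size, e.g. `usesSharedMemory<<<blocks, threads, threads * NUM_VALUES * sizeof(float)>>>(in, out)`; compiling with `nvcc --ptxas-options=-v` reports the per-thread local-memory (spill) usage that distinguishes the two versions.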
Pages: 121-135
Page count: 15
Related Papers
50 records in total
  • [41] Auto-tuning of cascade control systems
    Song, SH
    Cai, WJ
    Wang, YG
    ISA TRANSACTIONS, 2003, 42 (01) : 63 - 72
  • [42] Auto-tuning of a tunable structural insert
    Harland, NR
    Mace, BR
    Jones, RW
    NOISE AND VIBRATION ENGINEERING, VOLS 1 - 3, PROCEEDINGS, 2001, : 77 - 84
  • [43] Auto-tuning Kernel Mean Matching
    Miao, Yun-Qian
    Farahat, Ahmed K.
    Kamel, Mohamed S.
    2013 IEEE 13TH INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2013, : 560 - 567
  • [44] Threshold Auto-Tuning Metric Learning
    Rivero, Rachelle
    Onuma, Yuya
    Kato, Tsuyoshi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (06) : 1163 - 1170
  • [45] A Note on Auto-tuning GEMM for GPUs
    Li, Yinan
    Dongarra, Jack
    Tomov, Stanimire
    COMPUTATIONAL SCIENCE - ICCS 2009, PART I, 2009, 5544 : 884 - 892
  • [46] Auto-Tuning Active Queue Management
    Novak, Joe H.
    Kasera, Sneha Kumar
    2017 9TH INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS AND NETWORKS (COMSNETS), 2017, : 136 - 143
  • [47] Auto-Tuning TRSM with an Asynchronous Task Assignment Model on Multicore, Multi-GPU and Coprocessor systems
    Pinto, Clicia
    Barreto, Marcos
    Boratto, Murilo
    2016 IEEE/ACS 13TH INTERNATIONAL CONFERENCE OF COMPUTER SYSTEMS AND APPLICATIONS (AICCSA), 2016,
  • [48] Going green: optimizing GPUs for energy efficiency through model-steered auto-tuning
    Schoonhoven, Richard
    Veenboer, Bram
    van Werkhoven, Ben
    Batenburg, K. Joost
    2022 IEEE/ACM INTERNATIONAL WORKSHOP ON PERFORMANCE MODELING, BENCHMARKING AND SIMULATION OF HIGH PERFORMANCE COMPUTER SYSTEMS (PMBS), 2022, : 48 - 59
  • [49] An Architecture for Flexible Auto-Tuning: The Periscope Tuning Framework 2.0
    Mijakovic, Robert
    Firbach, Michael
    Gerndt, Michael
    2016 2ND INTERNATIONAL CONFERENCE ON GREEN HIGH PERFORMANCE COMPUTING (ICGHPC), 2016,
  • [50] A History-Based Auto-Tuning Framework for Fast and High-Performance DNN Design on GPU
    Mu, Jiandong
    Wang, Mengdi
    Li, Lanbo
    Yang, Jun
    Lin, Wei
    Zhang, Wei
    PROCEEDINGS OF THE 2020 57TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2020,