Optimizing and Auto-tuning Belief Propagation on the GPU

Cited by: 5
Authors
Grauer-Gray, Scott [1]
Cavazos, John [1]
Institution
[1] Univ Delaware, Newark, DE 19716 USA
Keywords
DOI
10.1007/978-3-642-19595-2_9
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology]
Discipline code
0812
Abstract
A CUDA kernel will utilize high-latency local memory for storage when there are not enough registers to hold the required data, or when the data is an array accessed using a variable index within a loop. Because accesses to local memory take longer than accesses to registers and shared memory, it is desirable to minimize its use. This paper analyzes strategies for reducing local memory use in a CUDA implementation of belief propagation for stereo processing. We perform experiments using registers as well as shared memory as alternate locations for data initially placed in local memory, and then develop a hybrid implementation that allows the programmer to store an adjustable amount of data across shared, register, and local memory. We show results of running our optimized implementations on two different stereo sets and across three generations of NVIDIA GPUs, and introduce an auto-tuning implementation that generates an optimized belief propagation implementation for any input stereo set on any CUDA-capable GPU.
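The spill behavior the abstract describes can be sketched as follows. This is an illustrative example, not code from the paper: the kernel names, the `NUM_DISP` constant, and the min-reduction workload are assumptions chosen to show the contrast between a per-thread array that the compiler may place in local memory and the same data staged in shared memory.

```cuda
#include <cuda_runtime.h>

#define NUM_DISP 16  // illustrative per-thread array size (e.g., disparity count)

// Variant A: a per-thread array indexed inside a loop. When the compiler
// cannot resolve every index at compile time, the array may be placed in
// high-latency local memory rather than registers.
__global__ void reduceLocal(const float* in, float* out, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;
    float cost[NUM_DISP];                      // candidate for local-memory spill
    for (int d = 0; d < NUM_DISP; ++d)
        cost[d] = in[tid * NUM_DISP + d];      // variable index within a loop
    float best = cost[0];
    for (int d = 1; d < NUM_DISP; ++d)
        best = fminf(best, cost[d]);
    out[tid] = best;
}

// Variant B: stage the same per-thread slice in on-chip shared memory,
// in the spirit of the shared-memory alternative the paper evaluates.
// Launch with blockDim.x * NUM_DISP * sizeof(float) dynamic shared memory.
__global__ void reduceShared(const float* in, float* out, int n) {
    extern __shared__ float smem[];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;
    float* cost = &smem[threadIdx.x * NUM_DISP];  // this thread's slice
    for (int d = 0; d < NUM_DISP; ++d)
        cost[d] = in[tid * NUM_DISP + d];          // low-latency on-chip storage
    float best = cost[0];
    for (int d = 1; d < NUM_DISP; ++d)
        best = fminf(best, cost[d]);
    out[tid] = best;
}
```

Whether variant A actually spills depends on the compiler (full unrolling of a constant-trip loop can keep the array in registers), which is precisely the kind of trade-off an auto-tuner can explore per GPU generation.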
Pages: 121-135 (15 pages)
Related papers
(50 in total)
  • [21] Scalable Auto-Tuning of Synthesis Parameters for Optimizing High-Performance Processors
    Ziegler, Matthew M.
    Liu, Hung-Yi
    Carloni, Luca P.
    ISLPED '16: PROCEEDINGS OF THE 2016 INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, 2016, : 180 - 185
  • [22] Optimizing and Auto-Tuning Iterative Stencil Loops for GPUs with the In-Plane Method
    Tang, Wai Teng
    Tan, Wen Jun
    Krishnamoorthy, Ratna
    Wong, Yi Wen
    Kuo, Shyh-Hao
    Goh, Rick Siow Mong
    Turner, Stephen John
    Wong, Weng-Fai
    IEEE 27TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS 2013), 2013, : 452 - 462
  • [23] A Fine-grained Prefetching Scheme for DGEMM Kernels on GPU with Auto-tuning Compatibility
    Li, Jialin
    Ye, Huang
    Tian, Shaobo
    Li, Xinyuan
    Zhang, Jian
    2022 IEEE 36TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS 2022), 2022, : 863 - 874
  • [24] Matrix Multiplication Beyond Auto-Tuning: Rewrite-based GPU Code Generation
    Steuwer, Michel
    Remmelg, Toomas
    Dubach, Christophe
    2016 INTERNATIONAL CONFERENCE ON COMPILERS, ARCHITECTURE AND SYNTHESIS FOR EMBEDDED SYSTEMS (CASES), 2016,
  • [25] Vibration control of milling machine by using auto-tuning magnetic damper and auto-tuning vibration absorber
    Nagaya, K
    Kobayasi, J
    Imai, K
    INTERNATIONAL JOURNAL OF APPLIED ELECTROMAGNETICS AND MECHANICS, 2002, 16 (1-2) : 111 - 123
  • [26] ATF: A Generic Auto-Tuning Framework
    Rasch, Ari
    Haidl, Michael
    Gorlatch, Sergei
    2017 19TH IEEE INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS (HPCC) / 2017 15TH IEEE INTERNATIONAL CONFERENCE ON SMART CITY (SMARTCITY) / 2017 3RD IEEE INTERNATIONAL CONFERENCE ON DATA SCIENCE AND SYSTEMS (DSS), 2017, : 64 - 71
  • [27] Auto-tuning of cascade control systems
    Song, SH
    Xie, LH
    Cai, WJ
    PROCEEDINGS OF THE 4TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-4, 2002, : 3339 - 3343
  • [28] Auto-tuning interactive multiple model
    Ng, GW
    Lau, A
    How, KY
    ACQUISITION, TRACKING, AND POINTING XII, 1998, 3365 : 131 - 138
  • [29] Survey on PID auto-tuning modules
    Ang, KH
    Yun, L
    PROCEEDINGS OF THE 5TH ASIA-PACIFIC CONFERENCE ON CONTROL & MEASUREMENT, 2002, : 148 - 153
  • [30] ATF: A Generic Auto-Tuning Framework
    Rasch, Ari
    Gorlatch, Sergei
    HPDC '18: PROCEEDINGS OF THE 27TH INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE PARALLEL AND DISTRIBUTED COMPUTING: POSTERS/DOCTORAL CONSORTIUM, 2018, : 3 - 4