Autotuning Batch Cholesky Factorization in CUDA with Interleaved Layout of Matrices

Cited by: 4
Authors
Gates, Mark [1 ]
Kurzak, Jakub [1 ]
Luszczek, Piotr [1 ]
Pei, Yu [1 ]
Dongarra, Jack [2 ,3 ,4 ]
Affiliations
[1] Univ Tennessee, Innovat Comp Lab, Knoxville, TN 37996 USA
[2] Univ Tennessee, Knoxville, TN 37996 USA
[3] Oak Ridge Natl Lab, Oak Ridge, TN USA
[4] Univ Manchester, Manchester, Lancs, England
Source
2017 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW) | 2017
Funding
US National Science Foundation;
Keywords
batch computation; GPU computing; numerical linear algebra; Cholesky factorization; data layout;
DOI
10.1109/IPDPSW.2017.18
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Batch matrix operations address the case of solving the same linear algebra problem for a very large number of very small matrices. In this paper, we focus on implementing the batch Cholesky factorization in CUDA, in single precision arithmetic, for NVIDIA GPUs. Specifically, we look into the benefits of using noncanonical data layouts, where consecutive memory locations store elements with the same row and column index in a set of consecutive matrices. We discuss a number of different implementation options and tuning parameters. We demonstrate superior performance to traditional implementations for the case of very small matrices.
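The abstract's layout description can be made concrete with a small indexing sketch. This is an illustration of the two storage schemes being contrasted, not the authors' code: for a batch of n-by-n column-major matrices, the canonical layout stores each matrix contiguously, while the interleaved layout stores the (i, j) entry of all matrices in the batch at consecutive addresses. The function names `idx_canonical` and `idx_interleaved` are hypothetical.

```c
#include <stddef.h>

/* Canonical (matrix-contiguous) layout: element (i, j) of matrix k
 * in a batch of column-major n-by-n matrices. */
static size_t idx_canonical(size_t i, size_t j, size_t k,
                            size_t n, size_t batch) {
    (void)batch;                     /* batch size not needed here */
    return k * n * n + j * n + i;
}

/* Interleaved (batch-contiguous) layout: consecutive memory locations
 * hold the same (i, j) entry of consecutive matrices, as described in
 * the abstract. */
static size_t idx_interleaved(size_t i, size_t j, size_t k,
                              size_t n, size_t batch) {
    return (j * n + i) * batch + k;
}
```

With the interleaved layout, GPU threads k = 0, 1, 2, ... that each factor their own matrix and touch the same (i, j) entry access consecutive addresses, so the loads and stores coalesce; with the canonical layout those accesses are strided by n*n.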
Pages
1408-1417 (10 pages)