Adaptive Optimization of Sparse Matrix-Vector Multiplication on Emerging Many-Core Architectures

Cited by: 11
Authors
Chen, Shizhao [2 ]
Fang, Jianbin [2 ]
Chen, Donglin [2 ]
Xu, Chuanfu [1 ,2 ]
Wang, Zheng [3 ,4 ,5 ]
Affiliations
[1] China Aerodynam Res & Dev Ctr, State Key Lab Aerodynam, Chengdu, Sichuan, Peoples R China
[2] Natl Univ Def Technol, Coll Comp, Changsha, Hunan, Peoples R China
[3] Univ Lancaster, MetaLab, Sch Comp & Commun, Lancaster, England
[4] Northwest Univ, Sch Informat Sci & Technol, Xian, Shaanxi, Peoples R China
[5] Xian Univ Posts & Telecommun, Sch Comp Sci & Technol, Xian, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Sparse matrix vector multiplication; Performance optimization; Many-Cores; Performance analysis;
DOI
10.1109/HPCC/SmartCity/DSS.2018.00116
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Sparse matrix-vector multiplication (SpMV) is one of the most common operations in scientific and high-performance applications, and it is often the application's performance bottleneck. While the sparse matrix representation has a significant impact on the resulting application performance, choosing the right representation typically relies on expert knowledge and trial and error. This paper provides the first comprehensive study of the impact of sparse matrix representations on two emerging many-core architectures: Intel's Knights Landing (KNL) Xeon Phi and the ARM-based FT-2000Plus (FTP). Our large-scale experiments involved over 9,500 distinct profiling runs performed on 956 sparse datasets and five mainstream SpMV representations. We show that the best sparse matrix representation depends on the underlying architecture and the program input. To help developers choose the optimal matrix representation, we employ machine learning to develop a predictive model. Our model is first trained offline using a set of training examples. The learned model can then predict the best matrix representation for any unseen input on a given architecture. We show that our model delivers on average 95% and 91% of the best available performance on KNL and FTP, respectively, and it achieves this with no runtime profiling overhead.
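The choice of representation matters because each format trades storage overhead against memory-access regularity. As an illustration, below is a minimal sketch of an SpMV kernel using the compressed sparse row (CSR) format, one widely used representation; the abstract does not enumerate the five formats studied, so CSR and the function name spmv_csr are illustrative assumptions rather than the paper's code.

/* Minimal SpMV sketch for the compressed sparse row (CSR) format.
 * CSR is assumed here for illustration only; the paper evaluates five
 * mainstream representations that the abstract does not name. */
#include <stddef.h>

/* Computes y = A * x for an m-row sparse matrix A stored in CSR:
 *   row_ptr[m+1] : start offset of each row within col_idx/val
 *   col_idx[nnz] : column index of each non-zero element
 *   val[nnz]     : value of each non-zero element */
void spmv_csr(size_t m, const size_t *row_ptr, const size_t *col_idx,
              const double *val, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)   /* one row per iteration */
    for (size_t i = 0; i < m; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

Whether such a row-parallel CSR kernel or an alternative layout performs best depends on the non-zero distribution of the input matrix and on the target's cache and vector hardware, which is precisely the selection problem the learned model addresses.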
Pages: 649-658
Page count: 10
Related Papers
50 items in total
  • [21] Optimization by Runtime Specialization for Sparse Matrix-Vector Multiplication
    Kamin, Sam
    Garzaran, Maria Jesus
    Aktemur, Baris
    Xu, Danqing
    Yilmaz, Buse
    Chen, Zhongbo
    [J]. ACM SIGPLAN NOTICES, 2015, 50 (03) : 93 - 102
  • [22] Optimization techniques for sparse matrix-vector multiplication on GPUs
    Maggioni, Marco
    Berger-Wolf, Tanya
    [J]. JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2016, 93-94 : 66 - 86
  • [23] MEMORY-EFFICIENT SPARSE MATRIX-MATRIX MULTIPLICATION BY ROW MERGING ON MANY-CORE ARCHITECTURES
    Gremse, Felix
    Kuepper, Kerstin
    Naumann, Uwe
    [J]. SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2018, 40 (04) : C429 - C449
  • [24] Structured sparse matrix-vector multiplication on massively parallel SIMD architectures
    Dehn, T
    Eiermann, M
    Giebermann, K
    Sperling, V
    [J]. PARALLEL COMPUTING, 1995, 21 (12) : 1867 - 1894
  • [25] Optimizing Sparse Matrix-Vector Multiplications on an ARMv8-based Many-Core Architecture
    Chen, Donglin
    Fang, Jianbin
    Chen, Shizhao
    Xu, Chuanfu
    Wang, Zheng
    [J]. INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 2019, 47 (03) : 418 - 432
  • [26] Sparse Matrix-Vector Multiplication on GPGPUs
    Filippone, Salvatore
    Cardellini, Valeria
    Barbieri, Davide
    Fanfarillo, Alessandro
    [J]. ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE, 2017, 43 (04)
  • [27] Optimization of Sparse Matrix-Vector Multiplication with Variant CSR on GPUs
    Feng, Xiaowen
    Jin, Hai
    Zheng, Ran
    Hu, Kan
    Zeng, Jingxiang
    Shao, Zhiyuan
    [J]. 2011 IEEE 17TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2011, : 165 - 172
  • [28] An Extended Compression Format for the Optimization of Sparse Matrix-Vector Multiplication
    Karakasis, Vasileios
    Gkountouvas, Theodoros
    Kourtis, Kornilios
    Goumas, Georgios
    Koziris, Nectarios
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2013, 24 (10) : 1930 - 1940
  • [29] GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication
    Tao, Yuan
    Deng, Yangdong
    Mu, Shuai
    Zhang, Zhenzhong
    Zhu, Mingfa
    Xiao, Limin
    Ruan, Li
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2015, 27 (14) : 3771 - 3789
  • [30] Conflict-Free Symmetric Sparse Matrix-Vector Multiplication on Multicore Architectures
    Elafrou, Athena
    Goumas, Georgios
    Koziris, Nectarios
    [J]. PROCEEDINGS OF SC19: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2019