Adaptive Optimization of Sparse Matrix-Vector Multiplication on Emerging Many-Core Architectures

Cited by: 11
Authors
Chen, Shizhao [2 ]
Fang, Jianbin [2 ]
Chen, Donglin [2 ]
Xu, Chuanfu [1 ,2 ]
Wang, Zheng [3 ,4 ,5 ]
Affiliations
[1] China Aerodynam Res & Dev Ctr, State Key Lab Aerodynam, Chengdu, Sichuan, Peoples R China
[2] Natl Univ Def Technol, Coll Comp, Changsha, Hunan, Peoples R China
[3] Univ Lancaster, MetaLab, Sch Comp & Commun, Lancaster, England
[4] Northwest Univ, Sch Informat Sci & Technol, Xian, Shaanxi, Peoples R China
[5] Xian Univ Posts & Telecommun, Sch Comp Sci & Technol, Xian, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; UK Engineering and Physical Sciences Research Council;
Keywords
Sparse matrix vector multiplication; Performance optimization; Many-Cores; Performance analysis;
DOI
10.1109/HPCC/SmartCity/DSS.2018.00116
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sparse matrix-vector multiplication (SpMV) is one of the most common operations in scientific and high-performance applications, and is often responsible for the application's performance bottleneck. While the sparse matrix representation has a significant impact on the resulting application performance, choosing the right representation typically relies on expert knowledge and trial and error. This paper presents the first comprehensive study of the impact of sparse matrix representations on two emerging many-core architectures: Intel's Knights Landing (KNL) Xeon Phi and the ARM-based FT-2000Plus (FTP). Our large-scale experiments involved over 9,500 distinct profiling runs performed on 956 sparse datasets and five mainstream SpMV representations. We show that the best sparse matrix representation depends on the underlying architecture and the program input. To help developers choose the optimal matrix representation, we employ machine learning to develop a predictive model. Our model is first trained offline using a set of training examples. The learned model can then be used to predict the best matrix representation for any unseen input on a given architecture. We show that our model delivers on average 95% and 91% of the best available performance on KNL and FTP respectively, and it achieves this with no runtime profiling overhead.
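To illustrate the kind of workflow the abstract describes, the following is a minimal sketch, not the authors' implementation: it times SpMV under several SciPy sparse formats (used here as stand-ins for the paper's five representations) and trains a scikit-learn decision tree to predict the fastest format from cheap structural features. The format list, feature set, synthetic training matrices, and choice of classifier are all assumptions made for this example.

```python
# Illustrative sketch only: formats, features, and training data are assumptions,
# not the representations, features, or model used in the paper.
import time
import numpy as np
import scipy.sparse as sp
from sklearn.tree import DecisionTreeClassifier

FORMATS = ["csr", "csc", "coo", "bsr", "dia"]  # stand-ins for the five SpMV representations

def best_format(A, x, repeats=5):
    """Label a matrix with its fastest format for y = A @ x, by direct timing."""
    timings = {}
    for fmt in FORMATS:
        M = A.asformat(fmt)
        start = time.perf_counter()
        for _ in range(repeats):
            M @ x
        timings[fmt] = time.perf_counter() - start
    return min(timings, key=timings.get)

def features(A):
    """Cheap features of the sparsity pattern (size, nnz, row-length statistics)."""
    A = A.tocsr()
    nnz_per_row = np.diff(A.indptr)
    return [A.shape[0], A.shape[1], A.nnz,
            float(nnz_per_row.mean()), float(nnz_per_row.std()), float(nnz_per_row.max())]

# Offline training: profile a small set of synthetic matrices and record the winner.
rng = np.random.default_rng(0)
X_train, y_train = [], []
for _ in range(20):
    density = float(rng.uniform(0.001, 0.02))
    A = sp.random(1000, 1000, density=density, format="csr",
                  random_state=int(rng.integers(1 << 30)))
    x = rng.standard_normal(A.shape[1])
    X_train.append(features(A))
    y_train.append(best_format(A, x))

model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

# Deployment: pick a representation for an unseen matrix from its features alone,
# without any per-input profiling runs.
A_new = sp.random(1000, 1000, density=0.01, format="csr", random_state=42)
print("predicted best format:", model.predict([features(A_new)])[0])
```

The point of this structure is the one the abstract makes: the profiling cost is paid once, offline, when the training labels are gathered; at deployment only feature extraction and a model lookup are needed, which is how a scheme like this can avoid runtime profiling overhead.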
Pages: 649 - 658
Page count: 10