A data locality methodology for matrix–matrix multiplication algorithm

Cited by: 0
Authors
Nicolaos Alachiotis
Vasileios I. Kelefouras
George S. Athanasiou
Harris E. Michail
Angeliki S. Kritikakou
Costas E. Goutis
Institutions
[1] University of Patras, VLSI Design Lab., Electrical & Computer Engineering Department
Keywords
Compilers; Memory management; Data locality; Data reuse; Recursive array layouts; Scheduling; Strassen’s algorithm; Matrix-matrix multiplication;
Abstract
Matrix-Matrix Multiplication (MMM) is a highly important kernel in linear algebra, and the performance of its implementations depends on memory utilization and data locality. Several MMM algorithms exist, such as the standard algorithm and the Strassen–Winograd variant, as well as many recursive array layouts, such as Z-Morton and U-Morton; however, their data locality is lower than that of the proposed methodology. Moreover, several state-of-the-art (SOA) self-tuning libraries exist, such as ATLAS for the MMM algorithm, which tests many MMM implementations. During the installation of ATLAS, an extremely complex empirical tuning step is required on the one hand, and a large number of compiler options are used on the other, both of which are beyond the scope of this paper. In this paper, a new methodology using the standard MMM algorithm is presented, which achieves improved performance by focusing on data locality (both temporal and spatial). The methodology finds the scheduling that conforms to the optimum memory management. Compared with (Chatterjee et al. in IEEE Trans. Parallel Distrib. Syst. 13:1105, 2002; Li and Garzaran in Proc. of Lang. Compil. Parallel Comput., 2005; Bilmes et al. in Proc. of the 11th ACM Int. Conf. Supercomput., 1997; Aberdeen and Baxter in Concurr. Comput. Pract. Exp. 13:103, 2001), the proposed methodology has two major advantages. First, the scheduling used at the tile level differs from that at the element level, giving better data locality suited to the sizes of the memory hierarchy. Second, its exploration time is short, because it searches only over the number of tiling levels used and, for each cache level, over the interval (1, 2) (Sect. 4) to find the best tile size. A software tool (C code) implementing the above methodology was developed, taking the hardware model and the matrix sizes as input. The methodology outperforms other approaches across a wide range of architectures: compared with the best existing related work, which we implemented, performance gains of up to 55% over the standard MMM algorithm and up to 35% over Strassen's algorithm are observed, both under recursive data array layouts.
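The tiled scheduling the abstract describes can be illustrated with one level of cache blocking in C. This is a minimal sketch, not the paper's actual tool: the matrix dimension `N` and tile size `TILE` are fixed assumptions here, whereas the methodology derives the number of tiling levels and the tile sizes from the hardware model.

```c
#include <stddef.h>

#define N    64   /* matrix dimension (assumption for this sketch)          */
#define TILE 16   /* tile size; the paper derives this from the cache sizes */

/* Naive triple loop, used as a correctness reference. */
static void mmm_naive(double A[N][N], double B[N][N], double C[N][N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* One level of tiling: the three outer loops walk TILE x TILE blocks so
   that the working set of the three inner loops fits in a cache level;
   the inner (element-level) loop order is chosen separately to reuse
   A[i][k] across the j loop and to access B and C row-wise. */
static void mmm_tiled(double A[N][N], double B[N][N], double C[N][N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            C[i][j] = 0.0;
    for (size_t ii = 0; ii < N; ii += TILE)
        for (size_t kk = 0; kk < N; kk += TILE)
            for (size_t jj = 0; jj < N; jj += TILE)
                for (size_t i = ii; i < ii + TILE; i++)
                    for (size_t k = kk; k < kk + TILE; k++) {
                        double a = A[i][k];  /* reused for the whole j loop */
                        for (size_t j = jj; j < jj + TILE; j++)
                            C[i][j] += a * B[k][j];
                    }
}
```

Note that tiling changes only the order of the scalar multiply-adds, not their number, so the payoff is entirely in cache behavior; using a different loop order at the tile level (here `ii, kk, jj`) than at the element level (`i, k, j`) is the kind of two-level scheduling the abstract refers to.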
Pages: 830–851 (21 pages)
Related papers (50 items in total)
  • [31] FAST ALGORITHM FOR SPARSE-MATRIX MULTIPLICATION
    SCHOOR, A
    INFORMATION PROCESSING LETTERS, 1982, 15 (02) : 87 - 89
  • [32] SUMMA: Scalable universal matrix multiplication algorithm
    VanDeGeijn, RA
    Watts, J
    CONCURRENCY-PRACTICE AND EXPERIENCE, 1997, 9 (04): : 255 - 274
  • [33] An implementation of the matrix multiplication algorithm SUMMA in mpF
    Kalinov, A
    Ledovskikh, I
    Posypkin, M
    Levchenko, Z
    Chizhov, VT
    PARALLEL COMPUTING TECHNOLOGIES, 2005, 3606 : 420 - 432
  • [34] SIMPLE SPARSE-MATRIX MULTIPLICATION ALGORITHM
    KRAL, D
    NEOGRADY, P
    KELLO, V
    COMPUTER PHYSICS COMMUNICATIONS, 1995, 85 (02) : 213 - 216
  • [35] A practical streaming approximate matrix multiplication algorithm
    Francis, Deena P.
    Raimond, Kumudha
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (01) : 1455 - 1465
  • [36] FOLDING AND DOUBLE MAPPING OF THE MATRIX MULTIPLICATION ALGORITHM
    GUSEV, M
    EVANS, DJ
    INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 1995, 55 (3-4) : 183 - 187
  • [37] A Data Locality-aware Design Framework for Reconfigurable Sparse Matrix-Vector Multiplication Kernel
    Li, Sicheng
    Wang, Yandan
    Wen, Wujie
    Wang, Yu
    Chen, Yiran
    Li, Hai
    2016 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD), 2016,
  • [38] A high-performance matrix-matrix multiplication methodology for CPU and GPU architectures
    Kelefouras, Vasilios
    Kritikakou, A.
    Mporas, Iosif
    Kolonias, Vasilios
    JOURNAL OF SUPERCOMPUTING, 2016, 72 (03): : 804 - 844
  • [39] A Matrix–Matrix Multiplication methodology for single/multi-core architectures using SIMD
    Kelefouras, Vasilios
    Kritikakou, Angeliki
    Goutis, Costas
    JOURNAL OF SUPERCOMPUTING, 2014, 68 : 1418 - 1440
  • [40] Predicting optimal sparse general matrix-matrix multiplication algorithm on GPUs
    Wei, Bingxin
    Wang, Yizhuo
    Chang, Fangli
    Gao, Jianhua
    Ji, Weixing
    INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING APPLICATIONS, 2024, 38 (03): : 245 - 259