New generalized data structures for matrices lead to a variety of high-performance algorithms

Cited: 0
Author: Gustavson, FG [1]
Affiliation: [1] IBM Corp, Thomas J Watson Res Ctr, Yorktown Heights, NY 10598 USA
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We describe new data structures for full and packed storage of dense symmetric/triangular arrays that generalize both. Using the new data structures, one is led to several new algorithms that save "half" the storage and outperform the current block-based level-3 algorithms in LAPACK. We concentrate on the simplest forms of the new algorithms and show that for Cholesky factorization they are a direct generalization of LINPACK. This means that level-3 BLAS are not required to obtain level-3 performance. The replacements for level-3 BLAS are so-called kernel routines, and on IBM platforms they can be produced from simple textbook-type codes by the XLF Fortran compiler. In the sequel I will label these "vanilla" codes. The results for Cholesky on Power3, which has a peak performance of 800 MFlop/s, exceed 720 MFlop/s for n >= 200 and reach 735 MFlop/s. Using conventional full-format LAPACK DPOTRF with ESSL BLAS, one first reaches 600 MFlop/s at n = 600 and only peaks at 620 MFlop/s. We have also produced simple square blocked full-matrix data formats where the blocks themselves are stored in column-major (Fortran) order or row-major (C) order. The simple algorithm of LU factorization with partial pivoting for this new data format is a direct generalization of the LINPACK algorithm DGEFA. Again, no conventional level-3 BLAS are required; the replacements are again so-called kernel routines. Programming for square blocked full-matrix format can be accomplished in standard Fortran through the use of three- and four-dimensional arrays. Thus, no new compiler support is necessary. Finally, we mention that other, more complicated algorithms are possible, for example recursive ones. The recursive algorithms are also easily programmed via the use of tables that address where the blocks are stored in the two-dimensional recursive block array.
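To make the square blocked full-matrix layout concrete, here is a minimal sketch in C of the index arithmetic the abstract attributes to three- and four-dimensional Fortran arrays: the matrix is tiled into nb-by-nb blocks, the blocks are laid out in column-major order, and each block is itself stored contiguously in column-major (Fortran) order. The function and variable names are illustrative assumptions, not taken from the paper, and nb is assumed to divide n exactly for simplicity.

```c
#include <assert.h>
#include <stddef.h>

/* Offset of element (i, j) (0-based) of an n-by-n matrix stored in
 * square blocked format with nb-by-nb blocks. Blocks are ordered
 * column-major; elements within a block are also column-major.
 * Assumes nb divides n exactly (an illustrative simplification). */
static size_t blocked_index(size_t i, size_t j, size_t n, size_t nb) {
    size_t mb = n / nb;                 /* blocks per block column     */
    size_t bi = i / nb, bj = j / nb;    /* block coordinates           */
    size_t ii = i % nb, jj = j % nb;    /* coordinates within block    */
    return (bj * mb + bi) * nb * nb     /* skip preceding whole blocks */
         + jj * nb + ii;                /* column-major inside block   */
}

/* Copy a conventional column-major (Fortran-order) matrix a into the
 * square blocked layout b. A kernel routine operating on one block then
 * sees a small contiguous nb*nb column-major submatrix. */
static void pack_blocked(const double *a, double *b, size_t n, size_t nb) {
    for (size_t j = 0; j < n; ++j)
        for (size_t i = 0; i < n; ++i)
            b[blocked_index(i, j, n, nb)] = a[j * n + i];
}
```

Because each block is contiguous, a factorization kernel can be handed a plain pointer to a block and run at near-peak speed without any level-3 BLAS call, which is the point the abstract makes about kernel routines replacing the BLAS.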
Pages: 46-61 (16 pages)