A Loop-aware Autotuner for High-Precision Floating-point Applications

Cited by: 1
Authors
Gu, Ruidong [1 ]
Beata, Paul [1 ]
Becchi, Michela [1 ]
Affiliations
[1] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
Funding
US National Science Foundation
Keywords
autotuner; mixed-precision; floating-point;
DOI
10.1109/ISPASS48437.2020.00048
Chinese Library Classification: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
Many scientific applications (e.g., molecular dynamics, climate modeling, and astrophysical simulations) rely on floating-point arithmetic. Due to its approximate nature, the use of floating-point arithmetic can lead to inaccuracy and reproducibility issues, which can be particularly significant for long-running applications. Indeed, previous work has shown that 64-bit IEEE floating-point arithmetic can be insufficient for many algorithms and applications, such as ill-conditioned linear systems, large summations, long-time or large-scale physical simulations, and experimental mathematics applications. To overcome these issues, existing work has proposed high-precision floating-point libraries (e.g., the GNU multiple precision arithmetic library), but these libraries come at the cost of significant execution time. In this work, we propose an auto-tuner for applications requiring high-precision floating-point arithmetic to deliver a prescribed level of accuracy. Our auto-tuner uses compiler analysis to discriminate operations and variables that require high precision from those that can be handled using standard IEEE 64-bit floating-point arithmetic, and it generates a mixed-precision program that trades off performance and accuracy by selectively using different precisions for different variables and operations. In particular, our auto-tuner leverages loop and data dependence analysis to quickly identify precision-sensitive variables and operations and provide results that are robust to different input datasets. We test our auto-tuner on a mix of applications with different computational patterns.
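The large-summation failure mode the abstract mentions, and the mixed-precision remedy the auto-tuner automates, can be sketched by hand. The snippet below is not the paper's tool; it is a minimal illustration in which only the precision-sensitive variable (the running sum) is kept in high precision, using Python's decimal module as a stand-in for a high-precision library such as GMP, while everything else stays in IEEE 64-bit arithmetic.

```python
# Minimal sketch of mixed-precision tuning for a large summation.
# Only the accumulator is promoted to high precision; the input data
# stays in standard 64-bit floats.
from decimal import Decimal, getcontext

getcontext().prec = 50  # high-precision arithmetic (stand-in for GMP/MPFR)

# One large value followed by many small ones: in 64-bit arithmetic,
# the ulp of 1e16 is 2, so each added 1.0 is rounded away.
values = [1e16] + [1.0] * 1000

# All-double version: the 1000 unit terms are absorbed.
naive = 0.0
for v in values:
    naive += v

# Mixed-precision version: only the accumulator is high precision.
acc = Decimal(0)
for v in values:
    acc += Decimal(v)

print(naive)  # 1e+16 -- the small terms vanished
print(acc)    # 10000000000001000 -- the small terms are preserved
```

An auto-tuner like the one proposed here makes this promotion decision per variable and per operation automatically, guided by compiler analysis, rather than requiring the programmer to pick the accumulator by hand.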
Pages: 285-295
Page count: 11