A Loop-aware Autotuner for High-Precision Floating-point Applications

Cited by: 1
Authors
Gu, Ruidong [1 ]
Beata, Paul [1 ]
Becchi, Michela [1 ]
Affiliations
[1] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
Funding
U.S. National Science Foundation
Keywords
autotuner; mixed-precision; floating-point;
DOI
10.1109/ISPASS48437.2020.00048
CLC number
TP3 [computing technology; computer technology]
Subject classification code
0812
Abstract
Many scientific applications (e.g., molecular dynamics, climate modeling, and astrophysical simulations) rely on floating-point arithmetic. Due to its approximate nature, the use of floating-point arithmetic can lead to inaccuracy and reproducibility issues, which can be particularly significant for long-running applications. Indeed, previous work has shown that 64-bit IEEE floating-point arithmetic can be insufficient for many algorithms and applications, such as ill-conditioned linear systems, large summations, long-time or large-scale physical simulations, and experimental mathematics applications. To overcome these issues, existing work has proposed high-precision floating-point libraries (e.g., the GNU multiple precision arithmetic library), but these libraries come at the cost of significant execution time. In this work, we propose an auto-tuner for applications requiring high-precision floating-point arithmetic to deliver a prescribed level of accuracy. Our auto-tuner uses compiler analysis to discriminate operations and variables that require high precision from those that can be handled using standard IEEE 64-bit floating-point arithmetic, and it generates a mixed-precision program that trades off performance and accuracy by selectively using different precisions for different variables and operations. In particular, our auto-tuner leverages loop and data dependence analysis to quickly identify precision-sensitive variables and operations and provide results that are robust to different input datasets. We test our auto-tuner on a mix of applications with different computational patterns.
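The abstract's point about large summations and selective precision can be illustrated with a minimal sketch. This is not the paper's tool; it is a hypothetical example using Python's standard-library `decimal` module to stand in for a high-precision library, showing how promoting only one precision-sensitive variable (the accumulator) recovers accuracy that plain IEEE 64-bit accumulation loses to absorption:

```python
from decimal import Decimal, getcontext

def float64_sum(values):
    # Plain IEEE 64-bit accumulation: a small addend can be
    # absorbed by a large partial sum and lost entirely.
    acc = 0.0
    for v in values:
        acc += v
    return acc

def mixed_precision_sum(values):
    # Only the accumulator is promoted to high precision
    # (50 significant digits); the inputs stay 64-bit floats.
    getcontext().prec = 50
    acc = Decimal(0)
    for v in values:
        acc += Decimal(v)  # Decimal(float) converts exactly
    return float(acc)

values = [1e16, 1.0, -1e16]
print(float64_sum(values))          # 0.0 -- the 1.0 is absorbed
print(mixed_precision_sum(values))  # 1.0 -- wide accumulator preserves it
```

An auto-tuner in the spirit of the paper would decide automatically, via dependence analysis, that the accumulator is the precision-sensitive variable here, while per-element operands can remain in standard double precision.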
Pages: 285-295
Page count: 11
Related papers
50 records in total
  • [31] Stochastic Optimization of Floating-Point Programs with Tunable Precision
    Schkufza, Eric
    Sharma, Rahul
    Aiken, Alex
    ACM SIGPLAN NOTICES, 2014, 49 (06) : 53 - 64
  • [32] Exploiting Community Structure for Floating-Point Precision Tuning
    Guo, Hui
    Rubio-Gonzalez, Cindy
    ISSTA'18: PROCEEDINGS OF THE 27TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2018, : 333 - 343
  • [33] Multiple precision floating-point arithmetic on SIMD processors
    van der Hoeven, Joris
    2017 IEEE 24TH SYMPOSIUM ON COMPUTER ARITHMETIC (ARITH), 2017, : 2 - 9
  • [34] Energy-Efficient Multiple-Precision Floating-Point Multiplier for Embedded Applications
    Kuang, Shiann-Rong
    Wu, Kun-Yi
    Yu, Kee-Khuan
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2013, 72 (01): : 43 - 55
  • [36] Advanced components in the variable precision floating-point library
    Wang, Xiaojun
    Braganza, Sherman
    Leeser, Miriam
    FCCM 2006: 14TH ANNUAL IEEE SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES, PROCEEDINGS, 2006, : 249 - +
  • [37] A compression method for arbitrary precision floating-point images
    Manders, Corey
    Farbiz, Farzam
    Mann, Steve
    2007 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-7, 2007, : 1861 - +
  • [38] Fine-grained floating-point precision analysis
    Lam, Michael O.
    Hollingsworth, Jeffrey K.
    INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING APPLICATIONS, 2018, 32 (02): : 231 - 245
  • [39] Floating-Point Format Inference in Mixed-Precision
    Martel, Matthieu
    NASA FORMAL METHODS (NFM 2017), 2017, 10227 : 230 - 246
  • [40] Rigorous Floating-Point Mixed-Precision Tuning
    Chiang, Wei-Fan
    Baranowski, Mark
    Briggs, Ian
    Solovyev, Alexey
    Gopalakrishnan, Ganesh
    Rakamaric, Zvonimir
    ACM SIGPLAN NOTICES, 2017, 52 (01) : 300 - 315