Program analysis techniques for transforming programs for parallel execution

Cited: 9
Author
Psarris, K [1]
Affiliation
[1] Univ Texas, Dept Comp Sci, San Antonio, TX 78249 USA
Funding
U.S. National Science Foundation
Keywords
parallelizing compilers; data dependence; program analysis; automatic parallelization; compiler optimization
DOI
10.1016/S0167-8191(01)00132-6
Chinese Library Classification
TP301 [Theory and Methods]
Discipline code
081202
Abstract
In a multiple-processor system, computer programs have to be redesigned to use the parallel processors efficiently and deliver higher performance. One major approach is automatic detection of parallelism, in which existing conventional sequential programs are translated into parallel programs in order to benefit from the presence of multiple processors. Optimizing compilers rely upon program analysis techniques to detect data dependences between program statements. The results of the analysis enable the compiler to identify code fragments that can be executed in parallel. Proposed dependence analysis techniques fall into two categories: tests that are efficient but approximate, and tests that are exact but have exponential worst-case cost. In this paper, we show that exact data dependence information can be computed efficiently in practice. The Banerjee inequality and the GCD test are the two tests traditionally used to determine statement data dependence in automatic parallelization of loops. These tests are approximate in the sense that they are necessary but not sufficient conditions for data dependence. In an earlier work we formally studied the accuracy of the Banerjee and GCD tests and derived a set of conditions that can be tested along with the Banerjee inequality and the GCD test to obtain exact data dependence information. In this work, we perform an empirical study to explain and demonstrate the accuracy of the Banerjee and GCD tests in actual practice. Our experiments indicate that exact data dependence information can be computed in linear time in practice. (C) 2002 Elsevier Science B.V. All rights reserved.
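For readers unfamiliar with the two tests named in the abstract, the following Python sketch illustrates the basic idea. It is not taken from the paper: the single-subscript dependence equation a1*i - a2*j = c, the loop bounds lo and hi, and the function names are simplifying assumptions made here for illustration only; the actual tests handle multidimensional subscripts and direction vectors.

```python
from math import gcd

def gcd_test(a1, a2, c):
    # Integer solutions to a1*i - a2*j = c exist only if gcd(a1, a2) divides c.
    g = gcd(abs(a1), abs(a2))
    if g == 0:               # both coefficients are zero
        return c == 0
    return c % g == 0        # True: dependence cannot be ruled out

def banerjee_test(a1, a2, c, lo, hi):
    # Real-valued solutions with lo <= i, j <= hi exist only if c lies between
    # the minimum and maximum of a1*i - a2*j over those bounds.
    low  = min(a1 * lo, a1 * hi) - max(a2 * lo, a2 * hi)
    high = max(a1 * lo, a1 * hi) - min(a2 * lo, a2 * hi)
    return low <= c <= high  # True: dependence cannot be ruled out

# Example (hypothetical): can A[2*i] (write) and A[2*i + 1] (read) ever refer
# to the same element for 0 <= i <= 100?  Dependence equation: 2*i - 2*j = 1.
print(gcd_test(2, 2, 1))               # False: gcd 2 does not divide 1, so no dependence
print(banerjee_test(2, 2, 1, 0, 100))  # True: the Banerjee bound alone cannot disprove it
```

The example also shows why the tests are necessary but not sufficient conditions: the Banerjee inequality admits a real-valued solution here, while the GCD test rules out an integer one.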
Pages: 455 - 469
Page count: 15