A THRESHOLD SCHEDULING STRATEGY FOR SISAL ON DISTRIBUTED-MEMORY MACHINES

Cited by: 12
Authors
PANDE, SS [1]
AGRAWAL, DP [1]
MAUNEY, J [1]
Institutions
[1] N CAROLINA STATE UNIV, DEPT COMP SCI, RALEIGH, NC 27695
Keywords
DOI
10.1006/jpdc.1994.1054
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The problem of scheduling tasks on distributed-memory machines is known to be NP-complete in the strong sense, ruling out the possibility of a pseudo-polynomial algorithm. This paper introduces a new heuristic algorithm for scheduling Sisal (Streams and Iterations In a Single Assignment Language) programs on a distributed-memory machine, the Intel Touchstone i860. Our compile-time scheduling method works on IF-2, an intermediate form based on the dataflow parallelism in the program. We first carry out a dependence analysis to bind the implicit dependencies across IF-2 graph boundaries, followed by a cost assignment based on Intel Touchstone i860 timings. The scheduler works in two phases. The first phase finds the earliest and latest completion times of each task, given by the shortest and longest paths from the root task to that task, respectively. A threshold, defined as the difference between the latest and earliest start times of a task, is then computed. The scheduler varies the value of the allowable threshold and determines the value that yields the minimal schedule length. In the second phase, processors are merged so that the generated schedule matches the number of available processors. Scheduling results for several benchmark programs are included to demonstrate the effectiveness of our approach. (C) 1994 Academic Press, Inc.
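The phase-one computation described above can be pictured with a small sketch. The Python fragment below is not taken from the paper; the Task record, the helper name thresholds, and the example graph are illustrative assumptions. It computes, for each task in a rooted task graph, the earliest and latest start times as the shortest and longest path lengths from the root, and derives the per-task threshold as their difference.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    cost: float                                  # execution cost from target-machine timings
    succ: list = field(default_factory=list)     # names of successor tasks


def thresholds(tasks, root):
    """Return {task: (earliest, latest, threshold)} for a rooted DAG given as
    {name: Task}; earliest/latest are shortest/longest path lengths from root."""
    # Topological order of the tasks reachable from the root (DFS post-order, reversed).
    order, seen = [], set()

    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in tasks[u].succ:
            visit(v)
        order.append(u)

    visit(root)
    order.reverse()

    inf = float("inf")
    earliest = {u: inf for u in order}
    latest = {u: -inf for u in order}
    earliest[root] = latest[root] = 0.0
    for u in order:                              # relax edges in topological order
        for v in tasks[u].succ:
            earliest[v] = min(earliest[v], earliest[u] + tasks[u].cost)
            latest[v] = max(latest[v], latest[u] + tasks[u].cost)
    return {u: (earliest[u], latest[u], latest[u] - earliest[u]) for u in order}


# Example: a diamond-shaped graph with one long and one short branch.
g = {
    "A": Task(2, ["B", "C"]),
    "B": Task(5, ["D"]),
    "C": Task(1, ["D"]),
    "D": Task(3, []),
}
print(thresholds(g, "A"))
# {'A': (0.0, 0.0, 0.0), 'C': (2.0, 2.0, 0.0), 'B': (2.0, 2.0, 0.0), 'D': (3.0, 7.0, 4.0)}
```

In this toy graph only the join task D has a nonzero threshold (7 - 3 = 4); varying the allowable threshold, as the abstract describes, determines how much of such slack the scheduler tolerates when forming the schedule.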
Pages: 223-236
Number of pages: 14