LIMITS OF INSTRUCTION-LEVEL PARALLELISM

Cited by: 0
Authors
WALL, DW [1]
Affiliation
[1] DIGITAL EQUIPMENT CORP, WESTERN RES LAB, PALO ALTO, CA
Source
SIGPLAN NOTICES | 1991 / Vol. 26 / Iss. 4
Keywords
DOI
Not available
CLC classification
TP31 [Computer software];
Subject classification code
081202 ; 0835 ;
Abstract
Growing interest in ambitious multiple-issue machines and heavily-pipelined machines requires a careful examination of how much instruction-level parallelism exists in typical programs. Such an examination is complicated by the wide variety of hardware and software techniques for increasing the parallelism that can be exploited, including branch prediction, register renaming, and alias analysis. By performing simulations based on instruction traces, we can model techniques at the limits of feasibility and even beyond. Our study shows a striking difference between assuming that the techniques we use are perfect and merely assuming that they are impossibly good. Even with impossibly good techniques, average parallelism rarely exceeds 7, with 5 more common.
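As an illustration of the trace-driven methodology the abstract describes, the sketch below estimates available parallelism from an instruction trace under an idealized model in which branch prediction, register renaming, and alias analysis are assumed perfect, so only true data dependences constrain issue. This is a minimal sketch, not Wall's simulator; the TraceOp trace format, unit latencies, and unlimited issue width are assumptions made here for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class TraceOp:
    reads: List[str]    # register/memory locations this instruction reads
    writes: List[str]   # register/memory locations this instruction writes

def ideal_ilp(trace: List[TraceOp]) -> float:
    """Place each instruction in the earliest cycle after all of its producers,
    with unlimited issue width and unit latencies, and return instructions per
    cycle over the whole trace."""
    produced_in = {}   # location name -> cycle in which its latest value is produced
    last_cycle = 0
    for op in trace:
        # Only true (read-after-write) dependences delay an instruction here;
        # perfect renaming removes WAR/WAW hazards from the model.
        cycle = 1 + max((produced_in.get(name, 0) for name in op.reads), default=0)
        for name in op.writes:
            produced_in[name] = cycle
        last_cycle = max(last_cycle, cycle)
    return len(trace) / last_cycle if last_cycle else 0.0

# Tiny example: two independent chains overlap, so 5 instructions fit in 2 cycles.
trace = [
    TraceOp(reads=[], writes=["r1"]),
    TraceOp(reads=[], writes=["r2"]),
    TraceOp(reads=["r1", "r2"], writes=["r3"]),
    TraceOp(reads=[], writes=["r4"]),
    TraceOp(reads=["r4"], writes=["r5"]),
]
print(ideal_ilp(trace))   # 2.5

Under less-than-perfect models (finite branch prediction accuracy, limited register renaming, imperfect alias analysis), additional dependences would be added to the schedule, which is how the paper's range of models drives the measured parallelism down toward the reported averages of roughly 5 to 7.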
Pages: 176 - 188
Number of pages: 13
Related Papers
50 records in total
  • [41] A Two-Way Loop Algorithm for Exploiting Instruction-Level Parallelism in Memory System
    Misra, Sanjay
    Alfa, Abraham Ayegba
    Adewale, Sunday Olamide
    Akogbe, Michael Abogunde
    Olaniyi, Mikail Olayemi
    [J]. COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2014, PT V, 2014, 8583 : 255 - +
  • [42] Potential analysis of a superscalar core employing a reconfigurable array for improving instruction-level parallelism
    Brandalero, Marcelo
    Beck, Antonio Carlos S.
    [J]. DESIGN AUTOMATION FOR EMBEDDED SYSTEMS, 2016, 20 : 155 - 169
  • [43] Instruction-level distributed processing
    Smith, JE
    [J]. COMPUTER, 2001, 34 (04) : 59 - +
  • [44] INSTRUCTION-LEVEL PARALLEL PROCESSING
    FISHER, JA
    RAU, BR
    [J]. SCIENCE, 1991, 253 (5025) : 1233 - 1241
  • [45] Exploiting thread-level and instruction-level parallelism to cluster mass spectrometry data using multicore architectures
    Saeed, Fahad
    Hoffert, Jason D.
    Pisitkun, Trairak
    Knepper, Mark A.
    [J]. NETWORK MODELING ANALYSIS IN HEALTH INFORMATICS AND BIOINFORMATICS, 2014, 3 (1)
  • [46] Exploiting thread-level and instruction-level parallelism to cluster mass spectrometry data using multicore architectures
    Saeed, Fahad
    Hoffert, Jason D.
    Pisitkun, Trairak
    Knepper, Mark A.
    [J]. NETWORK MODELING AND ANALYSIS IN HEALTH INFORMATICS AND BIOINFORMATICS, 2014, 3 (01)
  • [47] Hacky Racers: Exploiting Instruction-Level Parallelism to Generate Stealthy Fine-Grained Timers
    Xiao, Haocheng
    Ainsworth, Sam
    [J]. PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS, VOL 2, ASPLOS 2023, 2023, : 354 - 369
  • [48] Many-Thread Aware Instruction-Level Parallelism: Architecting Shader Cores for GPU Computing
    Xiang, Ping
    Yang, Yi
    Mantor, Mike
    Rubin, Norm
    Zhou, Huiyang
    [J]. PROCEEDINGS OF THE 21ST INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT'12), 2012, : 449 - 450
  • [49] A neural network-based approach for the performance evaluation of branch prediction in instruction-level parallelism processors
    Nain, Sweety
    Chaudhary, Prachi
    [J]. JOURNAL OF SUPERCOMPUTING, 2022, 78 (04): 4960 - 4976
  • [50] A neural network-based approach for the performance evaluation of branch prediction in instruction-level parallelism processors
    Nain, Sweety
    Chaudhary, Prachi
    [J]. THE JOURNAL OF SUPERCOMPUTING, 2022, 78 : 4960 - 4976