Converting thread-level parallelism to instruction-level parallelism via simultaneous multithreading

Cited by: 75
Authors
Lo, JL
Eggers, SJ
Emer, JS
Levy, HM
Stamm, RL
Tullsen, DM
Institutions
[1] DIGITAL EQUIPMENT CORP, HUDSON, MA USA
[2] UNIV CALIF SAN DIEGO, DEPT COMP SCI & ENGN, LA JOLLA, CA 92093 USA
Source
ACM TRANSACTIONS ON COMPUTER SYSTEMS | 1997, Vol. 15, No. 3
Keywords
cache interference; instruction-level parallelism; multiprocessors; multithreading; simultaneous multithreading; thread-level parallelism;
DOI
10.1145/263326.263382
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue superscalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel processing styles statically partition processor resources, thus preventing them from adapting to dynamically changing levels of ILP and TLP in a program. With insufficient TLP, processors in an MP will be idle; with insufficient ILP, multiple-issue hardware on a superscalar is wasted. This article explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processor's resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably. By permitting multiple threads to share the processor's functional units simultaneously, the processor can use both ILP and TLP to accommodate variations in parallelism. When a program has only a single thread, all of the SMT processor's resources can be dedicated to that thread; when more TLP exists, this parallelism can compensate for a lack of per-thread ILP. We examine two alternative on-chip parallel architectures for the next generation of processors. We compare SMT and small-scale, on-chip multiprocessors in their ability to exploit both ILP and TLP. First, we identify the hardware bottlenecks that prevent multiprocessors from effectively exploiting ILP. Then, we show that because of its dynamic resource sharing, SMT avoids these inefficiencies and benefits from being able to run more threads on a single processor. 
The use of TLP is especially advantageous when per-thread ILP is limited. The ease of adding additional thread contexts on an SMT (relative to adding additional processors on an MP) allows simultaneous multithreading to expose more parallelism, further increasing functional unit utilization and attaining a 52% average speedup (versus a four-processor, single-chip multiprocessor with comparable execution resources). This study also addresses an often-cited concern regarding the use of thread-level parallelism or multithreading: interference in the memory system and branch prediction hardware. We find that multiple threads cause interthread interference in the caches and place greater demands on the memory system, thus increasing average memory latencies. By exploiting thread-level parallelism, however, SMT hides these additional latencies, so that they only have a small impact on total program performance. We also find that for parallel applications, the additional threads have minimal effects on branch prediction.
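The core contrast the abstract draws, statically partitioned issue slots in an on-chip multiprocessor versus all threads competing for all slots each cycle under SMT, can be illustrated with a minimal simulation sketch. This is not the paper's methodology; the issue width, thread count, and per-thread ILP distribution below are arbitrary assumptions chosen only to show why dynamic sharing raises functional-unit utilization when per-thread ILP fluctuates.

```python
import random

random.seed(0)

ISSUE_WIDTH = 8    # total issue slots per cycle across the chip (assumed)
N_THREADS = 4      # threads / MP cores (assumed)
CYCLES = 10_000

def thread_ilp():
    """Instructions one thread could issue this cycle (fluctuates 0..4)."""
    return random.randint(0, 4)

# MP-style static partitioning: each core owns a fixed share of the slots,
# so a thread with high ILP cannot borrow a neighbor's idle slots.
per_core = ISSUE_WIDTH // N_THREADS
mp_issued = 0
for _ in range(CYCLES):
    mp_issued += sum(min(thread_ilp(), per_core) for _ in range(N_THREADS))

# SMT-style dynamic sharing: all threads compete for all slots every cycle,
# so surplus ILP in one thread fills slots another thread leaves idle.
smt_issued = 0
for _ in range(CYCLES):
    demand = sum(thread_ilp() for _ in range(N_THREADS))
    smt_issued += min(demand, ISSUE_WIDTH)

print(f"MP  slot utilization: {mp_issued / (CYCLES * ISSUE_WIDTH):.2f}")
print(f"SMT slot utilization: {smt_issued / (CYCLES * ISSUE_WIDTH):.2f}")
```

Under these toy parameters the dynamically shared machine issues more instructions from the same hardware, mirroring the abstract's point that TLP can compensate for a lack of per-thread ILP and vice versa.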
Pages: 322-354
Page count: 33
Related Papers
50 in total
  • [31] Using Data-Level Parallelism to Accelerate Instruction-Level Redundancy
    Hu, Yu
    Chen, Zhongliang
    Li, Xiaowei
    2012 WORLD AUTOMATION CONGRESS (WAC), 2012,
  • [32] Dual-thread Speculation: A Simple Approach to Uncover Thread-level Parallelism on a Simultaneous Multithreaded Processor
    Warg, Fredrik
    Stenstrom, Per
    International Journal of Parallel Programming, 2008, 36 : 166 - 183
  • [33] Exploiting data- and thread-level parallelism for image correlation
    Kadidlo, Juergen
    Strey, Alfred
    PROCEEDINGS OF THE 16TH EUROMICRO CONFERENCE ON PARALLEL, DISTRIBUTED AND NETWORK-BASED PROCESSING, 2008, : 407 - +
  • [34] Exploiting speculative thread-level parallelism in data compression applications
    Wang, Shengyue
    Zhai, Antonia
    Yew, Pen-Chung
    LANGUAGES AND COMPILERS FOR PARALLEL COMPUTING, 2007, 4382 : 126 - +
  • [35] On the limitations of compilers to exploit thread-level parallelism in embedded applications
    Islam, Mafijul
    6TH IEEE/ACIS INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION SCIENCE, PROCEEDINGS, 2007, : 60 - 65
  • [36] Parallelization Spectroscopy: Analysis of Thread-level Parallelism in HPC Programs
    Kejariwal, Arun
    Cascaval, Calin
    ACM SIGPLAN NOTICES, 2009, 44 (04) : 293 - 294
  • [37] Compiler-Driven Software Speculation for Thread-Level Parallelism
    Yiapanis, Paraskevas
    Brown, Gavin
    Lujan, Mikel
    ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS, 2016, 38 (02):
  • [38] Programming Matrix Algorithms-by-Blocks for Thread-Level Parallelism
    Quintana-Orti, Gregorio
    Quintana-Orti, Enrique S.
    Van de Geijn, Robert A.
    Van Zee, Field G.
    Chan, Ernie
    ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE, 2009, 36 (03):
  • [39] Relational profiling: Enabling thread-level parallelism in virtual machines
    Heil, T
    Smith, JE
    33RD ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE: MICRO-33 2000, PROCEEDINGS, 2000, : 281 - 290
  • [40] Exploiting the thread-level parallelism for BGP on Multi-core
    Gao, Lei
    Lai, Mingche
    Gong, Zhenghu
    CNSR 2008: PROCEEDINGS OF THE 6TH ANNUAL COMMUNICATION NETWORKS AND SERVICES RESEARCH CONFERENCE, 2008, : 510 - 516