Converting thread-level parallelism to instruction-level parallelism via simultaneous multithreading

Cited by: 75
Authors
Lo, JL
Eggers, SJ
Emer, JS
Levy, HM
Stamm, RL
Tullsen, DM
Affiliations
[1] DIGITAL EQUIPMENT CORP, HUDSON, MA USA
[2] UNIV CALIF SAN DIEGO, DEPT COMP SCI & ENGN, LA JOLLA, CA 92093 USA
Source
ACM TRANSACTIONS ON COMPUTER SYSTEMS | 1997, Vol. 15, No. 3
Keywords
cache interference; instruction-level parallelism; multiprocessors; multithreading; simultaneous multithreading; thread-level parallelism;
DOI
10.1145/263326.263382
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue superscalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel processing styles statically partition processor resources, thus preventing them from adapting to dynamically changing levels of ILP and TLP in a program. With insufficient TLP, processors in an MP will be idle; with insufficient ILP, multiple-issue hardware on a superscalar is wasted. This article explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processor's resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably. By permitting multiple threads to share the processor's functional units simultaneously, the processor can use both ILP and TLP to accommodate variations in parallelism. When a program has only a single thread, all of the SMT processor's resources can be dedicated to that thread; when more TLP exists, this parallelism can compensate for a lack of per-thread ILP. We examine two alternative on-chip parallel architectures for the next generation of processors. We compare SMT and small-scale, on-chip multiprocessors in their ability to exploit both ILP and TLP. First, we identify the hardware bottlenecks that prevent multiprocessors from effectively exploiting ILP. Then, we show that because of its dynamic resource sharing, SMT avoids these inefficiencies and benefits from being able to run more threads on a single processor. The use of TLP is especially advantageous when per-thread ILP is limited. The ease of adding additional thread contexts on an SMT (relative to adding additional processors on an MP) allows simultaneous multithreading to expose more parallelism, further increasing functional unit utilization and attaining a 52% average speedup (versus a four-processor, single-chip multiprocessor with comparable execution resources). This study also addresses an often-cited concern regarding the use of thread-level parallelism or multithreading: interference in the memory system and branch prediction hardware. We find that multiple threads cause interthread interference in the caches and place greater demands on the memory system, thus increasing average memory latencies. By exploiting thread-level parallelism, however, SMT hides these additional latencies, so that they only have a small impact on total program performance. We also find that for parallel applications, the additional threads have minimal effects on branch prediction.
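As an illustration of the ILP/TLP interchange the abstract describes, the C/POSIX-threads sketch below (not code from the paper; thread count and loop body are arbitrary choices) runs several independent dependence-limited recurrences. Each chain by itself exposes almost no instruction-level parallelism, since every iteration waits on the previous one; splitting the work across threads exposes thread-level parallelism that an SMT core could use to fill otherwise-idle issue slots.

/*
 * Illustrative sketch (not from the paper): each thread runs a
 * serial floating-point recurrence, so a single thread offers
 * almost no ILP; NTHREADS independent chains expose TLP instead.
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4              /* hypothetical number of thread contexts */
#define ITERS    (1L << 24)

static double partial[NTHREADS];

static void *chain(void *arg)
{
    long id = (long)arg;
    double x = 1.0 + (double)id;
    for (long i = 0; i < ITERS; i++)
        x = x * 0.999999 + 1.0; /* each iteration depends on the last */
    partial[id] = x;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, chain, (void *)i);

    double sum = 0.0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        sum += partial[i];      /* combine per-thread results */
    }
    printf("sum = %f\n", sum);
    return 0;
}

Compile with cc -O2 -pthread. On a wide-issue core running a single chain, most issue slots go unused each cycle; an SMT processor can issue instructions from all four chains in the same cycle, which is the TLP-for-ILP substitution the abstract describes.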
Pages: 322-354
Page count: 33
Related Papers
50 in total
  • [1] Software Thread Integration for Instruction-Level Parallelism
    So, Won
    Dean, Alexander G.
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2013, 13 (01)
  • [2] Exploiting thread-level and instruction-level parallelism to cluster mass spectrometry data using multicore architectures
    Saeed, Fahad
    Hoffert, Jason D.
    Pisitkun, Trairak
    Knepper, Mark A.
    NETWORK MODELING AND ANALYSIS IN HEALTH INFORMATICS AND BIOINFORMATICS, 2014, 3 (01)
  • [3] Exploiting Thread-level Parallelism Based on Balancing Load for Speculative Multithreading
    Li Yuancheng
    ADVANCES IN MECHATRONICS AND CONTROL ENGINEERING III, 2014, 678 : 8 - 11
  • [4] Scalable instruction-level parallelism
    Jesshope, C
    COMPUTER SYSTEMS: ARCHITECTURES, MODELING, AND SIMULATION, 2004, 3133 : 383 - 392
  • [5] Compilers for instruction-level parallelism
    Schlansker, M
    Conte, TM
    Dehnert, J
    Ebcioglu, K
    Fang, JZ
    Thompson, CL
    COMPUTER, 1997, 30 (12) : 63 - &
  • [6] LIMITS OF INSTRUCTION-LEVEL PARALLELISM
    WALL, DW
    SIGPLAN NOTICES, 1991, 26 (04) : 176 - 188
  • [7] Exploiting Java instruction/thread level parallelism with horizontal multithreading
    Watanabe, KJ
    Chu, WM
    Li, YM
    PROCEEDINGS OF THE 6TH AUSTRALASIAN COMPUTER SYSTEMS ARCHITECTURE CONFERENCE, ACSAC 2001, 2001, 23 (04) : 122 - 129
  • [8] Limits of Instruction-Level Parallelism Capture
    Goossens, Bernard
    Parello, David
    2013 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE, 2013, 18 : 1664 - 1673
  • [9] Workshop 17: Instruction-level parallelism
    Arvind, D.K.
    LECTURE NOTES IN COMPUTER SCIENCE, 1997, 1300 : 1039 - 1042