Viable architectures for high-performance computing

Cited by: 2
Authors
Ziavras, SG [1]
Wang, Q
Papathanasiou, P
Affiliations
[1] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[2] New Jersey Inst Technol, Dept Comp Sci, Newark, NJ 07102 USA
[3] Dataline Comp Inst, Piraeus 18900, Greece
Source
COMPUTER JOURNAL | 2003, Vol. 46, No. 01
Keywords
DOI
10.1093/comjnl/46.1.36
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Existing interprocessor connection networks are often plagued by poor topological properties that result in large memory latencies for distributed shared-memory (DSM) computers or multicomputers. On the other hand, scalable networks with very good topological properties are often impossible to build because of their prohibitively high very large scale integration (VLSI) (e.g. wiring) complexity. Such a network is the generalized hypercube (GH). The GH supports full connectivity of all of its nodes in each dimension and is characterized by outstanding topological properties. Also, low-dimensional GHs have very large bisection widths. We present here the class of highly-overlapping windows (HOWs) networks, which achieve lower complexity than GHs with comparable performance and better scalability. HOWs are obtained from GHs by uniformly removing edges to produce feasible systems of lower wiring complexity. The resulting systems contain numerous highly-overlapping GHs of smaller size. The GH, the binary hypercube and the mesh all belong to this new class of interconnections. In practical cases, HOWs have higher bisection width than tori with similar node and channel costs. HOWs also have a very large degree of fault tolerance. This paper focuses on 2-D HOW systems. We analyze the hardware cost of HOWs, present graph embeddings and communications algorithms for HOWs, carry out performance comparisons with binary hypercubes and GHs, and simulate HOWs under heavy communication loads. Our results show the suitability of HOWs for very-high-performance computing.
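The abstract's description of the GH (full connectivity of all nodes in each dimension) can be made concrete with a short sketch. The standard GH model represents nodes as mixed-radix tuples, with two nodes adjacent iff they differ in exactly one digit; the function names below are illustrative, not from the paper, and the sketch does not model the HOW edge-removal construction itself.

```python
def gh_neighbors(node, radices):
    """All neighbors of `node` in a generalized hypercube GH(radices):
    nodes are mixed-radix tuples, and two nodes are adjacent iff they
    differ in exactly one digit (full connectivity per dimension)."""
    nbrs = []
    for i, r in enumerate(radices):
        for v in range(r):
            if v != node[i]:
                nbrs.append(node[:i] + (v,) + node[i + 1:])
    return nbrs

def gh_properties(radices):
    """Node count, node degree, and diameter of GH(radices)."""
    n_nodes = 1
    for r in radices:
        n_nodes *= r
    degree = sum(r - 1 for r in radices)  # an edge to every other digit value in each dimension
    diameter = len(radices)               # one hop corrects each digit
    return n_nodes, degree, diameter
```

With all radices equal to 2 this reduces to the binary hypercube, consistent with the abstract's remark that the binary hypercube (like the GH and the mesh) is a member of the HOW class.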
Pages: 36-54 (19 pages)
Related Papers (50 items)
  • [41] High-Performance Computing for Defense
    Davis, Larry P.
    Henry, Cray J.
    Campbell, Roy L., Jr.
    Ward, William A., Jr.
    COMPUTING IN SCIENCE & ENGINEERING, 2007, 9 (06) : 35 - 44
  • [42] Optical high-performance computing
    [authors not listed] Fisk University, Nashville, TN, United States
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A: OPTICS AND IMAGE SCIENCE, AND VISION, 2008, 25 (09)
  • [43] The marketplace of high-performance computing
    Strohmaier, E
    Dongarra, JJ
    Meuer, HW
    Simon, HD
    PARALLEL COMPUTING, 1999, 25 (13-14) : 1517 - 1544
  • [44] High-performance computing for vision
    Wang, CL
    Bhat, PB
    Prasanna, VK
    PROCEEDINGS OF THE IEEE, 1996, 84 (07) : 931 - 946
  • [45] Trends in high-performance computing
    Dongarra, J
    IEEE CIRCUITS & DEVICES, 2006, 22 (01): 22 - 27
  • [46] Thoughts on high-performance computing
    Yang, Xuejun
    NATIONAL SCIENCE REVIEW, 2014, 1 (03) : 332 - 333
  • [47] Productivity in high-performance computing
    Sterling, Thomas
    Dekate, Chirag
    ADVANCES IN COMPUTERS, VOL 72: HIGH PERFORMANCE COMPUTING, 2008, 72 : 101 - 134
  • [48] High-performance computing - An overview
    Marksteiner, P
    COMPUTER PHYSICS COMMUNICATIONS, 1996, 97 (1-2) : 16 - 35
  • [49] High-Performance Computing with TeraStat
    Bompiani, Edoardo
    Petrillo, Umberto Ferraro
    Lasinio, Giovanna Jona
    Palini, Francesco
    2020 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2020, : 499 - 506
  • [50] HIGH-PERFORMANCE DISTRIBUTED COMPUTING
    RAGHAVENDRA, CS
    CONCURRENCY-PRACTICE AND EXPERIENCE, 1994, 6 (04): 231 - 233