Optimized On-Chip-Pipelining for Memory-Intensive Computations on Multi-Core Processors with Explicit Memory Hierarchy

Cited by: 0
Authors:
Keller, Joerg [1 ]
Kessler, Christoph W. [2 ]
Hulten, Rikard [2 ]
Affiliations:
[1] FernUniv, Hagen, Germany
[2] Linkopings Univ, Linkoping, Sweden
Keywords:
parallel merge sort; on-chip pipelining; multicore computing; task mapping; streaming computations; ALGORITHMS;
DOI: none
Chinese Library Classification: TP31 [Computer Software]
Subject classification codes: 081202; 0835
Abstract
Limited bandwidth to off-chip main memory tends to be a performance bottleneck in chip multiprocessors, and this will become even more problematic as the number of cores increases. Especially for streaming computations, where the ratio of computational work to memory transfer is low, transforming the program into more memory-efficient code is an important optimization. On-chip pipelining reorganizes the computation so that partial results of subtasks are forwarded immediately between cores over the high-bandwidth on-chip network, reducing the volume of main-memory accesses and thereby improving the throughput of memory-intensive computations. At the same time, throughput is also constrained by the limited amount of on-chip memory available for buffering forwarded data. By optimizing the mapping of tasks to cores, balancing a trade-off between load balancing, buffer memory consumption, and communication load on the on-chip network, a larger buffer size can be used, resulting in less DMA communication and scheduling overhead. In this article, we consider parallel mergesort as a representative memory-intensive application, focusing on the global merging phase, which dominates the overall sorting time for larger data sets. We work out the technical issues of applying the on-chip pipelining technique and present several algorithms for optimized mapping of merge trees to the processor cores. We also demonstrate how some of these algorithms can be used to map other streaming task graphs. We describe an implementation of pipelined parallel mergesort for the Cell Broadband Engine, which serves as an exemplary target.
We evaluate experimentally the influence of buffer sizes and mapping optimizations, and show that, for realistic problem sizes, optimized on-chip pipelining speeds up merging by up to 70% on QS20 and 143% on PS3 compared to the merge phase of CellSort, previously the fastest mergesort implementation on Cell.
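As a rough illustration of the on-chip pipelining idea described in the abstract (not the authors' Cell implementation, which uses DMA transfers between SPE local stores), the following Python sketch models each node of a merge tree as a generator that forwards every merged element as soon as it is determined. No intermediate merged run is ever fully materialized, analogous to forwarding partial results between cores instead of writing them back to main memory. All names are illustrative.

```python
def merge_node(left, right):
    """Merge two sorted streams, forwarding each element as soon as it
    is determined (models a core forwarding partial results on-chip)."""
    l = next(left, None)
    r = next(right, None)
    while l is not None and r is not None:
        if l <= r:
            yield l
            l = next(left, None)
        else:
            yield r
            r = next(right, None)
    while l is not None:          # drain whichever input remains
        yield l
        l = next(left, None)
    while r is not None:
        yield r
        r = next(right, None)

def merge_tree(runs):
    """Build a binary merge tree over sorted runs; the whole tree is a
    pipeline of generators, each holding only O(1) buffered elements."""
    streams = [iter(run) for run in runs]
    while len(streams) > 1:
        streams = [merge_node(streams[i], streams[i + 1])
                   if i + 1 < len(streams) else streams[i]
                   for i in range(0, len(streams), 2)]
    return streams[0]

if __name__ == "__main__":
    runs = [[1, 5, 9], [2, 6], [3, 7, 8], [0, 4]]
    print(list(merge_tree(runs)))  # fully sorted sequence
```

In the paper's setting the per-node buffers are fixed-size chunks moved by DMA rather than single elements, and the mapping of tree nodes to cores is what the optimization algorithms decide; the sketch only conveys the streaming dataflow.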
Pages: 1987-2023 (37 pages)