Parallelism versus Memory Allocation in Pipelined Router Forwarding Engines

Citations: 0
Authors
Fan Chung
Ronald Graham
Jia Mao
George Varghese
Affiliations
[1] Department of Mathematics, University of California, San Diego
[2] Department of Computer Science and Engineering, University of California, San Diego
Source
Theory of Computing Systems, 2006, 39(6)
Keywords
Online Algorithm; Memory Allocation; Memory Bank; Weak Edge; Memory Request
DOI
Not available
Abstract
A crucial problem that must be solved is the allocation of memory to processors in a pipeline. Ideally, the processor memories should be completely separate (i.e., one-port memories) in order to minimize contention; however, this also minimizes memory sharing. Ideal sharing is obtained by using a single memory shared by all processors, but this maximizes contention. Instead, in this paper we show that perfect memory sharing can be achieved with a collection of two-port memories, as long as the number of processors is less than the number of memories. We show that the allocation problem is NP-complete in general, but admits a fast approximation algorithm that comes within a factor of $\frac{3}{2}$ asymptotically. The proof uses a new bin packing model, which is interesting in its own right. Further, for important special cases that arise in practice, a more sophisticated modification of this approximation algorithm is in fact optimal. We also discuss the online memory allocation problem and present fast online algorithms that provide good memory utilization while allowing fast updates.
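The $\frac{3}{2}$-approximation above is analyzed in the paper's own bin packing model, which is not reproduced here. As a loose, purely illustrative sketch of the classical bin-packing heuristics such results build on, the following Python snippet packs hypothetical per-processor memory demands into fixed-capacity banks using first-fit decreasing; the function name, demand values, and bank_capacity are assumptions made for illustration and are not the paper's algorithm or data.

```python
# Illustrative sketch only: a generic first-fit-decreasing bin-packing
# heuristic for assigning per-stage memory demands to fixed-capacity banks.
# This is NOT the approximation algorithm analyzed in the paper.

def pack_first_fit_decreasing(demands, bank_capacity):
    """Place each demand into the first bank with enough free space.

    demands       -- per-processor memory requirements (hypothetical units)
    bank_capacity -- capacity of each memory bank (hypothetical units)
    Returns a list of banks, each a list of the demands assigned to it.
    """
    free = []      # remaining capacity of each opened bank
    contents = []  # demands placed in each opened bank
    for d in sorted(demands, reverse=True):
        if d > bank_capacity:
            raise ValueError(f"demand {d} exceeds bank capacity {bank_capacity}")
        for i, space in enumerate(free):
            if d <= space:
                free[i] -= d
                contents[i].append(d)
                break
        else:
            # No existing bank fits this demand: open a new bank.
            free.append(bank_capacity - d)
            contents.append([d])
    return contents

if __name__ == "__main__":
    # Hypothetical demands and bank size, chosen only to show the output shape.
    print(pack_first_fit_decreasing([7, 5, 4, 3, 2, 2], bank_capacity=10))
```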
Pages: 829-849
Number of pages: 20
Related Papers
12 records in total
  • [1] Parallelism versus memory allocation in pipelined router forwarding engines
    Chung, Fan
    Graham, Ronald
    Mao, Jia
    Varghese, George
    [J]. THEORY OF COMPUTING SYSTEMS, 2006, 39 (06) : 829 - 849
  • [2] Fast incremental updates for pipelined forwarding engines
    Basu, A
    Narlikar, G
    [J]. IEEE-ACM TRANSACTIONS ON NETWORKING, 2005, 13 (03) : 690 - 703
  • [3] Fast incremental updates for pipelined forwarding engines
    Basu, A
    Narlikar, G
    [J]. IEEE INFOCOM 2003: THE CONFERENCE ON COMPUTER COMMUNICATIONS, VOLS 1-3, PROCEEDINGS, 2003, : 64 - 74
  • [4] Design and buffer sizing of TCAM-based pipelined forwarding engines
    Li, Yufeng
    Qiu, Han
    Gu, Xiaozhuo
    Lan, Julong
    Yang, Jianwen
    [J]. 21ST INTERNATIONAL CONFERENCE ON ADVANCED NETWORKING AND APPLICATIONS, PROCEEDINGS, 2007, : 769 - 776
  • [5] Pipelined Model Parallelism: Complexity Results and Memory Considerations
    Beaumont, Olivier
    Eyraud-Dubois, Lionel
    Shilova, Alena
    [J]. EURO-PAR 2021: PARALLEL PROCESSING, 2021, 12820 : 183 - 198
  • [6] Reconfigurable memory architecture for scalable IP forwarding engines
    Akhbarizadeh, M
    Nourani, M
    [J]. ELEVENTH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS, PROCEEDINGS, 2002, : 432 - 437
  • [7] Analysis of memory demand for forwarding engines in core routers
    Li, Yu-Feng
    Qiu, Han
    Lan, Ju-Long
    Yang, Jian-Wen
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2008, 36 (07): : 1421 - 1428
  • [8] MadPipe: Memory Aware Dynamic Programming Algorithm for Pipelined Model Parallelism
    Beaumont, Olivier
    Eyraud-Dubois, Lionel
    Shilova, Alena
    [J]. 2022 IEEE 36TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW 2022), 2022, : 1063 - 1073
  • [9] ParaDiMe: A Distributed Memory FPGA Router Based on Speculative Parallelism and Path Encoding
    Hoo, Chin Hau
    Kumar, Akash
    [J]. 2017 IEEE 25TH ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES (FCCM 2017), 2017, : 172 - 179
  • [10] Multi-pipelined and memory-efficient packet classification engines on FPGAs
    Erdem, Oguzhan
    Carus, Aydin
    [J]. COMPUTER COMMUNICATIONS, 2015, 67 : 75 - 91