Energy-Efficient Reconfigurable Cache Architectures for Accelerator-Enabled Embedded Systems

Cited by: 0
Authors
Farmahini-Farahani, Amin [1]
Kim, Nam Sung [1]
Morrow, Katherine [1]
Affiliations
[1] University of Wisconsin-Madison, Department of Electrical and Computer Engineering, Madison, WI 53706, USA
Keywords
PERFORMANCE; POWER
DOI
Not available
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
High-performance embedded systems often include one or more embedded processors tightly coupled with more specialized accelerators. These accelerators improve both performance and energy efficiency because they are specialized for specific (or specific classes of) computations. Data communication between the accelerator and memory, however, is a potential bottleneck for both performance and energy efficiency. In this paper, we compare and evaluate, for the first time, the impact of L1 data cache design on performance and energy consumption of embedded processor-accelerator systems with shared memory. For this evaluation, we consider data cache design parameters such as size, associativity, and port count, as well as L1 cache sharing between the processor and accelerator. We demonstrate the potential of configurable caches to exploit diversity in cache requirements across hybrid software/hardware applications to significantly improve energy efficiency while maintaining high performance. Guided by these studies, we propose two techniques for improving the energy efficiency of the cache hierarchy in processor-accelerator systems. The first technique adds configurability to the accelerator-cache interface to allow the accelerator to either share the processor's L1 data cache or use its own private L1 cache. The second technique modifies the L1 cache structure to provide a configurable tradeoff between bandwidth (number of ports) and capacity. Our simulation results show that the first and second techniques improve cache-hierarchy energy efficiency by up to 64% and 33%, respectively, over that of non-configurable caches.
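The abstract describes two reconfiguration knobs: whether the accelerator shares the processor's L1 data cache or uses a private one, and a bandwidth-versus-capacity (port count versus size) tradeoff within the L1. The sketch below is a minimal, illustrative Python model of how a design-space search over such configurations might pick the lowest-energy option under a performance budget. All constants, the "halve capacity per extra port" rule, and the sharing-contention factor are assumptions for illustration only, not mechanisms or numbers from the paper.

```python
# Illustrative sketch only: toy design-space walk over the two configurable
# options named in the abstract. Energy/latency constants are placeholders.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class CacheConfig:
    shared_l1: bool   # accelerator shares the processor's L1D vs. private L1
    ports: int        # number of L1 ports (more bandwidth)
    capacity_kb: int  # effective capacity under this port configuration

def candidate_configs():
    """Enumerate hypothetical configurations of a reconfigurable L1."""
    # Assumption: doubling ports halves usable capacity (bandwidth/capacity tradeoff).
    for shared, ports in product([True, False], [1, 2, 4]):
        yield CacheConfig(shared_l1=shared, ports=ports, capacity_kb=32 // ports)

def estimate_energy_and_time(cfg, accesses, miss_rate):
    """Rough per-workload model with made-up constants (arbitrary units)."""
    hit_energy = 0.05 * cfg.ports        # assumed: more ports -> costlier hits
    miss_energy = 1.0                    # assumed L2/memory access energy
    hit_time, miss_time = 1.0, 20.0
    # Assumed: smaller capacity and sharing contention raise the miss rate.
    eff_miss = miss_rate * (64 / cfg.capacity_kb) * (1.2 if cfg.shared_l1 else 1.0)
    energy = accesses * (hit_energy + eff_miss * miss_energy)
    time = accesses / cfg.ports * (hit_time + eff_miss * miss_time)
    return energy, time

def pick_config(accesses=1_000_000, miss_rate=0.02, time_budget=2.5e6):
    """Return the lowest-estimated-energy configuration meeting the time budget."""
    feasible = []
    for cfg in candidate_configs():
        energy, time = estimate_energy_and_time(cfg, accesses, miss_rate)
        if time <= time_budget:
            feasible.append((energy, cfg))
    return min(feasible, key=lambda t: t[0])[1] if feasible else None

if __name__ == "__main__":
    print(pick_config())
```

Running the script prints whichever hypothetical configuration minimizes estimated energy within the stated time budget; the paper's actual selection policy and evaluation methodology are given in the full text.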
Pages: 211-220
Page count: 10