POMMEL: Exploring Off-Chip Memory Energy & Power Consumption in Convolutional Neural Network Accelerators

Cited by: 0
Authors
Montgomerie-Corcoran, Alexander [1 ]
Bouganis, Christos-Savvas [1 ]
Institutions
[1] Imperial Coll London, Dept Elect & Elect Engn, London, England
Funding
Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Convolutional Neural Networks; Power Modelling; Machine Learning Acceleration;
DOI
10.1109/DSD53832.2021.00073
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Reducing the power and energy consumption of Convolutional Neural Network (CNN) accelerators is an increasingly important design objective in both cloud and edge settings. To design more efficient accelerator systems, the architect must understand how different design choices impact both power and energy consumption. The purpose of this work is to enable CNN accelerator designers to explore how design choices affect the memory subsystem in particular, which is a significant contributing component. By considering high-level design parameters of CNN accelerators that affect the memory subsystem, the proposed tool returns power and energy consumption estimates for a range of networks and memory types. This allows the power and energy of the off-chip memory subsystem to be considered earlier within the design process, enabling greater optimisation in the initial design phases. Towards this, the paper introduces POMMEL, an off-chip memory subsystem modelling tool for CNN accelerators, and evaluates it across a range of accelerators, networks, and memory types. Furthermore, using POMMEL, the impact of various state-of-the-art compression and activity reduction schemes on the power and energy consumption of current accelerators is also investigated.
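The abstract describes estimating off-chip memory power and energy from high-level accelerator parameters. A minimal sketch of how such an estimate can be composed is shown below; the function name, interface, and all per-access energy figures are illustrative assumptions for this sketch, not POMMEL's actual model or API (which additionally accounts for memory type, compression, and activity reduction schemes).

```python
# Hypothetical off-chip memory energy model in the spirit of the paper's
# description: dynamic energy scales with the number of memory accesses,
# while static (background) energy scales with runtime. All names and
# energy values here are illustrative assumptions, not POMMEL's interface.

def memory_energy_nj(reads, writes, e_read_nj, e_write_nj,
                     static_mw, runtime_ms):
    """Return total off-chip memory energy in nanojoules.

    dynamic energy = accesses * per-access energy
    static  energy = background power * runtime
    """
    dynamic = reads * e_read_nj + writes * e_write_nj
    static = static_mw * runtime_ms * 1e3  # mW * ms = uJ; *1e3 -> nJ
    return dynamic + static

# Example: one layer's feature-map traffic with assumed (illustrative)
# per-access energies and background power for a DDR-like memory.
total = memory_energy_nj(reads=1_000_000, writes=200_000,
                         e_read_nj=5.0, e_write_nj=6.0,
                         static_mw=50.0, runtime_ms=10.0)
```

Separating dynamic and static terms in this way is what makes compression and activity reduction schemes visible in the estimate: they reduce the access counts (and hence the dynamic term) without changing the background power.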
Pages: 442 - 448
Page count: 7
Related Papers
50 records in total
  • [21] GShuttle: Optimizing Memory Access Efficiency for Graph Convolutional Neural Network Accelerators
    Jia-Jun Li
    Ke Wang
    Hao Zheng
    Ahmed Louri
    Journal of Computer Science and Technology, 2023, 38 : 115 - 127
  • [22] Optimizing Off-Chip Memory Access Costs in Low Power MPEG-4 Decoder
    Habli, Haitham
    Ersfolk, Johan
    Lilius, Johan
    Westerlund, Tomi
    Nurmi, Jari
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION SYSTEMS (ICICS'12), 2012,
  • [23] NOVELLA: Nonvolatile Last-Level Cache Bypass for Optimizing Off-Chip Memory Energy
    Bagchi, Aritra
    Rishabh, Ohm
    Panda, Preeti Ranjan
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (11) : 3913 - 3924
  • [24] Low-power, transparent optical network interface for high bandwidth off-chip interconnects
    Liboiron-Ladouceur, Odile
    Wang, Howard
    Garg, Ajay S.
    Bergman, Keren
    OPTICS EXPRESS, 2009, 17 (08): : 6550 - 6561
  • [25] DEF: Differential Encoding of Featuremaps for Low Power Convolutional Neural Network Accelerators
    Montgomerie-Corcoran, Alexander
    Bouganis, Christos-Savvas
    2021 26TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2021, : 703 - 708
  • [26] EcoFlow: Efficient Convolutional Dataflows on Low-Power Neural Network Accelerators
    Orosa, Lois
    Koppula, Skanda
    Umuroglu, Yaman
    Kanellopoulos, Konstantinos
    Gomez-Luna, Juan
    Blott, Michaela
    Vissers, Kees
    Mutlu, Onur
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (09) : 2275 - 2289
  • [27] Detection of Trojans Using a Combined Ring Oscillator Network and Off-Chip Transient Power Analysis
    Zhang, Xuehui
    Ferraiuolo, Andrew
    Tehranipoor, Mohammad
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2013, 9 (03)
  • [28] Refresh Triggered Computation: Improving the Energy Efficiency of Convolutional Neural Network Accelerators
    Jafri, Syed M. A. H.
    Hassan, Hasan
    Hemani, Ahmed
    Mutlu, Onur
    ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2021, 18 (01)
  • [29] Computational Resource Consumption in Convolutional Neural Network Training – A Focus on Memory
    Torres L.A.
    Barrios C.J.
    Denneulin Y.
    Supercomputing Frontiers and Innovations, 2021, 8 (01) : 45 - 61
  • [30] Improving off-chip memory energy behavior in a multi-processor, multi-bank environment
    De La Luz, V
    Kandemir, M
    Sezer, U
    LANGUAGES AND COMPILERS FOR PARALLEL COMPUTING, 2003, 2624 : 100 - 114