POMMEL: Exploring Off-Chip Memory Energy & Power Consumption in Convolutional Neural Network Accelerators

Cited by: 0
|
Authors
Montgomerie-Corcoran, Alexander [1]
Bouganis, Christos-Savvas [1]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Convolutional Neural Networks; Power Modelling; Machine Learning Acceleration;
DOI
10.1109/DSD53832.2021.00073
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Reducing the power and energy consumption of Convolutional Neural Network (CNN) accelerators is becoming an increasingly popular design objective in both cloud and edge settings. To design more efficient accelerator systems, the accelerator architect must understand how different design choices impact both power and energy consumption. The purpose of this work is to enable CNN accelerator designers to explore how design choices affect the memory subsystem in particular, which is a significant contributing component. By considering high-level design parameters of CNN accelerators that affect the memory subsystem, the proposed tool returns power and energy consumption estimates for a range of networks and memory types. This allows the power and energy consumption of the off-chip memory subsystem to be considered earlier within the design process, enabling greater optimisation in the initial design phases. Towards this, the paper introduces POMMEL, an off-chip memory subsystem modelling tool for CNN accelerators, and evaluates it across a range of accelerators, networks, and memory types. Furthermore, using POMMEL, the impact of various state-of-the-art compression and activity reduction schemes on the power and energy consumption of current accelerators is also investigated.
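To make the abstract's idea concrete, the sketch below shows the kind of first-order off-chip traffic and energy estimate that a tool like POMMEL automates from high-level design parameters. The function names, the no-reuse traffic assumption, and the per-access energy values are illustrative placeholders, not POMMEL's actual model or figures from the paper.

```python
def conv_layer_traffic(c_in, c_out, h, w, k, word_bytes=2, bus_bytes=8):
    """Worst-case off-chip bus transactions for one convolutional layer,
    assuming no on-chip reuse: inputs and weights are read once,
    outputs are written once. All sizes in bytes before bus packing."""
    input_bytes = c_in * h * w * word_bytes          # input feature maps
    weight_bytes = c_out * c_in * k * k * word_bytes # filter weights
    output_bytes = c_out * h * w * word_bytes        # output feature maps
    reads = (input_bytes + weight_bytes) // bus_bytes
    writes = output_bytes // bus_bytes
    return reads, writes

def offchip_energy_nj(reads, writes, e_read_nj=5.0, e_write_nj=5.5):
    """First-order energy model: transactions times per-access energy.
    The nJ-per-access constants are placeholders; a real tool would
    derive them from the target memory type (e.g. DDR4, LPDDR4)."""
    return reads * e_read_nj + writes * e_write_nj

# Example: a 3->16 channel, 3x3 convolution on a 32x32 feature map.
reads, writes = conv_layer_traffic(c_in=3, c_out=16, h=32, w=32, k=3)
energy = offchip_energy_nj(reads, writes)
```

Sweeping such parameters (word width, burst size, memory type) per layer is what lets the memory subsystem's power and energy be compared across candidate designs before committing to an implementation.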
Pages: 442-448
Page count: 7