Energy Complexity of Convolutional Neural Networks

Cited by: 1
Authors
Sima, Jiri [1 ]
Vidnerova, Petra [1 ]
Mrazek, Vojtech [2 ]
Affiliations
[1] Czech Acad Sci, Inst Comp Sci, Prague 8, Czech Republic
[2] Brno Univ Technol, Fac Informat Technol, Brno 61200, Czech Republic
Keywords
Complex networks - Convolution - Convolutional neural networks - Data flow analysis - Energy efficiency - Low power electronics - Software testing - Statistical tests;
DOI
10.1162/neco_a_01676
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The energy efficiency of hardware implementations of convolutional neural networks (CNNs) is critical to their widespread deployment in low-power mobile devices. Recently, a number of methods have been proposed for providing energy-optimal mappings of CNNs onto diverse hardware accelerators. Their estimated energy consumption is related to specific implementation details and hardware parameters, which does not allow for machine-independent exploration of CNN energy measures. In this letter, we introduce a simplified theoretical energy complexity model for CNNs, based on only a two-level memory hierarchy that captures asymptotically all important sources of energy consumption for different CNN hardware implementations. In this model, we derive a simple energy lower bound and calculate the energy complexity of evaluating a CNN layer for two common data flows, providing corresponding upper bounds. According to statistical tests, the theoretical energy upper and lower bounds we present fit asymptotically very well with the real energy consumption of CNN implementations on the Simba and Eyeriss hardware platforms, estimated by the Timeloop/Accelergy program, which validates the proposed energy complexity model for CNNs.
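The two-level memory-hierarchy model summarized above lends itself to a back-of-the-envelope calculation: total energy splits into a compute term (one unit per multiply-accumulate) and a data-movement term (off-chip transfers, which dominate per-access cost). The sketch below illustrates this style of accounting for a single convolutional layer; the per-operation energy constants (`e_mac`, `e_dram`) and the exact cost formula are hypothetical placeholders for illustration, not the bounds derived in the letter.

```python
# Illustrative two-level (DRAM + on-chip buffer) energy estimate for one
# convolutional layer. Constants are hypothetical, chosen only to reflect
# that a DRAM access costs orders of magnitude more energy than a MAC.

def conv_layer_energy(h_out, w_out, c_in, c_out, k,
                      e_mac=1.0, e_dram=200.0):
    """Return (macs, min_dram_transfers, energy_estimate) for a k x k
    convolution with unit stride and no padding (assumed for simplicity)."""
    # Number of multiply-accumulate operations in the layer.
    macs = h_out * w_out * c_out * c_in * k * k

    # Every input activation, weight, and output value must cross the
    # DRAM/on-chip boundary at least once, giving a simple lower bound
    # on off-chip data transfers.
    h_in = h_out + k - 1
    w_in = w_out + k - 1
    inputs = h_in * w_in * c_in
    weights = k * k * c_in * c_out
    outputs = h_out * w_out * c_out
    transfers = inputs + weights + outputs

    # Two-level model: compute term + data-movement term.
    energy = e_mac * macs + e_dram * transfers
    return macs, transfers, energy

# Example: a 3x3 layer with 64 input and 64 output channels on a 56x56 map.
macs, transfers, energy = conv_layer_energy(56, 56, 64, 64, 3)
print(macs, transfers, energy)
```

Even in this toy model, the data-movement term is comparable to the compute term despite involving far fewer values, which is why data-flow choice (how tiles are reused on chip) governs the upper bounds studied in the letter.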
Pages: 1601-1625 (25 pages)