The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study

Cited by: 40
Authors
Tang, Zhenheng [1]
Wang, Yuxin [1]
Wang, Qiang [1]
Chu, Xiaowen [1]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
Keywords
Graphics Processing Units; Dynamic Voltage and Frequency Scaling; Deep Convolutional Neural Network
DOI
10.1145/3307772.3328315
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
Over the past few years, great progress has been made in improving the computing power of general-purpose graphics processing units (GPGPUs), which has fueled the success of deep neural networks (DNNs) in fields such as computer vision and natural language processing. A typical DNN training process repeatedly updates tens of millions of parameters, which not only requires huge computing resources but also consumes significant energy. To train DNNs in a more energy-efficient way, we empirically investigate the impact of GPU Dynamic Voltage and Frequency Scaling (DVFS) on the energy consumption and performance of deep learning. Our experiments cover a wide range of GPU architectures, DVFS settings, and DNN configurations. We observe that, compared to the default core frequency settings of the three tested GPUs, the optimal core frequency reduces energy consumption by 8.7% to 23.1% across different DNN training cases. For inference, the savings range from 19.6% to 26.4%. Our findings suggest that GPU DVFS has great potential to help develop energy-efficient DNN training and inference schemes.
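As a rough illustration of the kind of measurement behind such numbers, the sketch below shows how one could pin a GPU core frequency and estimate the energy of a workload via NVML. It is a minimal sketch, not the paper's harness: it assumes an NVIDIA GPU, the pynvml Python package, and root privileges for setting application clocks; the placeholder workload, sampling interval, and chosen clock are illustrative only.

    # Minimal sketch (not the paper's harness): pin a GPU core (graphics) clock
    # via NVML and estimate the energy of a workload by sampling power draw.
    import threading
    import time

    import pynvml


    def measure_energy(workload, gpu_index=0, interval_s=0.1):
        """Sample power draw in a background thread while workload() runs;
        return an (energy_joules, runtime_seconds) estimate."""
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        samples, stop = [], threading.Event()

        def sampler():
            while not stop.is_set():
                # nvmlDeviceGetPowerUsage reports milliwatts.
                samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
                time.sleep(interval_s)

        t = threading.Thread(target=sampler)
        start = time.time()
        t.start()
        workload()
        stop.set()
        t.join()
        runtime = time.time() - start
        avg_power_w = sum(samples) / max(len(samples), 1)
        return avg_power_w * runtime, runtime


    if __name__ == "__main__":
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)

        # DVFS states exposed by the driver: supported memory clocks, and the
        # supported core (graphics) clocks for one memory clock.
        mem_clock = pynvml.nvmlDeviceGetSupportedMemoryClocks(handle)[0]
        core_clocks = pynvml.nvmlDeviceGetSupportedGraphicsClocks(handle, mem_clock)
        print(f"mem {mem_clock} MHz, core clocks (MHz): {core_clocks}")

        # Pin one core frequency (requires root; an illustrative choice only),
        # run a stand-in workload, and report the energy estimate.
        pynvml.nvmlDeviceSetApplicationsClocks(handle, mem_clock, core_clocks[0])
        energy_j, runtime_s = measure_energy(lambda: time.sleep(5))
        print(f"~{energy_j:.1f} J over {runtime_s:.1f} s at {core_clocks[0]} MHz")

        pynvml.nvmlDeviceResetApplicationsClocks(handle)
        pynvml.nvmlShutdown()

Repeating such a measurement over each supported core clock, with a real training or inference job in place of the sleep, yields the kind of energy-versus-frequency curve from which an energy-optimal frequency can be selected.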
Pages: 315-325
Number of pages: 11
Related Papers
50 records in total
  • [1] A survey and measurement study of GPU DVFS on energy conservation
    Mei, Xinxin
    Wang, Qiang
    Chu, Xiaowen
    DIGITAL COMMUNICATIONS AND NETWORKS, 2017, 3 (02) : 89 - 100
  • [2] An Empirical Study on Energy Disaggregation via Deep Learning
    He, Wan
    Chai, Ying
    PROCEEDINGS OF THE 2016 2ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INDUSTRIAL ENGINEERING (AIIE 2016), 2016, 133 : 338 - 342
  • [3] Performance Impact of DVFS for Molecular Dynamics Simulations on Tesla K40 GPU
    Astsatryan, H.
    Narsisian, W.
    Poghosyan, A.
    Shahinyan, A.
    2018 41ST INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2018, : 854 - 860
  • [4] An Empirical Study on Performance Bugs in Deep Learning Frameworks
    Makkouk, Tarek
    Kim, Dong Jae
    Chen, Tse-Hsun
    2022 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE MAINTENANCE AND EVOLUTION (ICSME 2022), 2022, : 35 - 46
  • [5] Learning Based DVFS for Simultaneous Temperature, Performance and Energy Management
    Shen, Hao
    Lu, Jun
    Qiu, Qinru
    2012 13TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED), 2012, : 747 - 754
  • [6] Performance Evaluation of Deep Learning Frameworks on Embedded GPU
    Fang, Hao
    Lan, Qiang
    Shi, Yang
    Wen, Mei
    2016 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SECURITY (CSIS 2016), 2016, : 200 - 205
  • [7] Empirical performance modeling of GPU kernels using active learning
    Balaprakash, Prasanna
    Rupp, Karl
    Mametjanov, Azamat
    Gramacy, Robert B.
    Hovland, Paul D.
    Wild, Stefan M.
    PARALLEL COMPUTING: ACCELERATING COMPUTATIONAL SCIENCE AND ENGINEERING (CSE), 2014, 25 : 646 - 655
  • [8] An Empirical Study on the Impact of Deep Parameters on Mobile App Energy Usage
    Xu, Qiang
    Davis, James C.
    Hu, Y. Charlie
    Jindal, Abhilash
    2022 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ANALYSIS, EVOLUTION AND REENGINEERING (SANER 2022), 2022, : 844 - 855
  • [9] Single node deep learning frameworks: Comparative study and CPU/GPU performance analysis
    Lerat, Jean-Sebastien
    Mahmoudi, Sidi Ahmed
    Mahmoudi, Said
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023, 35 (14):