Tango: A Deep Neural Network Benchmark Suite for Various Accelerators

Cited by: 25
Authors
Karki, Aajna [1]
Keshava, Chethan Palangotu [1]
Shivakumar, Spoorthi Mysore [1]
Skow, Joshua [1]
Hegde, Goutam Madhukeshwar [1]
Jeon, Hyeran [1]
Affiliation
[1] San Jose State Univ, Comp Engn Dept, San Jose, CA 95192 USA
Keywords
Deep neural network; Benchmark suite
DOI
10.1109/ISPASS.2019.00021
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural networks (DNNs) have proven effective in a variety of computing fields. To provide more efficient computing platforms for DNN applications, it is essential to have evaluation environments that include assorted benchmark workloads. Though a few DNN benchmark suites have been released recently, most of them require installing proprietary DNN libraries or resource-intensive DNN frameworks, which are hard to run on resource-limited mobile platforms or architecture simulators. To provide a more scalable evaluation environment, we propose a new DNN benchmark suite that can run on any platform that supports CUDA and OpenCL. The proposed benchmark suite includes five of the most widely used convolutional neural networks and two recurrent neural networks. We provide architectural statistics of these networks while running them on an architecture simulator, a server- and a mobile-GPU, and a mobile FPGA.
Pages: 137-138
Number of pages: 2
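Illustrative sketch (not from the paper): because the Tango workloads are written in plain CUDA/OpenCL rather than on top of a DNN framework, per-layer timing can be collected with nothing more than the CUDA runtime, which is what lets such a suite run on simulators and resource-limited devices. The toy kernel, names, and buffer sizes below are hypothetical stand-ins for one layer's work and only demonstrate this kind of framework-free measurement.

// Hypothetical example, not part of this record: timing one DNN-style kernel
// with CUDA events, using only the CUDA runtime (no DNN library or framework).
#include <cstdio>
#include <cuda_runtime.h>

// Toy stand-in for one layer's arithmetic: out = w * in + b over a flat buffer.
__global__ void fma_layer(const float* in, float* out, float w, float b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = w * in[i] + b;
}

int main()
{
    const int n = 1 << 20;                  // 1M activations (arbitrary size)
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float)); // deterministic input values

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    fma_layer<<<(n + 255) / 256, 256>>>(d_in, d_out, 0.5f, 1.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop); // elapsed GPU time in milliseconds
    printf("layer kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

This compiles with nvcc and no other dependencies; an OpenCL counterpart would use a command queue created with CL_QUEUE_PROFILING_ENABLE and read timestamps via clGetEventProfilingInfo.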