Tango: A Deep Neural Network Benchmark Suite for Various Accelerators

Cited by: 25
Authors:
Karki, Aajna [1 ]
Keshava, Chethan Palangotu [1 ]
Shivakumar, Spoorthi Mysore [1 ]
Skow, Joshua [1 ]
Hegde, Goutam Madhukeshwar [1 ]
Jeon, Hyeran [1 ]
Affiliation:
[1] San Jose State Univ, Comp Engn Dept, San Jose, CA 95192 USA
Keywords: Deep neural network; Benchmark suite
DOI: 10.1109/ISPASS.2019.00021
Chinese Library Classification: TP3 (Computing technology, computer technology)
Discipline code: 0812
Abstract:
Deep neural networks (DNNs) have proven effective in various computing fields. To provide more efficient computing platforms for DNN applications, it is essential to have evaluation environments that include assorted benchmark workloads. Though a few DNN benchmark suites have recently been released, most of them require installing proprietary DNN libraries or resource-intensive DNN frameworks, which are hard to run on resource-limited mobile platforms or architecture simulators. To provide a more scalable evaluation environment, we propose a new DNN benchmark suite that can run on any platform that supports CUDA or OpenCL. The proposed benchmark suite includes the five most widely used convolutional neural networks and two recurrent neural networks. We provide architectural statistics of these networks while running them on an architecture simulator, a server GPU, a mobile GPU, and a mobile FPGA.
Pages: 137-138 (2 pages)