ParaML: A Polyvalent Multicore Accelerator for Machine Learning

Cited by: 3
Authors
Zhou, Shengyuan [1 ,2 ]
Guo, Qi [1 ,3 ]
Du, Zidong [1 ,3 ]
Liu, Daofu [1 ,3 ]
Chen, Tianshi [1 ,3 ,4 ]
Li, Ling [5 ]
Liu, Shaoli [1 ,3 ]
Zhou, Jinhong [1 ,3 ]
Temam, Olivier [6 ]
Feng, Xiaobing [7 ]
Zhou, Xuehai [8 ]
Chen, Yunji [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Intelligent Processor Res Ctr, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[3] Cambricon Technol Corp Ltd, Beijing 100191, Peoples R China
[4] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
[5] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[6] Inria Saclay, F-91120 Palaiseau, France
[7] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
[8] Univ Sci & Technol China, Hefei 230026, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Neural networks; Machine learning; Testing; Support vector machines; Linear regression; Computers; Computer architecture; Accelerator; machine learning (ML) techniques; multicore accelerator;
DOI
10.1109/TCAD.2019.2927523
CLC Classification
TP3 [computing technology; computer technology];
Discipline Code
0812;
Abstract
In recent years, machine learning (ML) techniques have proven to be powerful tools in various emerging applications. Traditionally, ML techniques are processed on general-purpose CPUs and GPUs, whose energy efficiency is limited by the excessive flexibility they must support. Hardware accelerators are an efficient alternative to CPUs/GPUs, but most accommodate only a single ML technique (or family of techniques). Since different problems may require different ML techniques, such accelerators may achieve poor learning accuracy or even be inapplicable. In this paper, we present ParaML, a polyvalent accelerator architecture integrating multiple processing cores, which accommodates ten representative ML techniques: k-means, k-nearest neighbors (k-NN), naive Bayes (NB), support vector machine (SVM), linear regression (LR), classification tree (CT), deep neural network (DNN), learning vector quantization (LVQ), Parzen window (PW), and principal component analysis (PCA). Benefiting from a thorough analysis of the computational primitives and locality properties of these ML techniques, the single-core ParaML can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm² while consuming only 596 mW, as estimated with ICC (area) and PrimeTime PX (power) on the post-synthesis netlist. Compared with the NVIDIA K20M GPU (28-nm process), the single-core ParaML (65-nm process) is 1.21x faster and reduces energy consumption by 137.93x. We also compare the single-core ParaML with other accelerators. Compared with PRINS, the single-core ParaML achieves 72.09x and 2.57x energy benefits for k-NN and k-means, respectively, and speeds up each k-NN query by 44.76x. Compared with EIE, it achieves a 5.02x speedup and a 4.97x energy benefit with 11.62x less area when evaluated on a dense DNN. Compared with the TPU, it achieves 2.45x better power efficiency (5647 GOP/W versus 2300 GOP/W) with 321.36x less area. Compared with the single-core version, the 8-core ParaML further improves performance by up to 3.98x, with an area of 13.44 mm² and a power of 2036 mW.
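The polyvalence claim rests on the observation that the ten supported techniques decompose into a small set of shared computational primitives (e.g., dot products and distance computations) with similar locality behavior, so one datapath plus technique-specific control can serve them all. As an illustration only (a minimal NumPy sketch with hypothetical function names, not the ParaML hardware or the paper's actual primitive decomposition), the following shows how k-means assignment and k-NN queries both reduce to the same squared-distance primitive, itself expressible as multiply-accumulate operations:

```python
import numpy as np

def pairwise_sq_dist(X, C):
    """Squared Euclidean distances between rows of X and rows of C.

    The expansion ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2 turns distance
    computation into the multiply-accumulate pattern that a MAC-based
    accelerator datapath executes natively.
    """
    return (
        (X * X).sum(axis=1, keepdims=True)  # ||x||^2, column vector
        - 2.0 * X @ C.T                     # cross terms: one matrix multiply
        + (C * C).sum(axis=1)               # ||c||^2, broadcast along rows
    )

def kmeans_assign(X, centroids):
    # k-means assignment step: index of the nearest centroid per sample.
    return pairwise_sq_dist(X, centroids).argmin(axis=1)

def knn_query(X_train, x_query, k):
    # k-NN query: indices of the k nearest training samples
    # (if x_query is in X_train, it appears as its own nearest neighbor).
    d = pairwise_sq_dist(x_query[None, :], X_train)[0]
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
print(kmeans_assign(X, X[:3]))   # 100 cluster labels
print(knn_query(X, X[0], k=5))   # 5 neighbor indices
```

Because the dominant cost in both kernels is the same multiply-accumulate matrix product, a single hardware functional unit can serve both techniques; the paper extends this kind of primitive sharing across all ten supported ML techniques.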
Pages: 1764-1777
Page count: 14