ParaML: A Polyvalent Multicore Accelerator for Machine Learning

Cited by: 3
Authors
Zhou, Shengyuan [1 ,2 ]
Guo, Qi [1 ,3 ]
Du, Zidong [1 ,3 ]
Liu, Daofu [1 ,3 ]
Chen, Tianshi [1 ,3 ,4 ]
Li, Ling [5 ]
Liu, Shaoli [1 ,3 ]
Zhou, Jinhong [1 ,3 ]
Temam, Olivier [6 ]
Feng, Xiaobing [7 ]
Zhou, Xuehai [8 ]
Chen, Yunji [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Intelligent Processor Res Ctr, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[3] Cambricon Technol Corp Ltd, Beijing 100191, Peoples R China
[4] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
[5] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[6] Inria Saclay, F-91120 Palaiseau, France
[7] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
[8] Univ Sci & Technol China, Hefei 230026, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Neural networks; Machine learning; Testing; Support vector machines; Linear regression; Computers; Computer architecture; Accelerator; machine learning (ML) techniques; multicore accelerator;
DOI
10.1109/TCAD.2019.2927523
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology];
Discipline code
0812;
Abstract
In recent years, machine learning (ML) techniques have proven to be powerful tools in various emerging applications. Traditionally, ML techniques are processed on general-purpose CPUs and GPUs, but the energy efficiency of these platforms is limited by their excessive support for flexibility. Hardware accelerators are an efficient alternative to CPUs/GPUs, but they are still limited in that each typically accommodates only a single ML technique (family). Different problems, however, may require different ML techniques, so such accelerators may achieve poor learning accuracy or even be ineffective. In this paper, we present ParaML, a polyvalent accelerator architecture integrated with multiple processing cores, which accommodates ten representative ML techniques: k-means, k-nearest neighbors (k-NN), naive Bayes (NB), support vector machine (SVM), linear regression (LR), classification tree (CT), deep neural network (DNN), learning vector quantization (LVQ), Parzen window (PW), and principal component analysis (PCA). Benefiting from a thorough analysis of the computational primitives and locality properties of these ML techniques, the single-core ParaML performs up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2 while consuming only 596 mW, as estimated by ICC and PrimeTime PX, respectively, on the post-synthesis netlist. Compared with the NVIDIA K20M GPU (28-nm process), the single-core ParaML (65-nm process) is 1.21x faster and reduces energy by 137.93x. We also compare the single-core ParaML with other accelerators. Compared with PRINS, the single-core ParaML achieves 72.09x and 2.57x energy benefits for k-NN and k-means, respectively, and speeds up each k-NN query by 44.76x. Compared with EIE, the single-core ParaML achieves a 5.02x speedup and a 4.97x energy benefit with 11.62x less area when evaluated on a dense DNN. Compared with the TPU, the single-core ParaML achieves 2.45x better power efficiency (5647 GOP/W versus 2300 GOP/W) with 321.36x less area. Compared to the single-core version, the 8-core ParaML further improves the speedup by up to 3.98x, with an area of 13.44 mm^2 and a power of 2036 mW.
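The abstract credits ParaML's efficiency to an analysis showing that different ML techniques share a small set of computational primitives. As an illustrative sketch only (not from the paper; all function names are hypothetical), the fragment below shows how one such primitive, the multiply-accumulate (dot product), underlies LR inference, a linear SVM decision, and the distance computation used by k-NN and k-means:

```python
def dot(a, b):
    """Multiply-accumulate primitive: sum of element-wise products."""
    return sum(x * y for x, y in zip(a, b))

def linear_regression_predict(w, bias, x):
    """LR inference: a single dot product plus a bias term."""
    return dot(w, x) + bias

def linear_svm_decide(w, bias, x):
    """Linear SVM decision: the sign of the same dot-product primitive."""
    return 1 if dot(w, x) + bias >= 0 else -1

def squared_distance(a, b):
    """k-NN / k-means distance: a dot product of the difference vector."""
    d = [x - y for x, y in zip(a, b)]
    return dot(d, d)
```

For example, `linear_regression_predict([2.0, -1.0], 0.5, [1.0, 3.0])` evaluates 2.0*1.0 + (-1.0)*3.0 + 0.5 = -0.5. A hardware accelerator that implements the shared primitive once can serve all three techniques, which is the kind of reuse the paper's primitive analysis exploits.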
Pages: 1764-1777
Page count: 14