Design space exploration of neural network accelerator based on transfer learning

Cited by: 0
Authors
Wu Y. [1]
Zhi T. [2]
Song X. [2]
Li X. [1]
Affiliations
[1] School of Computer Science, University of Science and Technology of China, Hefei
[2] State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing
Funding
National Natural Science Foundation of China
Keywords
design space exploration (DSE); multi-task learning; neural network accelerator; transfer learning
DOI
10.3772/j.issn.1006-6748.2023.04.009
Abstract
With the increasing demand for computational power in artificial intelligence (AI) algorithms, dedicated accelerators have become a necessity. However, the complexity of hardware architectures, the vast design search space, and the complex tasks accelerators must handle pose significant challenges: traditional search methods become prohibitively slow as the search space expands. A design space exploration (DSE) method based on transfer learning is proposed, which reduces the time spent on repeated training and uses multi-task models for different tasks on the same processor. The proposed method accurately predicts the latency and energy consumption associated with neural network accelerator design parameters, identifying optimal designs faster than traditional methods, and it requires less training time than other DSE methods that use a multilayer perceptron (MLP). Comparative experiments demonstrate that the proposed method improves the efficiency of DSE without compromising the accuracy of the results. © 2023 Inst. of Scientific and Technical Information of China. All rights reserved.
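To make the predictor concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a multi-task latency/energy model with a transferable shared trunk. The class name MultiTaskDSEPredictor, the layer sizes, the eight-dimensional design-point encoding, and the freeze-the-trunk fine-tuning step are all illustrative assumptions about how such a predictor could be structured.

```python
# Hypothetical sketch of the multi-task predictor described in the abstract:
# a shared trunk learned on a source task is transferred, and per-metric
# heads predict latency and energy for a new task on the same processor.
import torch
import torch.nn as nn

class MultiTaskDSEPredictor(nn.Module):
    def __init__(self, n_design_params: int, hidden: int = 64):
        super().__init__()
        # Shared trunk: encodes accelerator design parameters
        # (e.g. PE-array size, buffer sizes, dataflow choice).
        self.trunk = nn.Sequential(
            nn.Linear(n_design_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific regression heads, one per predicted metric.
        self.latency_head = nn.Linear(hidden, 1)
        self.energy_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.trunk(x)
        return self.latency_head(z), self.energy_head(z)

# Transfer step (assumption): freeze the trunk pretrained on a source
# workload and fine-tune only the heads on a few samples of the new task.
model = MultiTaskDSEPredictor(n_design_params=8)
for p in model.trunk.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(
    list(model.latency_head.parameters()) + list(model.energy_head.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()
x = torch.randn(32, 8)                        # design-point features (dummy data)
lat_t, en_t = torch.rand(32, 1), torch.rand(32, 1)
lat_p, en_p = model(x)
loss = loss_fn(lat_p, lat_t) + loss_fn(en_p, en_t)
opt.zero_grad(); loss.backward(); opt.step()
```

Freezing the shared trunk and updating only the two regression heads is one plausible way transfer learning can cut the repeated-training cost when moving to a new task on the same processor, which is the saving the abstract describes.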
Pages: 416-426 (10 pages)