NASA+: Neural Architecture Search and Acceleration for Multiplication-Reduced Hybrid Networks

Citations: 3
Authors
Shi, Huihong [1 ,2 ]
You, Haoran [3 ]
Wang, Zhongfeng [4 ]
Lin, Yingyan [3 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210093, Peoples R China
[3] Georgia Inst Technol, Sch Comp Sci, Atlanta, GA 30332 USA
[4] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210093, Peoples R China
Keywords
Multiplication-reduced hybrid networks; neural architecture search; chunk-based accelerator; reconfigurable PE; algorithm-hardware co-design
DOI
10.1109/TCSI.2023.3256700
CLC Numbers
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Multiplication is arguably the most computation-intensive operation in modern deep neural networks (DNNs), limiting their extensive deployment on resource-constrained devices. Thereby, pioneering works have handcrafted multiplication-free DNNs, which are hardware-efficient but generally inferior to their multiplication-based counterparts in task accuracy, calling for multiplication-reduced hybrid DNNs to marry the best of both worlds. To this end, we propose a Neural Architecture Search and Acceleration (NASA) framework for the above hybrid models, dubbed NASA+, to boost both task accuracy and hardware efficiency. Specifically, NASA+ augments the state-of-the-art (SOTA) search space with multiplication-free operators to construct hybrid ones, and then adopts a novel progressive pretraining strategy to enable the effective search. Furthermore, NASA+ develops a chunk-based accelerator with novel reconfigurable processing elements to better support searched hybrid models, and integrates an auto-mapper to search for optimal dataflows. Experimental results and ablation studies consistently validate the effectiveness of our NASA+ algorithm-hardware co-design framework, e.g., we can achieve up to 65.1% lower energy-delay-product with comparable accuracy over the SOTA multiplication-based system on CIFAR100. Codes are available at https://github.com/GATECH-EIC/NASA.
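The abstract does not spell out the multiplication-free operators used to augment the search space. In prior multiplication-free DNN work (e.g., AdderNet-style adder layers), the convolutional dot product is replaced by a negative L1 distance, which needs only additions and subtractions. A minimal sketch under that assumption, contrasting the two operators (function names are illustrative, not from the paper's code):

```python
import numpy as np

def adder_op(x, w):
    # Multiplication-free operator: similarity between an input patch x and
    # a filter w measured as the negative L1 distance, computed with
    # additions/subtractions only (larger = more similar, max at 0).
    return -np.abs(x - w).sum()

def mul_op(x, w):
    # Conventional multiplication-based dot product, for contrast.
    return (x * w).sum()

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 2.0, 4.0])
print(adder_op(x, w))  # -2.0
print(mul_op(x, w))    # 16.0
```

A hybrid network in this sense mixes both operator types layer by layer, which is why the paper pairs the search with reconfigurable processing elements that can serve either operator.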
Pages: 2523-2536 (14 pages)