FLASH: Fast Neural Architecture Search with Hardware Optimization

Cited by: 9
Authors
Li, Guihong [1 ]
Mandal, Sumit K. [2 ]
Ogras, Umit Y. [2 ]
Marculescu, Radu [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] Univ Wisconsin, Madison, WI USA
Funding
National Science Foundation (NSF)
Keywords
Neural networks; network science; hardware optimization; neural architecture search; model-architecture co-design; resource-constrained devices;
DOI
10.1145/3476994
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs). As the performance requirements of ML applications continue to grow, hardware accelerators are playing a central role in DNN design. This trend makes NAS even more complicated and time-consuming for most real applications. This paper proposes FLASH, a very fast NAS methodology that co-optimizes DNN accuracy and performance on a real hardware platform. As the main theoretical contribution, we first propose the NN-Degree, an analytical metric to quantify the topological characteristics of DNNs with skip connections (e.g., DenseNets, ResNets, Wide-ResNets, and MobileNets). The newly proposed NN-Degree allows us to perform training-free NAS within one second and to build an accuracy predictor by training as few as 25 samples out of a vast search space with more than 63 billion configurations. Second, by performing inference on the target hardware, we fine-tune and validate our analytical models to estimate the latency, area, and energy consumption of various DNN architectures while executing standard ML datasets. Third, we construct a hierarchical algorithm based on simplicial homology global optimization (SHGO) to optimize the model-architecture co-design process, while considering the area, latency, and energy consumption of the target hardware. We demonstrate that, compared to state-of-the-art NAS approaches, our proposed hierarchical SHGO-based algorithm enables more than four orders of magnitude speedup (specifically, the execution time of the proposed algorithm is about 0.1 seconds). Finally, our experimental evaluations show that FLASH is easily transferable to different hardware architectures, thus enabling us to run NAS on a Raspberry Pi-3B processor in less than 3 seconds.
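To illustrate the kind of SHGO-based co-optimization the abstract describes, the sketch below uses SciPy's `scipy.optimize.shgo` on a toy objective. Note that the surrogate predictors (`predicted_accuracy`, `predicted_latency_ms`) and the two-dimensional (width, depth) search space are hypothetical stand-ins for the paper's fitted analytical models (accuracy via NN-Degree, latency/energy from on-device measurements), not the authors' implementation:

```python
from scipy.optimize import shgo

# Hypothetical surrogates: cheap analytical predictors standing in for
# FLASH's fitted models. x = (width, depth) is a toy 2-D slice of the
# real architecture search space.
def predicted_accuracy(x):
    width, depth = x
    return 0.9 - 0.5 / (width * depth)   # wider/deeper nets score higher

def predicted_latency_ms(x):
    width, depth = x
    return width * depth                  # wider/deeper nets run slower

def objective(x):
    # Co-design objective: maximize accuracy minus a latency penalty.
    # SHGO minimizes, hence the leading minus sign.
    return -(predicted_accuracy(x) - 0.001 * predicted_latency_ms(x))

# SHGO samples the (width, depth) box globally, then refines each
# candidate basin with a local minimizer.
result = shgo(objective, bounds=[(1.0, 64.0), (1.0, 32.0)])
best_width, best_depth = result.x
print(f"best config: width={best_width:.1f}, depth={best_depth:.1f}, "
      f"score={-result.fun:.3f}")
```

Because every term in the objective is an inexpensive analytical model rather than a trained network, each evaluation costs microseconds, which is what makes sub-second search times of the sort reported in the paper plausible.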
Pages: 26
Related Papers
50 in total
  • [1] Fast Hardware-Aware Neural Architecture Search
    Zhang, Li Lyna
    Yang, Yuqing
    Jiang, Yuhang
    Zhu, Wenwu
    Liu, Yunxin
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 2959 - 2967
  • [2] Neural Architecture Search Survey: A Hardware Perspective
    Chitty-Venkata, Krishna Teja
    Somani, Arun K.
    [J]. ACM COMPUTING SURVEYS, 2023, 55 (04)
  • [3] Fast and Practical Neural Architecture Search
    Cui, Jiequan
    Chen, Pengguang
    Li, Ruiyu
    Liu, Shu
    Shen, Xiaoyong
    Jia, Jiaya
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6211 - 6220
  • [4] S3NAS: Fast Hardware-Aware Neural Architecture Search Methodology
    Lee, Jaeseong
    Rhim, Jungsub
    Kang, Duseok
    Ha, Soonhoi
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (11) : 4826 - 4836
  • [5] Neural Architecture Search and Hardware Accelerator Co-Search: A Survey
    Sekanina, Lukas
    [J]. IEEE ACCESS, 2021, 9 : 151337 - 151362
  • [6] A Fast Compressed Hardware Architecture for Deep Neural Networks
    Ansari, Anaam
    Shelton, Allen
    Ogunfunmi, Tokunbo
    Panchbhaiyye, Vineet
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 370 - 374
  • [7] An Optimization Technique for General Neural Network Hardware Architecture
    Ravichandran, Jayalakshmi
    Nair, Radeep Krishna Radhakrishnan
    [J]. 2020 5TH INTERNATIONAL CONFERENCE ON DEVICES, CIRCUITS AND SYSTEMS (ICDCS' 20), 2020, : 252 - 255
  • [8] Neural architecture search for resource constrained hardware devices: A survey
    Yang, Yongjia
    Zhan, Jinyu
    Jiang, Wei
    Jiang, Yucheng
    Yu, Antai
    [J]. IET CYBER-PHYSICAL SYSTEMS: THEORY & APPLICATIONS, 2023, 8 (03) : 149 - 159
  • [9] Hardware-Aware Neural Architecture Search: Survey and Taxonomy
    Benmeziane, Hadjer
    El Maghraoui, Kaoutar
    Ouarnoughi, Hamza
    Niar, Smail
    Wistuba, Martin
    Wang, Naigang
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 4322 - 4329
  • [10] Evolution of Hardware-Aware Neural Architecture Search on the Edge
    Richey, Blake
    Clay, Mitchell
    Grecos, Christos
    Shirvaikar, Mukul
    [J]. REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2023, 2023, 12528