HGNAS: Hardware-Aware Graph Neural Architecture Search for Edge Devices

Cited by: 0
Authors
Zhou, Ao [1 ]
Yang, Jianlei [1 ]
Qi, Yingjie [1 ]
Qiao, Tong [1 ]
Shi, Yumeng [1 ]
Duan, Cenlin [2 ]
Zhao, Weisheng [2 ]
Hu, Chunming [1 ]
Affiliations
[1] Beihang University, School of Computer Science and Engineering, Beijing 100191, China
[2] Beihang University, School of Integrated Circuits and Engineering, Beijing 100191, China
DOI: 10.1109/TC.2024.3449108
Abstract
Graph Neural Networks (GNNs) are becoming increasingly popular for graph-based learning tasks such as point cloud processing due to their state-of-the-art (SOTA) performance. Nevertheless, the research community has primarily focused on improving model expressiveness, with little consideration of how to design efficient GNN models for edge scenarios with real-time requirements and limited resources. Examining existing GNN models reveals varied execution behavior across platforms and frequent Out-Of-Memory (OOM) problems, highlighting the need for hardware-aware GNN design. To address this challenge, this work proposes HGNAS, a novel hardware-aware graph neural architecture search framework tailored for resource-constrained edge devices. To achieve hardware awareness, HGNAS integrates an efficient GNN hardware performance predictor that evaluates the latency and peak memory usage of GNNs in milliseconds. Meanwhile, we study GNN memory usage during inference and propose a peak memory estimation method, which enhances the robustness of architecture evaluations when combined with the predictor's outputs. Furthermore, HGNAS constructs a fine-grained design space that enables the exploration of extreme-performance architectures by decoupling the GNN paradigm. In addition, a multi-stage hierarchical search strategy is leveraged to navigate the huge candidate space, reducing a single search to a few GPU hours. To the best of our knowledge, HGNAS is the first automated GNN design framework for edge devices, and also the first work to achieve hardware awareness of GNNs across different platforms. Extensive experiments across various applications and edge devices have proven the superiority of HGNAS. It can achieve up to a 10.6× speedup and an 82.5% peak memory reduction with negligible accuracy loss compared to DGCNN on ModelNet40. © 1968-2012 IEEE.
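The abstract outlines a predictor-guided, budget-constrained evaluation loop: a learned hardware predictor estimates latency and peak memory for each candidate GNN, an analytical peak-memory estimate adds robustness, and candidates that violate the device budget are rejected during search. The Python sketch below illustrates that flow in outline only; the operator cost tables, budgets, and function names are hypothetical stand-ins and do not reproduce HGNAS's actual predictor, design space, or search strategy.

```python
import random

# Hypothetical per-device budgets; real values depend on the target edge platform.
LATENCY_BUDGET_MS = 20.0
PEAK_MEM_BUDGET_MB = 512.0

# Toy per-operation cost tables standing in for a learned hardware predictor.
OP_LATENCY_MS = {"gcn": 3.0, "sage": 2.5, "skip": 0.1, "mlp": 1.0}
OP_MEMORY_MB = {"gcn": 90.0, "sage": 70.0, "skip": 5.0, "mlp": 40.0}


def predict_hardware_cost(arch):
    """Cheap surrogate for a learned predictor: architecture encoding -> (latency, peak memory)."""
    latency = sum(OP_LATENCY_MS[op] for op in arch)
    peak_mem = max(OP_MEMORY_MB[op] for op in arch) + 30.0 * len(arch)
    return latency, peak_mem


def estimate_peak_memory(arch, num_nodes=10_000, feat_dim=64, bytes_per_val=4):
    """Analytical upper bound on activation memory, used to cross-check the predictor."""
    per_layer_mb = num_nodes * feat_dim * bytes_per_val / 1e6
    return per_layer_mb * (len(arch) + 1)


def fitness(arch):
    """Search objective: accuracy proxy with hard rejection of budget violations."""
    latency, predicted_mem = predict_hardware_cost(arch)
    peak_mem = max(predicted_mem, estimate_peak_memory(arch))
    if latency > LATENCY_BUDGET_MS or peak_mem > PEAK_MEM_BUDGET_MB:
        return float("-inf")          # reject architectures that break device limits
    accuracy_proxy = random.random()  # placeholder for a supernet / proxy accuracy score
    return accuracy_proxy - 0.01 * latency


if __name__ == "__main__":
    random.seed(0)
    ops = list(OP_LATENCY_MS)
    candidates = [[random.choice(ops) for _ in range(4)] for _ in range(50)]
    best = max(candidates, key=fitness)
    print("best candidate:", best)
```

In the framework the abstract describes, the random sampling above would be replaced by the multi-stage hierarchical search over the fine-grained design space, and the accuracy proxy by the actual architecture evaluation.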
Pages: 2693 - 2707
Related papers
50 items in total
  • [21] Generating Neural Networks for Diverse Networking Classification Tasks via Hardware-Aware Neural Architecture Search
    Xie, Guorui
    Li, Qing
    Shi, Zhenning
    Fang, Hanbin
    Ji, Shengpeng
    Jiang, Yong
    Yuan, Zhenhui
    Ma, Lianbo
    Xu, Mingwei
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (02) : 481 - 494
  • [22] SARNas: A Hardware-Aware SAR Target Detection Algorithm via Multiobjective Neural Architecture Search
    Du, Wentian
    Chen, Jie
    Zhang, Chaochen
    Zhao, Po
    Wan, Huiyao
    Zhou, Zheng
    Cao, Yice
    Huang, Zhixiang
    Li, Yingsong
    Wu, Bocai
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [23] Hardware-aware Model Architecture for Ternary Spiking Neural Networks
    Wu, Nai-Chun
    Chen, Tsu-Hsiang
    Huang, Chih-Tsun
    2023 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI-TSA/VLSI-DAT, 2023
  • [24] HAO: Hardware-aware Neural Architecture Optimization for Efficient Inference
    Dong, Zhen
    Gao, Yizhao
    Huang, Qijing
    Wawrzynek, John
    So, Hayden K. H.
    Keutzer, Kurt
    2021 IEEE 29TH ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES (FCCM 2021), 2021, : 50 - 59
  • [25] Hardware-Aware design for edge intelligence
    Gross, W. J.
    Meyer, B. H.
    Ardakani, A.
    IEEE OPEN JOURNAL OF CIRCUITS AND SYSTEMS, 2021, 2 : 113 - 127
  • [26] HGNAS++: Efficient Architecture Search for Heterogeneous Graph Neural Networks
    Gao, Yang
    Zhang, Peng
    Zhou, Chuan
    Yang, Hong
    Li, Zhao
    Hu, Yue
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 9448 - 9461
  • [27] THNAS-GA: A Genetic Algorithm for Training-free Hardware-aware Neural Architecture Search
    Hai Tran Thanh
    Long Doan
    Ngoc Hoang Luong
    Huynh Thi Thanh Binh
    PROCEEDINGS OF THE 2024 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, GECCO 2024, 2024, : 1128 - 1136
  • [28] Block-Level Surrogate Models for Inference Time Estimation in Hardware-Aware Neural Architecture Search
    Stolle, Kurt
    Vogel, Sebastian
    van der Sommen, Fons
    Sanberg, Willem
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT V, 2023, 13717 : 463 - 479
  • [29] On Hardware-Aware Design and Optimization of Edge Intelligence
    Huai, Shuo
    Kong, Hao
    Luo, Xiangzhong
    Liu, Di
    Subramaniam, Ravi
    Makaya, Christian
    Lin, Qian
    Liu, Weichen
    IEEE DESIGN & TEST, 2023, 40 (06) : 149 - 162
  • [30] Hardware-aware Automated Architecture Search for Brain-inspired Hyperdimensional Computing
    Yang, Junhuan
    Yasa, Venkat Kalyan Reddy
    Sheng, Yi
    Reis, Dayane
    Jiao, Xun
    Jiang, Weiwen
    Yang, Lei
    2022 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2022), 2022, : 352 - 357