Generating Neural Networks for Diverse Networking Classification Tasks via Hardware-Aware Neural Architecture Search

Cited by: 1
Authors
Xie, Guorui [1 ,2 ]
Li, Qing [2 ]
Shi, Zhenning [1 ]
Fang, Hanbin [3 ]
Ji, Shengpeng [4 ]
Jiang, Yong [1 ,2 ]
Yuan, Zhenhui [5 ]
Ma, Lianbo [6 ]
Xu, Mingwei [7 ,8 ]
Affiliations
[1] Tsinghua Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab PCL, Shenzhen 518066, Peoples R China
[3] Jilin Univ, Changchun 135124, Peoples R China
[4] Zhejiang Univ, Hangzhou 310058, Peoples R China
[5] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, England
[6] Northeastern Univ, Coll Software, Shenyang 110169, Peoples R China
[7] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[8] Quan Cheng Lab, Jinan 250103, Peoples R China
Keywords
Neural network; automated design; traffic classification; attack detection;
DOI
10.1109/TC.2023.3333253
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline Code: 0812
Abstract
Neural networks (NNs) are widely used in classification-based networking analysis to support traffic transmission and system security. However, a network contains heterogeneous devices (e.g., switches and routers), and manually customizing NNs to meet device-specific requirements (e.g., a maximum allowed running latency) is time-consuming and labor-intensive. Furthermore, the diverse data characteristics of different networking classification tasks add to the burden of NN customization. This paper introduces Loong, a neural architecture search (NAS) based system that automatically generates NNs for various networking tasks and devices. Loong includes a neural operation embedding module, which embeds candidate neural operations into the layer to be designed. Layer-wise training then generates a task-specific NN layer by layer: this scheme simultaneously trains and selects candidate neural operations using gradient feedback, and only the important operations are retained to form each layer, maximizing accuracy. By incorporating multiple objectives, including the deployment memory and running latency of devices, into the training and selection of NNs, Loong customizes NNs for heterogeneous network devices. Experiments show that Loong's NNs outperform 13 manually designed and NAS-based NNs, with a 4.11% improvement in F1-score, and run 7.92X faster on commodity devices.
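To make the abstract's search scheme concrete, below is a minimal sketch (not the authors' code) of layer-wise, multi-objective operation selection: candidate operations are embedded into one searchable layer, trained jointly, scored via gradient feedback on softmax architecture weights, and penalized by a per-operation device-cost proxy (standing in for memory/latency). All class names, the toy candidates, and the cost constants are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedLayer(nn.Module):
    """One searchable layer holding several candidate neural operations."""
    def __init__(self, candidates, op_costs, cost_weight=0.1):
        super().__init__()
        self.ops = nn.ModuleList(candidates)                       # candidate operations
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))    # architecture weights
        self.register_buffer("op_costs", torch.tensor(op_costs))   # assumed device-cost proxy per op
        self.cost_weight = cost_weight

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)                           # soft selection over candidates
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def expected_cost(self):
        # Differentiable device-cost term folded into the training objective.
        return self.cost_weight * (F.softmax(self.alpha, dim=0) * self.op_costs).sum()

    def select(self):
        # Keep only the highest-weighted operation once this layer is trained.
        return self.ops[int(self.alpha.argmax())]

# Toy usage: search one layer for a 2-class traffic-classification task.
dim, n_classes = 16, 2
layer = MixedLayer(
    candidates=[nn.Linear(dim, dim), nn.Sequential(nn.Linear(dim, dim), nn.ReLU())],
    op_costs=[1.0, 1.8],                                           # assumed relative device costs
)
head = nn.Linear(dim, n_classes)
opt = torch.optim.Adam(list(layer.parameters()) + list(head.parameters()), lr=1e-2)

x, y = torch.randn(64, dim), torch.randint(0, n_classes, (64,))
for _ in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(head(layer(x)), y) + layer.expected_cost()
    loss.backward()                                                # gradient feedback updates alpha and op weights
    opt.step()

chosen_op = layer.select()                                         # operation retained for this layer

In a layer-by-layer pipeline this selection would be repeated for each new layer while earlier layers are frozen, with the cost weight tuned per target device; those details are omitted here.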
Pages: 481-494 (14 pages)
相关论文
共 50 条
  • [1] Fast Hardware-Aware Neural Architecture Search
    Zhang, Li Lyna
    Yang, Yuqing
    Jiang, Yuhang
    Zhu, Wenwu
    Liu, Yunxin
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 2959 - 2967
  • [2] Hardware-Aware Neural Architecture Search: Survey and Taxonomy
    Benmeziane, Hadjer
    El Maghraoui, Kaoutar
    Ouarnoughi, Hamza
    Niar, Smail
    Wistuba, Martin
    Wang, Naigang
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 4322 - 4329
  • [3] Evolution of Hardware-Aware Neural Architecture Search on the Edge
    Richey, Blake
    Clay, Mitchell
    Grecos, Christos
    Shirvaikar, Mukul
    [J]. REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2023, 2023, 12528
  • [4] Designing Efficient DNNs via Hardware-Aware Neural Architecture Search and Beyond
    Luo, Xiangzhong
    Liu, Di
    Huai, Shuo
    Kong, Hao
    Chen, Hui
    Liu, Weichen
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (06) : 1799 - 1812
  • [5] Hardware-Aware Zero-Shot Neural Architecture Search
    Yoshihama, Yutaka
    Yadani, Kenichi
    Isobe, Shota
    [J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [6] Hardware-aware neural architecture search for stochastic computing-based neural networks on tiny devices
    Song, Yuhong
    Sha, Edwin Hsing-Mean
    Zhuge, Qingfeng
    Xu, Rui
    Xu, Xiaowei
    Li, Bingzhe
    Yang, Lei
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2023, 135
  • [7] FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
    Wu, Bichen
    Dai, Xiaoliang
    Zhang, Peizhao
    Wang, Yanghan
    Sun, Fei
    Wu, Yiming
    Tian, Yuandong
    Vajda, Peter
    Jia, Yangqing
    Keutzer, Kurt
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10726 - 10734
  • [8] Hardware-aware Model Architecture for Ternary Spiking Neural Networks
    Wu, Nai-Chun
    Chen, Tsu-Hsiang
    Huang, Chih-Tsun
    [J]. 2023 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI-TSA/VLSI-DAT, 2023,
  • [9] Hardware-Aware Multi-Objective Neural Architecture Search Approach
    Xu, Ke
    Meng, Yuan
    Yang, Shang-Shang
    Tian, Ye
    Zhang, Xing-Yi
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (12): : 2652 - 2669
  • [10] Pareto Rank Surrogate Model for Hardware-aware Neural Architecture Search
    Benmeziane, Hadjer
    Niar, Smail
    Ouarnoughi, Hamza
    El Maghraoui, Kaoutar
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS 2022), 2022, : 267 - 276