DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning

Cited by: 15
Authors
Zheng, Xiawu [1 ,2 ]
Yang, Chenyi [1 ]
Zhang, Shaokun [1 ]
Wang, Yan [5 ]
Zhang, Baochang [6 ]
Wu, Yongjian [7 ]
Wu, Yunsheng [7 ]
Shao, Ling [8 ]
Ji, Rongrong [1 ,2 ,3 ,4 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Dept Artificial Intelligence, Media Analyt & Comp Lab, Xiamen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Xiamen Univ, Inst Artificial Intelligence, Xiamen 361005, Peoples R China
[4] Xiamen Univ, Fujian Engn Res Ctr Trusted Artificial Intelligenc, Xiamen 361005, Peoples R China
[5] Samsara, Seattle, WA USA
[6] Beihang Univ, Beijing, Peoples R China
[7] Tencent Co Ltd, BestImage Lab, Shanghai 200233, Peoples R China
[8] Terminus Grp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Neural architecture search; Dynamic distribution pruning; Efficient network generation;
DOI
10.1007/s11263-023-01753-6
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural Architecture Search (NAS) has demonstrated state-of-the-art performance on various computer vision tasks. Despite this superior performance, existing methods remain limited by high computational complexity and low generality. In this paper, we propose an efficient and unified NAS framework, termed DDPNAS, based on dynamic distribution pruning, which admits a theoretical bound on accuracy and efficiency. In particular, we first sample architectures from a joint categorical distribution. The search space is then dynamically pruned and its distribution updated every few epochs. With the proposed efficient network generation method, we directly obtain the optimal neural architecture under given constraints, which is practical for on-device models across diverse search spaces and constraints. The architectures searched by our method achieve remarkable top-1 accuracies of 97.56% on CIFAR-10 and 77.2% on ImageNet (mobile setting), with the fastest search process, i.e., only 1.8 GPU hours on a Tesla V100. Code for search and network generation is available at: https://openi.pcl.ac.cn/PCL_AutoML/XNAS.
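The sample-prune-update loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the operation names, the score-proportional update rule, and the per-operation running-mean evaluation below are all illustrative assumptions; the sketch only shows the general idea of maintaining a categorical distribution over one architectural choice, sampling from it, and periodically pruning the worst candidate.

```python
import random

def ddp_search(candidate_ops, evaluate, epochs=30, samples_per_epoch=8, prune_every=5):
    """Toy dynamic-distribution-pruning loop over one categorical choice.

    candidate_ops: list of operation names (the search space for one decision).
    evaluate: callable op -> validation score, higher is better (a stub here).
    """
    # Start from a uniform categorical distribution over the search space.
    probs = {op: 1.0 / len(candidate_ops) for op in candidate_ops}
    scores = {op: 0.0 for op in candidate_ops}
    counts = {op: 0 for op in candidate_ops}

    for epoch in range(1, epochs + 1):
        ops, weights = zip(*probs.items())
        for _ in range(samples_per_epoch):
            # Sample an architecture choice from the current distribution.
            op = random.choices(ops, weights=weights)[0]
            counts[op] += 1
            # Running mean of observed validation scores per operation.
            scores[op] += (evaluate(op) - scores[op]) / counts[op]

        if epoch % prune_every == 0 and len(probs) > 1:
            # Dynamically prune: drop the worst-scoring op still in the space,
            # then re-normalize the distribution toward better-scoring ops.
            worst = min(probs, key=lambda o: scores[o])
            del probs[worst]
            total = sum(scores[o] + 1e-8 for o in probs)
            probs = {o: (scores[o] + 1e-8) / total for o in probs}

    # The last surviving / highest-probability op is the search result.
    return max(probs, key=probs.get)
```

In the paper this idea is applied jointly over all architectural decisions (a joint categorical distribution), with pruning every few epochs shrinking the space until a single architecture remains, which is what yields the reported search-cost savings.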
Pages: 1234-1249
Page count: 16