DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning

Cited by: 15
Authors
Zheng, Xiawu [1 ,2 ]
Yang, Chenyi [1 ]
Zhang, Shaokun [1 ]
Wang, Yan [5 ]
Zhang, Baochang [6 ]
Wu, Yongjian [7 ]
Wu, Yunsheng [7 ]
Shao, Ling [8 ]
Ji, Rongrong [1 ,2 ,3 ,4 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Dept Artificial Intelligence, Media Analyt & Comp Lab, Xiamen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Xiamen Univ, Inst Artificial Intelligence, Xiamen 361005, Peoples R China
[4] Xiamen Univ, Fujian Engn Res Ctr Trusted Artificial Intelligence, Xiamen 361005, Peoples R China
[5] Samsara, Seattle, WA USA
[6] Beihang Univ, Beijing, Peoples R China
[7] Tencent Co Ltd, BestImage Lab, Shanghai 200233, Peoples R China
[8] Terminus Grp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Neural architecture search; Dynamic distribution pruning; Efficient network generation;
DOI
10.1007/s11263-023-01753-6
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural Architecture Search (NAS) has demonstrated state-of-the-art performance on various computer vision tasks. Despite the superior performance achieved, existing methods remain limited by high computational complexity and low generality. In this paper, we propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning, facilitating a theoretical bound on accuracy and efficiency. In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs. With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints, which is practical for on-device models across diverse search spaces and constraints. The architectures searched by our method achieve remarkable top-1 accuracies of 97.56% and 77.2% on CIFAR-10 and ImageNet (mobile settings), respectively, with the fastest search process, i.e., only 1.8 GPU hours on a Tesla V100. Code for searching and network generation is available at: https://openi.pcl.ac.cn/PCL_AutoML/XNAS.
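To make the search procedure above concrete, the following is a minimal, self-contained Python sketch of the dynamic-distribution-pruning loop the abstract describes: architectures are sampled from per-edge categorical distributions, the distributions are updated from observed rewards, and the lowest-probability candidate on each edge is pruned every few epochs. The candidate operation set, the multiplicative update rule, and the evaluate() stub are illustrative assumptions, not the authors' exact implementation.

# Illustrative sketch of dynamic distribution pruning (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

CANDIDATE_OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "skip_connect", "max_pool"]
NUM_EDGES = 4                      # hypothetical number of decisions per cell
EPOCHS, SAMPLES_PER_EPOCH, PRUNE_EVERY = 12, 8, 3

# One categorical distribution per edge; jointly they form the joint
# architecture distribution that architectures are sampled from.
probs = [np.full(len(CANDIDATE_OPS), 1.0 / len(CANDIDATE_OPS)) for _ in range(NUM_EDGES)]
alive = [list(range(len(CANDIDATE_OPS))) for _ in range(NUM_EDGES)]

def evaluate(arch):
    # Placeholder: in the real method this would be the (proxy) validation
    # accuracy of the sampled sub-network after a few training steps.
    return rng.random()

for epoch in range(EPOCHS):
    rewards = [np.zeros(len(CANDIDATE_OPS)) for _ in range(NUM_EDGES)]
    counts = [np.zeros(len(CANDIDATE_OPS)) for _ in range(NUM_EDGES)]

    # Sample architectures from the joint categorical distribution.
    for _ in range(SAMPLES_PER_EPOCH):
        arch = []
        for e in range(NUM_EDGES):
            p = probs[e][alive[e]]
            arch.append(int(rng.choice(alive[e], p=p / p.sum())))
        acc = evaluate(arch)
        for e, op in enumerate(arch):
            rewards[e][op] += acc
            counts[e][op] += 1

    # Shift each edge's distribution toward operations with higher observed
    # reward (an assumed multiplicative update, for illustration only).
    for e in range(NUM_EDGES):
        mean_r = np.where(counts[e] > 0, rewards[e] / np.maximum(counts[e], 1), 0.0)
        new_p = probs[e] * np.exp(mean_r)
        mask = np.zeros_like(new_p)
        mask[alive[e]] = 1.0
        new_p *= mask
        probs[e] = new_p / new_p.sum()

    # Every few epochs, prune the weakest candidate on each edge,
    # dynamically shrinking the search space.
    if (epoch + 1) % PRUNE_EVERY == 0:
        for e in range(NUM_EDGES):
            if len(alive[e]) > 1:
                alive[e].remove(min(alive[e], key=lambda i: probs[e][i]))

best = [CANDIDATE_OPS[max(alive[e], key=lambda i: probs[e][i])] for e in range(NUM_EDGES)]
print("searched architecture:", best)

In the real method, evaluate() would train the sampled sub-network briefly and return its validation accuracy; the sketch substitutes a random stand-in so the loop runs end to end and the pruning schedule can be inspected.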
Pages: 1234-1249
Page count: 16
Related Papers
50 records in total
  • [1] DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning
    Zheng, Xiawu
    Yang, Chenyi
    Zhang, Shaokun
    Wang, Yan
    Zhang, Baochang
    Wu, Yongjian
    Wu, Yunsheng
    Shao, Ling
    Ji, Rongrong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 : 1234 - 1249
  • [2] PSP: Progressive Space Pruning for Efficient Graph Neural Architecture Search
    Zhu, Guanghui
    Wang, Wenjie
    Xu, Zhuoer
    Cheng, Feng
    Qiu, Mengchuan
    Yuan, Chunfeng
    Huang, Yihua
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 2168 - 2181
  • [3] NAP: Neural architecture search with pruning
    Ding, Yadong
    Wu, Yu
    Huang, Chengyue
    Tang, Siliang
    Wu, Fei
    Yang, Yi
    Zhu, Wenwu
    Zhuang, Yueting
    NEUROCOMPUTING, 2022, 477 : 85 - 95
  • [4] Efficient Architecture Search via Bi-Level Data Pruning
    Tu, Chongjun
    Ye, Peng
    Lin, Weihao
    Ye, Hancheng
    Yu, Chong
    Chen, Tao
    Li, Baopu
    Ouyang, Wanli
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (02) : 1265 - 1275
  • [5] Efficient Neural Architecture Search via Proximal Iterations
    Yao, Quanming
    Xu, Ju
    Tu, Wei-Wei
    Zhu, Zhanxing
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 6664 - 6671
  • [6] Efficient Neural Architecture Search via Parameter Sharing
    Pham, Hieu
    Guan, Melody Y.
    Zoph, Barret
Le, Quoc V.
    Dean, Jeff
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [7] Subarchitecture Ensemble Pruning in Neural Architecture Search
    Bian, Yijun
    Song, Qingquan
    Du, Mengnan
    Yao, Jun
    Chen, Huanhuan
    Hu, Xia
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (12) : 7928 - 7936
  • [8] Efficient Channel Pruning via Architecture-Guided Search Space Shrinking
    Yang, Zhi
    Li, Zheyang
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 540 - 551
  • [9] Efficient spiking neural network design via neural architecture search
    Yan, Jiaqi
    Liu, Qianhui
    Zhang, Malu
    Feng, Lang
    Ma, De
    Li, Haizhou
    Pan, Gang
    NEURAL NETWORKS, 2024, 173
  • [10] Network Pruning via Transformable Architecture Search
    Dong, Xuanyi
    Yang, Yi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32