BNAS-v2: Memory-Efficient and Performance-Collapse-Prevented Broad Neural Architecture Search

Cited by: 13
Authors
Ding, Zixiang [1 ,2 ]
Chen, Yaran [1 ,2 ]
Li, Nannan [1 ,2 ]
Zhao, Dongbin [1 ,2 ]
Affiliations
[1] University of Chinese Academy of Sciences, School of Artificial Intelligence, Beijing 100049, People's Republic of China
[2] Chinese Academy of Sciences, Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Beijing 100190, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Computer architecture; Training; Graphics processing units; Optimization; Convolution; Topology; Task analysis; Broad neural architecture search (BNAS); Confident learning rate (CLR); Continuous relaxation; Image classification; Partial channel connections (PC); Learning system
DOI
10.1109/TSMC.2022.3143201
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In this article, we propose BNAS-v2 to further improve the efficiency of broad neural architecture search (BNAS), which employs a broad convolutional neural network (BCNN) as the search space. In BNAS, the single-path sampling-updating strategy over an overparameterized BCNN causes a severe unfair-training issue that limits further efficiency gains. To mitigate this issue, we employ a continuous relaxation strategy to optimize all paths of the overparameterized BCNN simultaneously. However, continuous relaxation introduces a performance-collapse issue that degrades the performance of the learned BCNN. To address this, we propose the confident learning rate (CLR) and introduce the combination of partial channel connections and edge normalization. Experimental results show that: 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4x faster than BNAS) and ImageNet (0.19 GPU days) with better or competitive performance; 2) the above two solutions effectively alleviate the performance-collapse issue; and 3) BNAS-v2 achieves strong generalization ability on multiple transfer tasks, e.g., MNIST, FashionMNIST, NORB, and SVHN. The code is available at https://github.com/zixiangding/BNASv2.
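The abstract names three mechanisms but gives no formulas: continuous relaxation of the overparameterized BCNN, partial channel connections (PC) with edge normalization, and the confident learning rate (CLR). Below is a minimal PyTorch sketch of how a continuously relaxed mixed operation with PC is commonly realized (in the DARTS/PC-DARTS style); the class name, the candidate operation set, and the confident_lr ramp are illustrative assumptions, not BNAS-v2's exact definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialMixedOp(nn.Module):
    """Sketch of a continuously relaxed mixed operation with partial
    channel connections (PC): only channels // k channels pass through
    the candidate operations; the rest bypass untouched. The candidate
    set below is illustrative, not BNAS-v2's actual search space."""

    def __init__(self, channels: int, k: int = 4):
        super().__init__()
        self.k = k
        c_part = channels // k
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_part, c_part, 3, padding=1, bias=False),
                          nn.BatchNorm2d(c_part)),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # Architecture parameters (alpha): continuous relaxation replaces the
        # discrete choice of one operation with a softmax-weighted sum, so all
        # paths receive gradients simultaneously (mitigating unfair training).
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c_part = x.size(1) // self.k
        x_sel, x_skip = x[:, :c_part], x[:, c_part:]
        w = F.softmax(self.alpha, dim=0)
        mixed = sum(wi * op(x_sel) for wi, op in zip(w, self.ops))
        # PC: concatenate processed and bypassed channels. PC-DARTS also
        # shuffles channels and normalizes over input edges (edge
        # normalization); both are omitted here for brevity.
        return torch.cat([mixed, x_skip], dim=1)


def confident_lr(base_lr: float, epoch: int, total_epochs: int) -> float:
    """Hypothetical stand-in for the confident learning rate (CLR): keep
    architecture updates small early, when supernet weights are still
    unreliable, and grow them as confidence rises. The paper defines its
    own schedule; this linear ramp is only an illustration."""
    return base_lr * (epoch + 1) / total_epochs


if __name__ == "__main__":
    x = torch.randn(2, 16, 8, 8)
    op = PartialMixedOp(channels=16, k=4)
    print(op(x).shape)  # torch.Size([2, 16, 8, 8])
```

In a typical differentiable-NAS search loop, the network weights would be updated on training data while the alpha parameters are updated on validation data at a rate given by confident_lr, deferring architecture decisions until the supernet is trustworthy.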
Pages: 6259-6272 (14 pages)
Related Papers (13 in total)
  • [1] BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture
    Ding, Zixiang
    Chen, Yaran
    Li, Nannan
    Zhao, Dongbin
    Sun, Zhiquan
    Chen, C. L. Philip
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 5004 - 5018
  • [2] AutoDerain: Memory-Efficient Neural Architecture Search for Image Deraining
    Fu, Jun
    Hou, Chen
    Chen, Zhibo
    [J]. 2021 INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2021
  • [3] PertNAS: Architectural Perturbations for Memory-Efficient Neural Architecture Search
    Ahmad, Afzal
    Xie, Zhiyao
    Zhang, Wei
    [J]. 2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC, 2023
  • [4] Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration
    Zhang, Haokui
    Li, Ying
    Chen, Hao
    Gong, Chengrong
    Bai, Zongwen
    Shen, Chunhua
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (01) : 157 - 178
  • [5] Memory-Efficient Hierarchical Neural Architecture Search for Image Denoising
    Zhang, Haokui
    Li, Ying
    Chen, Hao
    Shen, Chunhua
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3654 - 3663
  • [6] Memory-Efficient Models for Scene Text Recognition via Neural Architecture Search
    Hong, SeulGi
    Kim, DongHyun
    Choi, Min-Kook
    [J]. 2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2020, : 183 - 191
  • [7] MemNAS: Memory-Efficient Neural Architecture Search with Grow-Trim Learning
    Liu, Peiye
    Wu, Bo
    Ma, Huadong
    Seok, Mingoo
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2105 - 2113
  • [8] MEMA-NAS: Memory-Efficient Multi-Agent Neural Architecture Search
    Kong, Qi
    Xu, Xin
    Zhang, Liangliang
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022 : 176 - 187
  • [9] M3NAS: Multi-Scale and Multi-Level Memory-Efficient Neural Architecture Search for Low-Dose CT Denoising
    Lu, Zexin
    Xia, Wenjun
    Huang, Yongqiang
    Hou, Mingzheng
    Chen, Hu
    Zhou, Jiliu
    Shan, Hongming
    Zhang, Yi
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (03) : 850 - 863