Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration

Cited by: 8
Authors
Zhang, Haokui [1 ,2 ]
Li, Ying [1 ]
Chen, Hao [4 ]
Gong, Chengrong [1 ]
Bai, Zongwen [3 ]
Shen, Chunhua [4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian, Peoples R China
[2] Intellifus, Shenzhen, Peoples R China
[3] Yanan Univ, Shaanxi Key Lab Intelligent Proc Big Energy Data, Yanan, Peoples R China
[4] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural architecture search; Hierarchical search space; Image denoising; Super-resolution; SUPERRESOLUTION;
DOI
10.1007/s11263-021-01537-w
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, much attention has been devoted to neural architecture search (NAS), which aims to outperform manually designed neural architectures on high-level vision recognition tasks. Inspired by this success, here we attempt to leverage NAS techniques to automatically design efficient network architectures for low-level image restoration tasks. In particular, we propose a memory-efficient hierarchical NAS (termed HiNAS) and apply it to two such tasks: image denoising and image super-resolution. HiNAS adopts gradient-based search strategies and builds a flexible hierarchical search space comprising an inner search space and an outer search space, which are in charge of designing cell architectures and deciding cell widths, respectively. For the inner search space, we propose a layer-wise architecture sharing strategy, resulting in more flexible architectures and better performance. For the outer search space, we design a cell-sharing strategy to save memory and considerably accelerate the search. The proposed HiNAS method is both memory and computation efficient. With a single GTX 1080 Ti GPU, it takes only about 1 h to search for the denoising network on the BSD-500 dataset and 3.5 h to search for the super-resolution structure on the DIV2K dataset. Experiments show that the architectures found by HiNAS have fewer parameters and enjoy a faster inference speed, while achieving highly competitive performance compared with state-of-the-art methods. Code is available at: https://github.com/hkzhang91/HiNAS
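The gradient-based search the abstract describes relaxes the discrete choice of a cell operation into a softmax-weighted mixture of all candidate operations, so the architecture parameters can be optimized by gradient descent (the idea popularized by DARTS). The sketch below illustrates only that relaxation step in NumPy; the operation names, the `mixed_op` helper, and the parameter layout are illustrative assumptions, not HiNAS's actual code or API.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate operations for one edge of a cell (illustrative stand-ins
# for the conv / dilated-conv / skip ops in a real NAS search space).
OPS = [
    lambda x: x,           # identity (skip connection)
    lambda x: np.tanh(x),  # stand-in for a conv + nonlinearity
    lambda x: 0.5 * x,     # stand-in for a pooling-like op
]

def mixed_op(x, alpha):
    """Continuous relaxation: a weighted sum of every candidate op,
    with weights given by a softmax over architecture params alpha."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

# One alpha vector per edge; HiNAS's layer-wise sharing would mean
# cells at the same layer reuse the same alphas (inner search space).
alpha = np.zeros(len(OPS))   # learned jointly with the network weights
x = np.array([1.0, -2.0, 3.0])
y = mixed_op(x, alpha)       # with uniform alphas: average of all ops

# After search, the relaxation is discretized by keeping the op
# whose architecture weight is largest.
best = int(np.argmax(softmax(alpha)))
```

Because `alpha` enters only through a differentiable softmax, its gradient can be computed with ordinary backpropagation, which is what makes this family of methods fast enough to finish a search in hours on a single GPU.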
Pages: 157-178
Page count: 22
Related papers
50 records total
  • [21] SparseHC: a memory-efficient online hierarchical clustering algorithm
    Thuy-Diem Nguyen
    Schmidt, Bertil
    Kwoh, Chee-Keong
    2014 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE, 2014, 29 : 8 - 19
  • [22] Hierarchical Neural Architecture Search for Single Image Super-Resolution
    Guo, Yong
    Luo, Yongsheng
    He, Zhenhao
    Huang, Jin
    Chen, Jian
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 1255 - 1259
  • [23] A fast and memory-efficient hierarchical graph clustering algorithm
    Szilágyi, László
    Szilágyi, Sándor Miklós
    Hirsbrunner, Béat
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2014, 8834 : 247 - 254
  • [24] Memory-Efficient Reversible Spiking Neural Networks
    Zhang, Hong
    Zhang, Yu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16759 - 16767
  • [25] Fast and Memory-Efficient Neural Code Completion
    Svyatkovskiy, Alexey
    Lee, Sebastian
    Hadjitofi, Anna
    Riechert, Maik
    Franco, Juliana Vicente
    Allamanis, Miltiadis
    2021 IEEE/ACM 18TH INTERNATIONAL CONFERENCE ON MINING SOFTWARE REPOSITORIES (MSR 2021), 2021, : 329 - 340
  • [26] EFFICIENT OCT IMAGE SEGMENTATION USING NEURAL ARCHITECTURE SEARCH
    Gheshlaghi, Saba Heidari
    Dehzangi, Omid
    Dahouei, Ali
    Amireskandari, Annahita
    Rezai, Ali
    Nasrabadi, Nasser M.
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 428 - 432
  • [27] Memory-Efficient Backpropagation for Recurrent Neural Networks
    Ayoub, Issa
    Al Osman, Hussein
    ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, 11489 : 274 - 283
  • [28] HYPERSPECTRAL IMAGE RECONSTRUCTION USING HIERARCHICAL NEURAL ARCHITECTURE SEARCH FROM A SNAPSHOT IMAGE
    Han, Xian-Hua
    Jiang, Huiyan
    Chen, Yen-Wei
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 2500 - 2504
  • [29] A Memory-Efficient Architecture for Low Latency Viterbi Decoders
    Tang, Yun-Ching
    Hu, Do-Chen
    Wei, Weiyi
    Lin, Wen-Chung
    Lin, Hongchin
    2009 INTERNATIONAL SYMPOSIUM ON VLSI DESIGN, AUTOMATION AND TEST (VLSI-DAT), PROCEEDINGS OF TECHNICAL PROGRAM, 2009, : 335 - 338
  • [30] A Memory-Efficient Hardware Architecture for Deformable Convolutional Networks
    Yu, Yue
    Luo, Jiapeng
    Mao, Wendong
    Wang, Zhongfeng
    2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021), 2021, : 140 - 145