AutoDerain: Memory-efficient Neural Architecture Search for Image Deraining

Cited by: 0
Authors
Fu, Jun [1 ]
Hou, Chen [1 ]
Chen, Zhibo [1 ]
Affiliations
[1] Univ Sci & Technol China, CAS Key Lab Technol Geospatial Informat Proc & Ap, Hefei 230027, Peoples R China
Keywords
memory-efficient; differentiable architecture search; image deraining
DOI
10.1109/VCIP53242.2021.9675339
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning-based image deraining methods have achieved remarkable success in the past few decades. Currently, most deraining architectures are developed by human experts, which is a laborious and error-prone process. In this paper, we present a study on employing neural architecture search (NAS) to automatically design deraining architectures, dubbed AutoDerain. Specifically, we first propose a U-shaped deraining architecture that mainly consists of residual squeeze-and-excitation blocks (RSEBs). Then, we define a search space in which we search over the convolution types and the use of the squeeze-and-excitation block. Considering that differentiable architecture search is memory-intensive, we propose a memory-efficient differentiable architecture search scheme (MDARTS). In light of the success of training binary neural networks, MDARTS optimizes the architecture parameters through the proximal gradient, consuming only as much GPU memory as training a single deraining model. Experimental results demonstrate that the architecture designed by MDARTS is superior to manually designed derainers.
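To make the memory-saving idea concrete, the sketch below illustrates in PyTorch how a DARTS-style searchable layer can be kept to roughly the GPU memory of a single fixed model: only the currently selected candidate operation is executed in the forward pass, and the continuous architecture parameters are then updated with a proximal-style step. This is a minimal sketch under stated assumptions, not the authors' MDARTS implementation; the names MixedOp and proximal_step, the two candidate convolutions, and the clamp-based projection are all illustrative choices.

# Minimal sketch of a memory-efficient DARTS-style searchable layer.
# NOT the authors' MDARTS code: MixedOp, proximal_step, the candidate
# operations, and the clamp projection are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """One searchable layer with two candidate convolutions.

    Vanilla DARTS sums the softmax-weighted outputs of every candidate,
    which keeps all candidate feature maps in memory. Here only the
    argmax candidate is executed, so activation memory roughly matches
    that of training a single deraining model.
    """

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),              # plain 3x3 conv
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # dilated 3x3 conv
        ])
        # Continuous architecture parameters, one per candidate.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        idx = int(self.alpha.argmax())        # binarized choice: run one op only
        out = self.ops[idx](x)
        w = F.softmax(self.alpha, dim=0)[idx]
        # Straight-through-style scaling: the forward value is unchanged
        # (w / w.detach() == 1), but gradients still reach self.alpha.
        return out * (w / w.detach())


def proximal_step(alpha, lr=0.01):
    """Illustrative proximal-style update for the architecture parameters:
    a gradient step followed by projection back onto [0, 1]."""
    with torch.no_grad():
        if alpha.grad is not None:
            alpha -= lr * alpha.grad
            alpha.grad = None
        alpha.clamp_(0.0, 1.0)


if __name__ == "__main__":
    layer = MixedOp(channels=8)
    x = torch.randn(1, 8, 32, 32)
    loss = layer(x).abs().mean()
    loss.backward()                # gradients for both conv weights and alpha
    proximal_step(layer.alpha)     # update the architecture parameters
    print(layer.alpha)

In this toy setup, alternating such architecture-parameter steps with ordinary weight updates searches over the candidates while never materializing more than one candidate's activations at a time, which is the memory property the abstract attributes to MDARTS.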
Pages: 5