Learning to Generate Parameters of ConvNets for Unseen Image Data

Times Cited: 0
Authors
Wang, Shiye [1 ]
Feng, Kaituo [1 ]
Li, Changsheng [1 ]
Yuan, Ye [1 ]
Wang, Guoren [1 ]
Affiliation
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Keywords
Training; Task analysis; Correlation; Metalearning; Graphics processing units; Vectors; Adaptive systems; Parameter generation; hypernetwork; adaptive hyper-recurrent units;
DOI
10.1109/TIP.2024.3445731
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive. In this paper, we propose a new training paradigm and formulate the parameter learning of ConvNets as a prediction task: given that correlations exist between image datasets and the corresponding optimal parameters of a fixed ConvNet, we explore whether a hyper-mapping between them can be learned to capture these relations, so that the network's parameters can be predicted directly for an image dataset never seen during the training phase. To do this, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters and then predicts parameters for unseen data with only a single forward propagation. Moreover, our model benefits from a series of adaptive hyper-recurrent units sharing weights to capture the dependencies of parameters among different network layers. Extensive experiments demonstrate that our method is effective for unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large-scale datasets, e.g., ImageNet-1K. Training ResNet-18 on ImageNet-1K from scratch using GC takes 8,967 GPU seconds and yields a top-5 accuracy of 44.65%, whereas PudNet needs only 3.89 GPU seconds to predict the parameters of ResNet-18 with comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
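The parameter-prediction idea sketched in the abstract can be made concrete with a small example. The snippet below is not the authors' PudNet implementation; it is a minimal PyTorch sketch of a hypernetwork that maps a dataset summary to the weights of a toy ConvNet in one forward pass, with a GRU cell shared across layers standing in for the paper's adaptive hyper-recurrent units. All module names, dimensions, and the dataset-summary scheme (mean of per-image features) are assumptions made for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerwiseHyperNet(nn.Module):
    """Maps a dataset embedding to per-layer ConvNet weights.

    A single GRUCell shared across layers stands in for the paper's
    adaptive hyper-recurrent units: its hidden state carries information
    about already-generated layers while generating the next one.
    """

    def __init__(self, ctx_dim, hidden_dim, layer_shapes):
        super().__init__()
        self.layer_shapes = layer_shapes                 # e.g. [(16, 3, 3, 3), (32, 16, 3, 3)]
        self.rnn = nn.GRUCell(ctx_dim, hidden_dim)       # shared across all layers
        # one small head per layer maps the hidden state to that layer's weights
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, math.prod(s)) for s in layer_shapes
        )

    def forward(self, dataset_embedding):
        h = torch.zeros(dataset_embedding.size(0), self.rnn.hidden_size,
                        device=dataset_embedding.device)
        weights = []
        for shape, head in zip(self.layer_shapes, self.heads):
            h = self.rnn(dataset_embedding, h)           # condition on dataset + previous layers
            weights.append(head(h).view(-1, *shape))     # one weight tensor per dataset in the batch
        return weights


def summarize_dataset(images, encoder):
    """Toy dataset summary: the mean of per-image features (an assumption)."""
    with torch.no_grad():
        feats = encoder(images)                          # (N, ctx_dim)
    return feats.mean(dim=0, keepdim=True)               # (1, ctx_dim)


if __name__ == "__main__":
    ctx_dim, hidden_dim = 64, 128
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, ctx_dim))
    shapes = [(16, 3, 3, 3), (32, 16, 3, 3)]             # two conv layers of a toy ConvNet
    hyper = LayerwiseHyperNet(ctx_dim, hidden_dim, shapes)

    images = torch.randn(8, 3, 32, 32)                   # stand-in for an unseen image dataset
    ctx = summarize_dataset(images, encoder)
    conv_weights = hyper(ctx)                            # one forward pass -> all layer weights

    # apply the predicted weights functionally to a query image
    x = torch.randn(1, 3, 32, 32)
    x = F.relu(F.conv2d(x, conv_weights[0][0], padding=1))
    x = F.relu(F.conv2d(x, conv_weights[1][0], padding=1))
    print(x.shape)                                       # torch.Size([1, 32, 32, 32])
```

How such a hypernetwork would be trained (e.g., against task losses over many source datasets) is omitted here; the sketch only illustrates the single-forward-pass prediction path described in the abstract.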
Pages: 5577-5592
Number of pages: 16