Learning to Generate Parameters of ConvNets for Unseen Image Data

Cited: 0
Authors
Wang, Shiye [1 ]
Feng, Kaituo [1 ]
Li, Changsheng [1 ]
Yuan, Ye [1 ]
Wang, Guoren [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Keywords
Training; Task analysis; Correlation; Meta-learning; Graphics processing units; Vectors; Adaptive systems; Parameter generation; hypernetwork; adaptive hyper-recurrent units
DOI
10.1109/TIP.2024.3445731
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, making training very time- and resource-intensive. In this paper, we propose a new training paradigm and formulate the parameter learning of ConvNets as a prediction task: given that correlations exist between image datasets and the corresponding optimal network parameters of a given ConvNet, we explore whether we can learn a hyper-mapping between them to capture these relations, so that we can directly predict the parameters of the network for an image dataset never seen during the training phase. To this end, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters, and then predicts parameters for unseen data with only a single forward pass. Moreover, our model benefits from a series of adaptive hyper-recurrent units sharing weights to capture the dependencies of parameters among different network layers. Extensive experiments demonstrate that our proposed method achieves good efficacy for unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large datasets, e.g., ImageNet-1K. Training ResNet-18 on ImageNet-1K from scratch using GC takes 8,967 GPU seconds and yields a top-5 accuracy of 44.65%, whereas PudNet needs only 3.89 GPU seconds to predict the parameters of ResNet-18 with comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
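To make the abstract's idea concrete, below is a minimal toy sketch (not the paper's implementation) of the core mechanism it describes: a hypernetwork that maps a dataset-level context vector to the parameters of a target ConvNet, using one recurrent cell whose weights are shared across layers, in the spirit of the adaptive hyper-recurrent units. All names, sizes, and the plain-numpy formulation are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes, not taken from the paper.
CTX = 64                              # dataset context embedding size
HID = 128                             # hidden state of the shared cell
LAYER_SIZES = [3 * 16 * 3 * 3,        # flattened 3->16 conv kernel
               16 * 32 * 3 * 3]       # flattened 16->32 conv kernel

rng = np.random.default_rng(0)

# One set of recurrent-cell weights, reused for every target layer,
# so dependencies between layers flow through the hidden state.
W_h = rng.normal(0.0, 0.05, (HID, HID))
W_x = rng.normal(0.0, 0.05, (HID, CTX))

# A small per-layer head maps the hidden state to that layer's
# flattened parameter vector.
heads = [rng.normal(0.0, 0.05, (n, HID)) for n in LAYER_SIZES]

def predict_params(context):
    """Single forward pass: dataset context -> per-layer parameters."""
    h = np.zeros(HID)
    params = []
    for head in heads:
        h = np.tanh(W_h @ h + W_x @ context)  # shared recurrent cell
        params.append(head @ h)               # layer-specific head
    return params

context = rng.normal(size=CTX)    # stand-in for a learned dataset embedding
params = predict_params(context)
print([p.shape for p in params])  # [(432,), (4608,)]
```

In the actual method the context comes from the dataset itself and the hypernetwork is trained across many datasets; this sketch only shows why a single forward pass, rather than iterative optimization, can emit one parameter vector per target layer.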
Pages: 5577-5592
Page count: 16
Related Papers (50 total)
  • [1] CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition
    Schor, Nadav
    Katzir, Oren
    Zhang, Hao
    Cohen-Or, Daniel
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8758 - 8767
  • [2] A Survey of Automated Data Augmentation for Image Classification: Learning to Compose, Mix, and Generate
    Cheung, Tsz-Him
    Yeung, Dit-Yan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10) : 13185 - 13205
  • [3] Fine-tuning ConvNets with Novel Leather Image Data for Species Identification
    Varghese, Anjli
    Jawahar, Malathy
    Prince, A. Amalin
    FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION, ICMV 2022, 2023, 12701
  • [4] Crowdsourcing image analysis for plant phenomics to generate ground truth data for machine learning
    Zhou, Naihui
    Siegel, Zachary D.
    Zarecor, Scott
    Lee, Nigel
    Campbell, Darwin A.
    Andorf, Carson M.
    Nettleton, Dan
    Lawrence-Dill, Carolyn J.
    Ganapathysubramanian, Baskar
    Kelly, Jonathan W.
    Friedberg, Iddo
    PLOS COMPUTATIONAL BIOLOGY, 2018, 14 (07)
  • [5] What Convnets Make for Image Captioning?
    Liu, Yu
    Guo, Yanming
    Lew, Michael S.
    MULTIMEDIA MODELING (MMM 2017), PT I, 2017, 10132 : 416 - 428
  • [6] WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation
    Durand, Thibaut
    Mordan, Taylor
    Thome, Nicolas
    Cord, Matthieu
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 5957 - 5966
  • [7] Meta-learning for Classifying Previously Unseen Data Source into Previously Unseen Emotional Categories
    Guibon, Gael
    Labeau, Matthieu
    Flamein, Helene
    Lefeuvre, Luce
    Clavel, Chloe
    1ST WORKSHOP ON META LEARNING AND ITS APPLICATIONS TO NATURAL LANGUAGE PROCESSING (METANLP 2021), 2021, : 76 - 89
  • [8] Learning to Generate Synthetic Data via Compositing
    Tripathi, Shashank
    Chandra, Siddhartha
    Agrawal, Amit
    Tyagi, Ambrish
    Rehg, James M.
    Chari, Visesh
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 461 - 470
  • [9] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
    Hansen, Nicklas
    Su, Hao
    Wang, Xiaolong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [10] Learning to Validate the Predictions of Black Box Classifiers on Unseen Data
    Schelter, Sebastian
    Rukat, Tammo
    Biessmann, Felix
    SIGMOD'20: PROCEEDINGS OF THE 2020 ACM SIGMOD INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2020, : 1289 - 1299