Learning to Generate Parameters of ConvNets for Unseen Image Data

Cited by: 0
Authors
Wang, Shiye [1 ]
Feng, Kaituo [1 ]
Li, Changsheng [1 ]
Yuan, Ye [1 ]
Wang, Guoren [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Keywords
Training; Task analysis; Correlation; Metalearning; Graphics processing units; Vectors; Adaptive systems; Parameter generation; Hypernetwork; Adaptive hyper-recurrent units
DOI
10.1109/TIP.2024.3445731
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, making training very time- and resource-intensive. In this paper, we propose a new training paradigm and formulate the parameter learning of ConvNets as a prediction task: given that correlations exist between image datasets and the corresponding optimal parameters of a given ConvNet, we explore whether we can learn a hyper-mapping between them to capture these relations, so that we can directly predict the parameters of the network for an image dataset never seen during the training phase. To do this, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters, and then predicts parameters for unseen data with only a single forward propagation. Moreover, our model benefits from a series of adaptive hyper-recurrent units that share weights to capture the dependencies of parameters among different network layers. Extensive experiments demonstrate that our proposed method achieves good efficacy for unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large-scale datasets, e.g., ImageNet-1K. It takes 8,967 GPU seconds to train ResNet-18 on ImageNet-1K using GC from scratch and obtain a top-5 accuracy of 44.65%, whereas PudNet costs only 3.89 GPU seconds to predict the network parameters of ResNet-18 while achieving comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
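The abstract describes a hypernetwork that summarizes a dataset into a context representation and then generates the parameters of a target ConvNet layer by layer, with weight-shared recurrent units propagating dependencies between layers. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea; the class names (DatasetEncoder, HyperRecurrentUnit), shapes, and dimensions are illustrative assumptions and do not reproduce PudNet's actual architecture or training objective.

```python
# Illustrative sketch only: a dataset encoder plus a weight-shared GRU cell
# that emits parameters for each target layer. Not PudNet's real implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DatasetEncoder(nn.Module):
    """Summarize a sample of images into a single dataset context vector."""
    def __init__(self, ctx_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, ctx_dim)

    def forward(self, images):                      # images: (N, 3, H, W)
        feats = self.features(images).flatten(1)    # (N, 64)
        return self.proj(feats).mean(dim=0)         # (ctx_dim,) dataset summary


class HyperRecurrentUnit(nn.Module):
    """A GRU cell shared across layers: its hidden state carries inter-layer
    dependencies while small per-layer heads emit the predicted parameters."""
    def __init__(self, ctx_dim, hidden_dim, layer_shapes):
        super().__init__()
        self.cell = nn.GRUCell(ctx_dim, hidden_dim)   # weights shared over layers
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, int(torch.tensor(s).prod())) for s in layer_shapes]
        )
        self.layer_shapes = layer_shapes

    def forward(self, ctx):
        h = torch.zeros(1, self.cell.hidden_size, device=ctx.device)
        params = []
        for head, shape in zip(self.heads, self.layer_shapes):
            h = self.cell(ctx.unsqueeze(0), h)        # propagate layer-to-layer state
            params.append(head(h).view(*shape))       # predicted weights for this layer
        return params


if __name__ == "__main__":
    # Two conv layers of a toy target network: (out_ch, in_ch, k, k)
    shapes = [(16, 3, 3, 3), (32, 16, 3, 3)]
    encoder = DatasetEncoder(ctx_dim=256)
    hyper = HyperRecurrentUnit(ctx_dim=256, hidden_dim=128, layer_shapes=shapes)

    support = torch.randn(64, 3, 32, 32)              # sample from an unseen dataset
    predicted = hyper(encoder(support))               # one forward pass, no SGD

    # Apply the predicted parameters functionally with F.conv2d.
    x = torch.randn(8, 3, 32, 32)
    x = F.relu(F.conv2d(x, predicted[0], padding=1))
    x = F.relu(F.conv2d(x, predicted[1], padding=1))
    print([p.shape for p in predicted], x.shape)
```

In the setting described by the abstract, the hypernetwork itself would be trained across many datasets so that, at test time, a single forward pass through the encoder and the recurrent unit yields usable parameters for a dataset never seen during training.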
Pages: 5577-5592
Page count: 16