Generative Latent Implicit Conditional Optimization when Learning from Small Sample

Cited by: 6
Authors
Azuri, Idan [1 ]
Weinshall, Daphna [1 ]
Institutions
[1] Hebrew Univ Jerusalem, Sch Comp Sci & Engn, Jerusalem, Israel
Funding
Israel Science Foundation;
Keywords
DOI
10.1109/ICPR48806.2021.9413259
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We revisit the long-standing problem of learning from small sample, to which end we propose a novel method called GLICO (Generative Latent Implicit Conditional Optimization). GLICO learns a mapping from the training examples to a latent space, and a generator that generates images from vectors in the latent space. Unlike most recent works, which rely on access to large amounts of unlabeled data, GLICO does not require access to any additional data other than the small set of labeled points. In fact, GLICO learns to synthesize completely new samples for every class using as little as 5 or 10 examples per class, with as few as 10 such classes without imposing any prior. GLICO is then used to augment the small training set while training a classifier on the small sample. To this end our proposed method samples the learned latent space using spherical interpolation, and generates new examples using the trained generator. Empirical results show that the new sampled set is diverse enough, leading to improvement in image classification in comparison with the state of the art, when trained on small samples obtained from CIFAR-10, CIFAR-100, and CUB-200.
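The spherical-interpolation sampling described in the abstract can be sketched as follows. This is a minimal illustration with NumPy, not the authors' code; the `generator` mentioned in the final comment is a hypothetical stand-in for GLICO's trained generator network.

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical linear interpolation between two latent vectors.

    Interpolates along the great-circle arc between z1 and z2 rather
    than along the straight chord, which better matches the geometry
    of high-dimensional Gaussian latent spaces.
    """
    z1_n = z1 / np.linalg.norm(z1)
    z2_n = z2 / np.linalg.norm(z2)
    # Angle between the two (normalized) latent vectors.
    omega = np.arccos(np.clip(np.dot(z1_n, z2_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * z1 + t * z2
    return (np.sin((1.0 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

# Sample a new latent code between the codes of two training examples
# from the same class, then decode it into a synthetic training image:
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=128), rng.normal(size=128)
z_new = slerp(z_a, z_b, t=rng.uniform(0.0, 1.0))
# image = generator(z_new)  # hypothetical trained generator
```

At t = 0 the function returns the first code and at t = 1 the second, so the sampled set interpolates smoothly between learned per-example codes.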
Pages: 8584 - 8591
Page count: 8
Related Papers
50 items in total
  • [21] When Newer is Not Better: Does Deep Learning Really Benefit Recommendation From Implicit Feedback?
    Dong, Yushun
    Li, Jundong
    Schnabel, Tobias
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 942 - 952
  • [22] Deep Learning-Based Ligand Design Using Shared Latent Implicit Fingerprints from Collaborative Filtering
    Srinivas, Raghuram
    Verma, Niraj
    Kraka, Elfi
    Larson, Eric C.
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2021, 61 (05) : 2159 - 2174
  • [23] A Small Sample Focused Intelligent Fault Diagnosis Scheme of Machines via Multimodules Learning With Gradient Penalized Generative Adversarial Networks
    Zhang, Tianci
    Chen, Jinglong
    Li, Fudong
    Pan, Tongyang
    He, Shuilong
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2021, 68 (10) : 10130 - 10141
  • [24] Use Caution When Extrapolating From a Small Sample-Size to the General-Population
    Benefiel, D. J.
    Eisler, E. A.
    Shepherd, R.
    ANESTHESIOLOGY, 1989, 70 (01) : 160 - 161
  • [25] Weld Defect Detection From Imbalanced Radiographic Images Based on Contrast Enhancement Conditional Generative Adversarial Network and Transfer Learning
    Guo, Runyuan
    Liu, Han
    Xie, Guo
    Zhang, Youmin
    IEEE SENSORS JOURNAL, 2021, 21 (09) : 10844 - 10853
  • [26] Small Sample Reliability Assessment With Online Time-Series Data Based on a Worm Wasserstein Generative Adversarial Network Learning Method
    Sun, Bo
    Wu, Zeyu
    Feng, Qiang
    Wang, Zili
    Ren, Yi
    Yang, Dezhen
    Xia, Quan
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (02) : 1207 - 1216
  • [27] Learning image filtering from a gold sample based on genetic optimization of morphological processing
    Rahnamayan, S
    Tizhoosh, HR
    Salama, MMA
    Adaptive and Natural Computing Algorithms, 2005, : 478 - 481
  • [28] Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs
    Wang, Yu-Xiong
    Hebert, Martial
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [29] New insights into olivo-cerebellar circuits for learning from a small training sample
    Tokuda, Isao T.
    Hoang, Huu
    Kawato, Mitsuo
    CURRENT OPINION IN NEUROBIOLOGY, 2017, 46 : 58 - 67
  • [30] Use Caution When Extrapolating From a Small Sample-Size to the General-Population - Reply
    Sears, D. H.
    Katz, R. L.
    ANESTHESIOLOGY, 1989, 70 (01) : 161 - 161