Effective Data Augmentation with Multi-Domain Learning GANs

Cited by: 0
Authors
Yamaguchi, Shin'ya [1 ]
Kanai, Sekitoshi [1 ,2 ]
Eda, Takeharu [1 ]
Affiliations
[1] NTT Software Innovat Ctr, Tokyo, Japan
[2] Keio Univ, Tokyo, Japan
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
For deep learning applications, large-scale data development (e.g., collecting and labeling), an essential process in building practical applications, still incurs seriously high costs. In this work, we propose an effective data augmentation method based on generative adversarial networks (GANs), called Domain Fusion. Our key idea is to import the knowledge contained in an outer dataset into a target model by using a multi-domain learning GAN. The multi-domain learning GAN simultaneously learns the outer and target datasets and generates new samples for the target tasks. This simultaneous learning process enables the GAN to generate target samples with high fidelity and variety. As a result, we can obtain accurate models for the target tasks by using these generated samples even when only an extremely small target dataset is available. We experimentally evaluate the advantages of Domain Fusion on image classification tasks with three target datasets: CIFAR-100, FGVC-Aircraft, and Indoor Scene Recognition. When each target dataset is reduced to 5,000 images, Domain Fusion achieves higher classification accuracy than data augmentation using fine-tuned GANs. Furthermore, we show that Domain Fusion improves the quality of the generated samples, and these improvements contribute to higher accuracy.
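The abstract describes the mechanism only at a high level, so the toy sketch below illustrates the core idea: a single conditional GAN trained simultaneously on a low-volume target dataset (domain 0) and a larger outer dataset (domain 1), with a domain label conditioning both networks, and the target domain sampled afterwards to augment training data. Everything here (the MLP architecture, embedding-based conditioning, the standard BCE adversarial loss, and all sizes) is an illustrative assumption for a multi-domain learning GAN, not the authors' actual Domain Fusion implementation.

# Minimal PyTorch sketch of multi-domain GAN training for data augmentation.
# Architecture, loss, and hyperparameters are assumed for illustration only.
import torch
import torch.nn as nn

Z_DIM, N_DOMAINS, IMG_DIM = 64, 2, 32 * 32 * 3  # assumed toy sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_DOMAINS, Z_DIM)  # domain conditioning
        self.net = nn.Sequential(
            nn.Linear(2 * Z_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())

    def forward(self, z, domain):
        return self.net(torch.cat([z, self.embed(domain)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_DOMAINS, IMG_DIM)  # domain conditioning
        self.net = nn.Sequential(
            nn.Linear(2 * IMG_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, x, domain):
        return self.net(torch.cat([x, self.embed(domain)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_target, real_outer):
    """One simultaneous update on a target batch (domain 0) and an outer batch (domain 1)."""
    x = torch.cat([real_target, real_outer])
    d = torch.cat([torch.zeros(len(real_target), dtype=torch.long),
                   torch.ones(len(real_outer), dtype=torch.long)])
    z = torch.randn(len(x), Z_DIM)
    fake = G(z, d)
    # Discriminator: real vs. fake, conditioned on the domain label.
    loss_d = bce(D(x, d), torch.ones(len(x), 1)) + \
             bce(D(fake.detach(), d), torch.zeros(len(x), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator in both domains at once,
    # so knowledge from the outer domain shapes the shared weights.
    loss_g = bce(D(fake, d), torch.ones(len(x), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Toy usage with random tensors standing in for flattened image batches:
train_step(torch.randn(8, IMG_DIM), torch.randn(8, IMG_DIM))

# After training, the low-volume target set would be augmented by sampling
# the target domain only, e.g.:
#   synthetic = G(torch.randn(5000, Z_DIM), torch.zeros(5000, dtype=torch.long))

Sampling only domain 0 at augmentation time is the point of the shared model: the generator's weights are fitted on both datasets, so target-domain samples can benefit from the outer dataset's variety without the augmentation set leaving the target domain.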
Pages: 6566-6574
Number of pages: 9