Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models

Cited by: 0
Authors
Nitanda, Atsushi [1 ,2 ]
Suzuki, Taiji [1 ,2 ,3 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo, Japan
[2] RIKEN, Ctr Adv Intelligence Project, Wako, Saitama, Japan
[3] Japan Sci & Technol Agcy, PRESTO, Tokyo, Japan
Keywords
(none listed)
DOI
Not available
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a new technique that boosts the convergence of training generative adversarial networks. In general, the convergence rate of deep-model training degrades severely after many iterations. A key reason for this phenomenon is that a deep network is a highly non-convex finite-dimensional model, so its parameters get stuck in local optima. As a consequence, training suffers not only from slowed convergence but also from limits on the representational power of the resulting network. To overcome this issue, we propose an additional layer, called the gradient layer, that seeks a descent direction in an infinite-dimensional function space. Because the layer is constructed in this infinite-dimensional space, it is not restricted by the structure of any particular finite-dimensional model; it can therefore escape the local optima of finite-dimensional models and move more directly toward the globally optimal function. We explain this behavior by viewing the gradient layer as a functional gradient method. Interestingly, the optimization procedure using the gradient layer naturally builds up the deep structure of the network. Moreover, we show that this procedure can be regarded as a discretization of a gradient flow that naturally reduces the objective function. Finally, we validate the method in several numerical experiments, which demonstrate its fast convergence.
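
To make the abstract's mechanism concrete, here is a minimal sketch in Python/PyTorch. It is a reconstruction under stated assumptions, not the authors' implementation: we assume the gradient layer maps a sample x to x + eta * grad_x h(x), where h is a discriminator-derived scalar potential, so that stacking such layers is an Euler discretization of the gradient flow the abstract describes. The names GradientLayer and potential are hypothetical.

import torch

# Hypothetical reconstruction of a "gradient layer" based on the abstract:
# the layer moves each input sample along the gradient of a scalar
# potential h(x) (in a GAN, h would be derived from the discriminator).
# Stacking such layers performs Euler steps of the gradient flow
# dx/dt = grad h(x), i.e., the discretization the abstract describes.
class GradientLayer(torch.nn.Module):
    def __init__(self, potential, step_size=0.25):
        super().__init__()
        self.potential = potential    # assumed scalar-valued potential h(x)
        self.step_size = step_size    # Euler step size

    def forward(self, x):
        x = x.detach().requires_grad_(True)
        h = self.potential(x).sum()           # sum over the batch
        (grad,) = torch.autograd.grad(h, x)   # per-sample gradient of h
        return (x + self.step_size * grad).detach()

# Toy usage: a quadratic stand-in potential maximized at the origin.
potential = lambda x: -(x ** 2).sum(dim=1)
layer = GradientLayer(potential)
x = torch.randn(4, 2)
for _ in range(10):   # each application adds one layer of "depth"
    x = layer(x)
print(x)              # samples have contracted toward the origin

Repeated application both lowers the toy objective and adds depth, which mirrors the abstract's remark that the optimization procedure naturally constructs the deep structure of the network.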
Pages: 9
Related Papers
50 records in total
  • [1] Training Generative Adversarial Networks with Adaptive Composite Gradient
    Qi, Huiqing
    Li, Fang
    Tan, Shengli
    Zhang, Xiangyun
    [J]. DATA INTELLIGENCE, 2024, 6 (01) : 120 - 157
  • [2] Composite Functional Gradient Learning of Generative Adversarial Models
    Johnson, Rie
    Zhang, Tong
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018
  • [3] PolicyGAN: Training generative adversarial networks using policy gradient
    Paria, Biswajit
    Lahiri, Avisek
    Biswas, Prabir Kumar
    [J]. 2017 NINTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION (ICAPR), 2017, : 151 - 156
  • [4] A Framework of Composite Functional Gradient Methods for Generative Adversarial Models
    Johnson, Rie
    Zhang, Tong
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (01) : 17 - 32
  • [5] Exploring generative adversarial networks and adversarial training
    Institute of Information Technology, University of Dhaka, Dhaka, Bangladesh
    [J]. Int. J. Cogn. Comp. Eng., : 78 - 89
  • [6] Gradient Normalization for Generative Adversarial Networks
    Wu, Yi-Lun
    Shuai, Hong-Han
    Tam, Zhi-Rui
    Chiu, Hong-Yu
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6353 - 6362
  • [7] GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models
    Farnia, Farzan
    Wang, William W.
    Das, Subhro
    Jadbabaie, Ali
    [J]. SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2023, 5 (01) : 122 - 146
  • [8] Adversarial Training Time Attack Against Discriminative and Generative Convolutional Models
    Chaudhury, Subhajit
    Roy, Hiya
    Mishra, Sourav
    Yamasaki, Toshihiko
    [J]. IEEE ACCESS, 2021, 9 : 109241 - 109259
  • [9] Adversarial examples for generative models
    Kos, Jernej
    Fischer, Ian
    Song, Dawn
    [J]. 2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 36 - 42
  • [10] Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent
    Wang, Dong
    Qin, Xiaoqian
    Song, Fengyi
    Cheng, Li
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (07) : 2768 - 2780