WL-GAN: Learning to sample in generative latent space

Cited by: 0
Authors
Hou, Zeyi [1 ]
Lang, Ning [2 ]
Zhou, Xiuzhuang [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[2] Peking Univ Third Hosp, Beijing 100876, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Generative adversarial networks; Markov chain Monte Carlo; Energy-based model; Mode dropping; Stochastic approximation;
DOI
10.1016/j.ins.2024.121834
CLC classification
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Recent advances in generative latent space sampling for enhanced generation quality have demonstrated the benefits of the Energy-Based Model (EBM), which is often defined jointly by the generator and the discriminator of many types of off-the-shelf Generative Adversarial Networks (GANs). However, such latent space sampling may still suffer from mode dropping even when sampling in a low-dimensional latent space, due to the inherent complexity of data distributions with rugged energy landscapes. Motivated by the success of Wang-Landau (WL) sampling in statistical physics, we propose WL-GAN, a collaborative learning framework for generative latent space sampling, in which both the invariant distribution and the proposal distribution of the Markov chain are jointly learned on the fly by exploiting the historical statistics of the simulated samples. We show that the two learning modules work together to achieve a better balance between exploration and exploitation over the energy space in GAN sampling, alleviating mode dropping and improving the sample quality of GANs. Empirically, the efficacy of WL-GAN is demonstrated on both synthetic datasets and real-world image datasets, using multiple GANs. Code is available at https://github.com/zeyihou/collaborative-learn.
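For readers unfamiliar with Wang-Landau sampling in this setting, the following is a minimal, hypothetical sketch (not the authors' released implementation) of flat-histogram sampling over a latent-space energy. It assumes the common discriminator-driven choice E(z) = 0.5 * ||z||^2 - logit(D(G(z))) and replaces the trained generator and discriminator with a toy two-mode energy so the code runs stand-alone; the names energy and wang_landau_sample are illustrative. The joint learning of the proposal distribution described in the abstract is omitted; only the histogram-based reweighting of the invariant distribution is shown, with the modification factor annealed per step in the spirit of stochastic-approximation variants.

```python
import numpy as np

# Stand-in for a pretrained GAN's latent energy (hypothetical).
# A common choice is E(z) = 0.5 * ||z||^2 - logit(D(G(z))); here the
# discriminator term is replaced by a toy two-mode function so the
# sketch runs without any trained networks.
def energy(z):
    prior = 0.5 * np.sum(z ** 2)
    toy_logit = np.logaddexp(-np.sum((z - 2.0) ** 2), -np.sum((z + 2.0) ** 2))
    return prior - toy_logit

# Wang-Landau-style flat-histogram sampling in latent space (a minimal sketch).
def wang_landau_sample(dim=2, n_steps=20000, n_bins=50, e_min=-5.0, e_max=25.0,
                       step_size=0.5, f_init=1.0, f_decay=0.999, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(e_min, e_max, n_bins + 1)
    log_dos = np.zeros(n_bins)   # running estimate of the log density of states S(E)
    hist = np.zeros(n_bins)      # visit histogram over energy bins
    f = f_init                   # modification factor, annealed over time

    def bin_of(e):
        return int(np.clip(np.searchsorted(edges, e) - 1, 0, n_bins - 1))

    z = rng.standard_normal(dim)
    e = energy(z)
    b = bin_of(e)
    samples = []
    for _ in range(n_steps):
        z_new = z + step_size * rng.standard_normal(dim)   # random-walk proposal
        e_new = energy(z_new)
        b_new = bin_of(e_new)
        # Flat-histogram acceptance: favor rarely visited energy levels, which
        # helps the chain escape deep modes instead of getting trapped.
        log_alpha = log_dos[b] - log_dos[b_new]
        if log_alpha >= 0.0 or rng.random() < np.exp(log_alpha):
            z, e, b = z_new, e_new, b_new
        log_dos[b] += f          # penalize the currently occupied energy bin
        hist[b] += 1
        f *= f_decay             # anneal toward ordinary MCMC behavior
        # Note: under this flat-histogram schedule the visited states are not
        # samples from exp(-E) unless reweighted or until f has annealed away;
        # the sketch only illustrates the exploration mechanism.
        samples.append(z.copy())
    return np.array(samples), log_dos, hist

if __name__ == "__main__":
    samples, log_dos, hist = wang_landau_sample()
    print("visited energy bins:", int((hist > 0).sum()), "of", hist.size)
```

The flat-histogram acceptance pushes the chain toward rarely visited energy levels (exploration), while the decaying modification factor gradually recovers ordinary MCMC behavior (exploitation), mirroring the exploration-exploitation balance described in the abstract.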
Pages: 16