WL-GAN: Learning to sample in generative latent space

Cited: 0
Authors
Hou, Zeyi [1 ]
Lang, Ning [2 ]
Zhou, Xiuzhuang [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[2] Peking Univ Third Hosp, Beijing 100876, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
Generative adversarial networks; Markov chain Monte Carlo; Energy-based model; Mode dropping; Stochastic approximation
DOI
10.1016/j.ins.2024.121834
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Recent advances in generative latent space sampling for enhanced generation quality have demonstrated the benefits of the Energy-Based Model (EBM), which is often defined jointly by the generator and the discriminator of off-the-shelf Generative Adversarial Networks (GANs) of many types. However, such latent space sampling may still suffer from mode dropping, even when sampling in a low-dimensional latent space, due to the inherent complexity of data distributions with rugged energy landscapes. Motivated by the success of Wang-Landau (WL) sampling in statistical physics, we propose WL-GAN, a collaborative learning framework for generative latent space sampling, in which both the invariant distribution and the proposal distribution of the Markov chain are learned jointly on the fly by exploiting the historical statistics of the simulated samples. We show that the two learning modules work together to achieve a better balance between exploration and exploitation over the energy space in GAN sampling, alleviating mode dropping and improving the sample quality of GANs. Empirically, the efficacy of WL-GAN is demonstrated on both synthetic datasets and real-world image datasets, using multiple GANs. Code is available at https://github.com/zeyihou/collaborative-learn.
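To make the sampling setup in the abstract concrete, the following is a minimal, illustrative sketch of Wang-Landau-biased Metropolis sampling in a GAN's latent space. It assumes the common latent-space EBM construction in which the energy of a latent code combines the Gaussian prior with the discriminator logit of the generated sample; the generator G, discriminator D, energy-bin range, and update schedule below are placeholder assumptions, and the textbook Wang-Landau density-of-states update with a fixed random-walk proposal stands in for the paper's jointly learned invariant and proposal distributions.

```python
# Hedged sketch: Wang-Landau-biased Metropolis sampling in a GAN latent space.
# Assumptions (not from the paper): energy E(z) = ||z||^2/2 - logit(D(G(z))),
# G and D take batched latents / images, D outputs one logit per sample, and
# the bin range [e_min, e_max], step size, and flatness rule are illustrative.
import torch

def latent_energy(G, D, z):
    """Energy of one latent code under a prior + discriminator-logit EBM."""
    with torch.no_grad():
        logit = D(G(z.unsqueeze(0)))          # assumed: one scalar logit per sample
    return 0.5 * (z ** 2).sum() - logit.squeeze()

def wl_mh_sample(G, D, z_dim=128, n_steps=2000, step_size=0.1,
                 n_bins=50, e_min=-50.0, e_max=50.0, log_f=1.0):
    """Metropolis sampling whose acceptance is reweighted by a running
    log-density-of-states estimate, favouring rarely visited energy bins."""
    log_g = torch.zeros(n_bins)               # log density-of-states per energy bin
    hist = torch.zeros(n_bins)                # visit histogram for the flatness check
    bin_w = (e_max - e_min) / n_bins

    def bin_of(e):
        idx = int((float(e) - e_min) / bin_w)
        return max(0, min(n_bins - 1, idx))

    z = torch.randn(z_dim)
    e = latent_energy(G, D, z)
    samples = []
    for _ in range(n_steps):
        z_prop = z + step_size * torch.randn(z_dim)          # random-walk proposal
        e_prop = latent_energy(G, D, z_prop)
        # Wang-Landau acceptance: MH ratio for exp(-E) times g(E_old)/g(E_new)
        log_acc = (e - e_prop) + (log_g[bin_of(e)] - log_g[bin_of(e_prop)])
        if torch.log(torch.rand(())) < log_acc:
            z, e = z_prop, e_prop
        b = bin_of(e)
        log_g[b] += log_f                     # penalise the currently occupied bin
        hist[b] += 1
        samples.append(z.clone())
        # classical schedule: halve the modification factor when visits are roughly flat
        visited = hist[hist > 0]
        if len(visited) > 1 and visited.min() > 0.8 * visited.mean():
            log_f *= 0.5
            hist.zero_()
    return torch.stack(samples)
```

The bias term log_g penalizes energy bins that have already been visited often, pushing the chain toward rarely visited regions of the energy landscape; this is the exploration/exploitation trade-off the abstract refers to, here realized with the classical fixed-schedule Wang-Landau update rather than WL-GAN's learned modules.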
Pages: 16
Related papers
50 records in total
  • [31] L2M-GAN: Learning to Manipulate Latent Space Semantics for Facial Attribute Editing
    Yang, Guoxing
    Fei, Nanyi
    Ding, Mingyu
    Liu, Guangzhen
    Lu, Zhiwu
    Xiang, Tao
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2950 - 2959
  • [32] Learning to Importance Sample in Primary Sample Space
    Zheng, Quan
    Zwicker, Matthias
    COMPUTER GRAPHICS FORUM, 2019, 38 (02) : 169 - 179
  • [33] Bilevel Multiview Latent Space Learning
    Xue, Zhe
    Li, Guorong
    Wang, Shuhui
    Zhang, Weigang
    Huang, Qingming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2018, 28 (02) : 327 - 341
  • [34] BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning
    Deja, Kamil
    Wawrzynski, Pawel
    Marczak, Daniel
    Masarczyk, Wojciech
    Trzcinski, Tomasz
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [35] Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling
    Chen, Tianqi
    Zhou, Mingyuan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [36] Attention Autoencoder for Generative Latent Representational Learning in Anomaly Detection
    Oluwasanmi, Ariyo
    Aftab, Muhammad Umar
    Baagyere, Edward
    Qin, Zhiguang
    Ahmad, Muhammad
    Mazzara, Manuel
    SENSORS, 2022, 22 (01)
  • [37] Modelling Latent Travel Behaviour Characteristics with Generative Machine Learning
    Wong, Melvin
    Farooq, Bilal
    2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2018, : 749 - 754
  • [38] Searching the Latent Space of a Generative Adversarial Network to Generate DOOM Levels
    Giacomello, Edoardo
    Lanzi, Pier Luca
    Loiacono, Daniele
    2019 IEEE CONFERENCE ON GAMES (COG), 2019,
  • [39] Latent generative landscapes as maps of functional diversity in protein sequence space
    Ziegler, Cheyenne
    Martin, Jonathan
    Sinner, Claude
    Morcos, Faruck
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [40] Latent Space Visualization of Half Face and Full Face by Generative Model
    Zou, Min
    Akashi, Takuya
    FIFTEENTH INTERNATIONAL CONFERENCE ON QUALITY CONTROL BY ARTIFICIAL VISION, 2021, 11794