Style Agnostic 3D Reconstruction via Adversarial Style Transfer

Times Cited: 0
Authors
Petersen, Felix [1 ]
Goldluecke, Bastian [1 ]
Deussen, Oliver [1 ]
Kuehne, Hilde [2 ,3 ]
Affiliations
[1] Univ Konstanz, Constance, Germany
[2] Goethe Univ Frankfurt, Frankfurt, Germany
[3] MIT-IBM Watson AI Lab, Cambridge, MA, USA
Keywords
DOI
10.1109/WACV51458.2022.00233
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This supervision can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables differentiable rendering-based learning of 3D objects from images with backgrounds, without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that translates the input image domain into the rendered image domain. This allows us to directly compare a translated image with the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and achieves better performance than constrained methods for single-view 3D object reconstruction on this task.
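The training signal described in the abstract can be sketched roughly as follows: a style-transfer network maps the input image (with background) into the renderer's output domain, a differentiable renderer produces an image from the current 3D shape estimate, and the two are compared directly, with an adversarial term keeping translated images inside the rendered domain. This is a minimal illustrative sketch, not the authors' implementation: `translator`, `render`, and `discriminator` are linear stand-ins for the real networks and renderer, and the loss weight is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 16, 4  # flattened image size and shape-parameter count (illustrative)

W = rng.standard_normal((D, D)) * 0.1  # translator weights (stand-in)
P = rng.standard_normal((K, D)) * 0.1  # "renderer" projection (stand-in)
V = rng.standard_normal(D) * 0.1       # discriminator weights (stand-in)

def translator(img):
    """Style-transfer stand-in: input-domain image -> rendered-image domain."""
    return np.tanh(img @ W)

def render(shape):
    """Differentiable-renderer stand-in: 3D shape parameters -> image."""
    return np.tanh(shape @ P)

def discriminator(img):
    """Critic stand-in: scores how much an image looks like renderer output."""
    return float(img @ V)

input_image = rng.standard_normal(D)  # observed image with background
shape = rng.standard_normal(K)        # current 3D reconstruction estimate

translated = translator(input_image)
rendered = render(shape)

# Reconstruction loss: compare in the *rendered* domain, so no silhouette
# or clean-background supervision is needed on the input side.
recon_loss = np.mean((translated - rendered) ** 2)

# Adversarial (least-squares-style) loss pushing translated images toward
# the rendered domain; label 1 plays the role of "real rendering" here.
adv_loss = (discriminator(translated) - 1.0) ** 2

total = recon_loss + 0.1 * adv_loss  # 0.1 is an illustrative weighting
```

In a real pipeline, gradients of `total` would flow through the differentiable renderer into the reconstruction network and through the translator, with the discriminator trained in alternation as in standard adversarial setups.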
Pages: 2273-2282
Page count: 10
Related Papers (50 total)
  • [21] Adversarial Separation Network for Text Style Transfer
    Yang, Haitong
    Zhou, Guangyou
    He, Tingting
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2022, 21 (02)
  • [22] Image Style Transfer with Generative Adversarial Networks
    Li, Ru
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2950 - 2954
  • [23] Adversarial training for fast arbitrary style transfer
    Xu, Zheng
    Wilber, Michael
    Fang, Chen
    Hertzmann, Aaron
    Jin, Hailin
    COMPUTERS & GRAPHICS-UK, 2020, 87 : 1 - 11
  • [24] Preserving Content in Text Style Transfer via Normalizing Flow and Adversarial Learning
    Dai, Jinqiao
    Chen, Pengsen
    Song, Yan
    Liu, Jiayong
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT IV, NLPCC 2024, 2025, 15362 : 303 - 316
  • [25] Hiding in Plain Sight: Adversarial Attack via Style Transfer on Image Borders
    Zhang, Haiyan
    Li, Xinghua
    Tang, Jiawei
    Peng, Chunlei
    Wang, Yunwei
    Zhang, Ning
    Miao, Yingbin
    Liu, Ximeng
    Choo, Kim-Kwang Raymond
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (10) : 2405 - 2419
  • [26] 3D Segmentation Guided Style-Based Generative Adversarial Networks for PET Synthesis
    Zhou, Yang
    Yang, Zhiwen
    Zhang, Hui
    Chang, Eric I-Chao
    Fan, Yubo
    Xu, Yan
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41 (08) : 2092 - 2104
  • [27] GEAST-RF: Geometry Enhanced 3D Arbitrary Style Transfer Via Neural Radiance Fields
    He, Dong
    Qian, Wenhua
    Cao, Jinde
    COMPUTERS & GRAPHICS-UK, 2025, 127
  • [28] Style Compatibility for 3D Furniture Models
    Liu, Tianqiang
    Hertzmann, Aaron
    Li, Wilmot
    Funkhouser, Thomas
    ACM TRANSACTIONS ON GRAPHICS, 2015, 34 (04):
  • [29] 3D Shape Synthesis via Content-Style Revealing Priors
    Remil, Oussama
    Xie, Qian
    Chen, Honghua
    Wang, Jun
    COMPUTER-AIDED DESIGN, 2019, 115 : 87 - 97
  • [30] Arbitrary style transfer via content consistency and style consistency
    Yu, Xiaoming
    Zhou, Gan
    VISUAL COMPUTER, 2024, 40 (03): : 1369 - 1382