WildLight: In-the-wild Inverse Rendering with a Flashlight

Cited by: 0
|
Authors
Cheng, Ziang [1 ]
Li, Junxuan [1 ]
Li, Hongdong [1 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
Keywords
MULTIVIEW PHOTOMETRIC STEREO;
DOI
10.1109/CVPR52729.2023.00419
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a practical photometric solution to the challenging problem of in-the-wild inverse rendering under unknown ambient lighting. Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone. The key idea is to exploit the smartphone's built-in flashlight as a minimally controlled light source, and to decompose image intensities into two photometric components: a static appearance corresponding to ambient flux, plus a dynamic reflection induced by the moving flashlight. Our method does not require flash/no-flash images to be captured in pairs. Building on the success of neural light fields, we use an off-the-shelf method to capture the ambient reflections, while the flashlight component enables physically accurate photometric constraints to decouple reflectance and illumination. Compared to existing inverse rendering methods, our setup is applicable to non-darkroom environments yet sidesteps the inherent difficulties of explicitly solving for ambient reflections. We demonstrate through extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques. Finally, our neural reconstruction can be easily exported to a PBR-textured triangle mesh ready for industrial renderers. Our source code and data are released at https://github.com/za-cheng/WildLight.
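The two-component decomposition described in the abstract can be sketched numerically: the observed pixel intensity is a static ambient term plus a dynamic flashlight term with inverse-square falloff, so frames with and without flash differ only by the flash component. The sketch below is illustrative only, assuming a simplified Lambertian flash model and hypothetical names; it is not the paper's actual implementation.

```python
import numpy as np

def flash_reflection(albedo, normal, point, flash_pos, flash_intensity=1.0):
    """Lambertian reflection from a point flashlight co-located with the
    camera, with inverse-square falloff (a common simplified model)."""
    to_light = flash_pos - point
    dist2 = float(np.dot(to_light, to_light))
    wi = to_light / np.sqrt(dist2)                 # unit direction to light
    cos_theta = max(float(np.dot(normal, wi)), 0.0)
    return albedo / np.pi * flash_intensity * cos_theta / dist2

def observed_intensity(ambient, albedo, normal, point, flash_pos, flash_on):
    """Observed intensity = static ambient appearance + optional flash term."""
    flash = flash_reflection(albedo, normal, point, flash_pos) if flash_on else 0.0
    return ambient + flash

# One surface point seen under ambient light alone vs. ambient + flashlight.
p = np.array([0.0, 0.0, 0.0])          # surface point
n = np.array([0.0, 0.0, 1.0])          # surface normal
cam = np.array([0.0, 0.0, 2.0])        # flashlight at the camera position
no_flash = observed_intensity(0.2, 0.8, n, p, cam, flash_on=False)
with_flash = observed_intensity(0.2, 0.8, n, p, cam, flash_on=True)
# The flash-induced component is the difference; it is independent of the
# (unknown) ambient term, which is what makes the decoupling possible.
flash_only = with_flash - no_flash
```

Note that, as the abstract states, flash/no-flash frames need not be captured in pairs; the difference above is shown only to make the decomposition concrete.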
Pages: 4305-4314
Page count: 10