Adaptive Incident Radiance Field Sampling and Reconstruction Using Deep Reinforcement Learning

Cited by: 22
Authors
Huo, Yuchi [1 ]
Wang, Rui [2 ]
Zheng, Ruzhang [2]
Xu, Hualin [2 ]
Bao, Hujun [2 ]
Yoon, Sung-Eui [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] Zhejiang Univ, State Key Lab CAD&CG, Hangzhou, Peoples R China
[3] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
ACM TRANSACTIONS ON GRAPHICS | 2020, Vol. 39, No. 1
Funding
National Key Research and Development Program of China;
Keywords
Incident radiance field; deep neural network; adaptive sampling; NEURAL-NETWORKS;
DOI
10.1145/3368313
CLC number
TP31 [Computer software];
Discipline codes
081202 ; 0835 ;
Abstract
Severe noise degrades global illumination renderings produced by Monte Carlo (MC) path tracing when too few samples are used. The two common remedies are filtering the noisy inputs to produce smooth but biased results, and sampling the MC integrand with a carefully crafted probability density function (PDF) to produce unbiased results. Both benefit from an efficient algorithm for sampling and reconstructing the incident radiance field. This study proposes a method for training quality and reconstruction networks (Q- and R-networks, respectively) on a massive offline dataset for the adaptive sampling and reconstruction of first-bounce incident radiance fields. The convolutional neural network (CNN)-based R-network reconstructs the incident radiance field in a 4D space, whereas the deep reinforcement learning (DRL)-based Q-network predicts and guides the adaptive sampling process. The approach is verified by comparison with state-of-the-art unbiased path-guiding methods and filtering methods. Results demonstrate improvements for unbiased path guiding and competitive performance in biased applications, including filtering and irradiance caching.
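The core idea in the abstract, a learned per-pixel quality prediction guiding how a fixed sample budget is distributed, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the NumPy-based quality map standing in for the Q-network's output, and the integer budget-allocation scheme are all illustrative assumptions.

```python
import numpy as np

def adaptive_sample_allocation(quality_map, total_budget):
    """Allocate an integer sample budget across pixels in proportion
    to a predicted per-pixel quality (expected error-reduction) map."""
    q = np.maximum(quality_map, 0.0)
    # Normalize into a sampling distribution; fall back to uniform
    # if the predicted quality is zero everywhere.
    p = q / q.sum() if q.sum() > 0 else np.full(q.shape, 1.0 / q.size)
    counts = np.floor(p * total_budget).astype(int)
    # Hand out the samples lost to flooring, highest probability first.
    remainder = total_budget - counts.sum()
    order = np.argsort(p, axis=None)[::-1]
    for idx in order[:remainder]:
        counts.flat[idx] += 1
    return counts
```

In the paper's actual pipeline the quality map is produced by the DRL-trained Q-network over the 4D incident radiance field, and reconstruction is handled by the CNN-based R-network rather than a simple per-pixel scheme.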
Pages: 17
Related papers
50 records total
  • [1] ASRL: An Adaptive GPS Sampling Method Using Deep Reinforcement Learning
    Qu, Boting
    Zhao, Mengjiao
    Feng, Jun
    Wang, Xin
    [J]. 2022 23RD IEEE INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT (MDM 2022), 2022, : 153 - 158
  • [2] A Novel Adaptive Sampling Strategy for Deep Reinforcement Learning
    Liang, Xingxing
    Chen, Li
    Feng, Yanghe
    Liu, Zhong
    Ma, Yang
    Huang, Kuihua
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS, 2021, 20 (02)
  • [3] Image adaptive sampling using reinforcement learning
    Gong, Wenyong
    Fan, Xu-Qian
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (2) : 5511 - 5530
  • [4] Multi-robot Information Sampling Using Deep Mean Field Reinforcement Learning
    Said, Tuft
    Wolbert, Jeffery
    Khodadadeh, Siavash
    Dutta, Ayan
    Kreidl, O. Patrick
    Boloni, Ladislau
    Roy, Swapnoneel
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 1215 - 1220
  • [5] Deep Adaptive Sampling and Reconstruction using Analytic Distributions
    Salehi, Farnood
    Manzi, Marco
    Roethlin, Gerhard
    Weber, Romann
    Schroers, Christopher
    Papas, Marios
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2022, 41 (06)
  • [6] Predictive Energy-Aware Adaptive Sampling with Deep Reinforcement Learning
    Heo, Seonyeong
    Mayer, Philipp
    Magno, Michele
    [J]. 2022 29TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS AND SYSTEMS (IEEE ICECS 2022), 2022
  • [7] Data-driven Energy-efficient Adaptive Sampling Using Deep Reinforcement Learning
    Demirel, Berken Utku
    Chen, Luke
    Al Faruque, Mohammad Abdullah
    [J]. ACM TRANSACTIONS ON COMPUTING FOR HEALTHCARE, 2023, 4 (03)
  • [8] Deep Reinforcement Learning for Adaptive Learning Systems
    Li, Xiao
    Xu, Hanchen
    Zhang, Jinming
    Chang, Hua-hua
    [J]. JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2023, 48 (02) : 220 - 243