Universal Framework for Joint Image Restoration and 3D Body Reconstruction

Cited by: 1
Authors
Lumentut, Jonathan Samuel [1 ]
Marchellus, Matthew [1 ]
Santoso, Joshua [1 ]
Kim, Tae Hyun [2 ]
Chang, Ju Yong [3 ]
Park, In Kyu [1 ]
Affiliations
[1] Inha Univ, Dept Informat & Commun Engn, Incheon 22212, South Korea
[2] Hanyang Univ, Dept Comp Sci, Seoul 04763, South Korea
[3] Kwangwoon Univ, Dept Elect & Commun Engn, Seoul 01897, South Korea
Source
IEEE ACCESS, 2021, Vol. 9
Keywords
Image reconstruction; Three-dimensional displays; Image restoration; Task analysis; Training; Noise reduction; Noise measurement; Restoration; deblur; super-resolution; denoising; 3D body reconstruction; meta-learning; self-adaptive; pseudo-data; DEEP; NETWORK;
DOI
10.1109/ACCESS.2021.3132148
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent works have demonstrated excellent state-of-the-art achievements in image restoration and 3D body reconstruction from an input image. The 3D body reconstruction task, however, relies heavily on the quality of the input image. A straightforward way to address this issue is to generate vast degraded datasets and use them to fine-tune or newly craft a body reconstruction network. However, these datasets may become obsolete in future use, leaving the newly crafted network outdated. Instead, we design a universal framework that utilizes prior state-of-the-art restoration works and self-boosts their performance at test time while jointly carrying out 3D body reconstruction. The self-boosting mechanism is realized via test-time parameter adaptation that can handle various types of degradation. To support it, we also propose a strategy that generates pseudo-data on the fly at test time, allowing both the restoration and reconstruction modules to be learned in a self-supervised manner. With this advantage, the universal framework intelligently enhances performance without involving any new dataset or new neural network model. Our experimental results show that the proposed framework and pseudo-data strategies significantly improve performance in both tasks.
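The abstract describes test-time parameter adaptation driven by pseudo-data generated on the fly. The sketch below is a minimal, hypothetical illustration of such a self-supervised adaptation loop in PyTorch; the function names, the Gaussian-noise degradation, the L1 loss, and all hyperparameters are assumptions for illustration and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of test-time self-adaptation with on-the-fly pseudo-data.
# Assumes a pretrained restoration network (nn.Module) and images scaled to [0, 1].
import copy
import torch
import torch.nn.functional as F

def make_pseudo_pair(restored, sigma=0.03):
    """Build a pseudo (degraded, clean) pair from the current restored estimate.
    The degradation here is illustrative additive Gaussian noise; the paper's
    pseudo-data strategy may use other degradations (e.g., blur, downsampling)."""
    degraded = (restored + sigma * torch.randn_like(restored)).clamp(0, 1)
    return degraded, restored.detach()

def test_time_adapt(restore_net, degraded_input, steps=5, lr=1e-5):
    """Self-boost a copy of the pretrained restoration network on one test image."""
    net = copy.deepcopy(restore_net)              # keep the original weights intact
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            current = net(degraded_input)         # current restoration estimate
        pseudo_in, pseudo_gt = make_pseudo_pair(current)
        loss = F.l1_loss(net(pseudo_in), pseudo_gt)   # self-supervised objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(degraded_input)                # adapted restoration for this image

# Usage (names are placeholders):
#   restored = test_time_adapt(pretrained_restorer, blurry_image)
#   body_params = body_reconstructor(restored)    # downstream 3D body reconstruction
```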
Pages: 162543-162552
Page count: 10
Related Papers
50 records in total
  • [1] Universal Framework for Joint Image Restoration and 3D Body Reconstruction
    Lumentut, Jonathan Samuel
    Marchellus, Matthew
    Santoso, Joshua
    Kim, Tae Hyun
    Chang, Ju Yong
    Park, In Kyu
    IEEE Access, 2021, 9 : 162543 - 162552
  • [2] Joint image registration and volume reconstruction for 3D ultrasound
    Sanches, JM
    Marques, JS
    PATTERN RECOGNITION LETTERS, 2003, 24 (4-5) : 791 - 800
  • [3] A Hybrid Image Enhancement Framework for Underwater 3D Reconstruction
    Li, Tengyue
    Ma, Chen
    2022 OCEANS HAMPTON ROADS, 2022,
  • [4] Holistic 3D Body Reconstruction From a Blurred Single Image
    Santoso, Joshua
    Williem
    Park, In Kyu
    IEEE ACCESS, 2022, 10 : 115399 - 115410
  • [5] Image2Mesh: A Learning Framework for Single Image 3D Reconstruction
    Pontes, Jhony K.
    Kong, Chen
    Sridharan, Sridha
    Lucey, Simon
    Eriksson, Anders
    Fookes, Clinton
    COMPUTER VISION - ACCV 2018, PT I, 2019, 11361 : 365 - 381
  • [6] 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework
    Barone, Sandro
    Paoli, Alessandro
    Razionale, Armando Viviano
    SENSORS, 2012, 12 (12): : 16785 - 16801
  • [7] Off-axis aperture camera: 3D shape reconstruction and image restoration
    Dou, Qingxu
    Favaro, Paolo
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 2736 - 2742
  • [8] Strategies to improve 3D whole-body PET image reconstruction
    Cutler, PD
    Xu, M
    PHYSICS IN MEDICINE AND BIOLOGY, 1996, 41 (08): : 1453 - 1467
  • [9] Occlusion Detection and Image Restoration in 3D Face Image
    Srinivasan, A.
    Balamurugan, V
    TENCON 2014 - 2014 IEEE REGION 10 CONFERENCE, 2014,
  • [10] Joint 3D facial shape reconstruction and texture completion from a single image
    Xiaoxing Zeng
    Zhelun Wu
    Xiaojiang Peng
    Yu Qiao
    Computational Visual Media, 2022, 8 : 239 - 256