Towards Reconstruction of 3D Shapes in a Realistic Environment

Cited by: 3
Authors
Zohaib, Mohammad [1,2]
Taiana, Matteo [1]
Del Bue, Alessio [1]
Affiliations
[1] Italian Inst Technol, Pattern Anal & Comp Vis PAVIS, Genoa, Italy
[2] Univ Genoa, Dept Marine Elect Elect & Telecommun Engn, Genoa, Italy
Source
IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT II | 2022, Vol. 13232
Keywords
3D reconstruction; Single-view image; Realistic background
DOI
10.1007/978-3-031-06430-2_1
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents an end-to-end approach for single-view 3D object reconstruction in a realistic environment. Most existing reconstruction approaches are trained on synthetic data and fail when evaluated on real images, while others require pre-processing to separate the object from the background. In contrast, the proposed approach learns to compute stable object features by reducing the influence of the image background. This is achieved by feeding two images to the model simultaneously: a synthetic image with a white background and its realistic variant with a natural background. The encoder extracts the features common to both images and thereby separates object features from background features. The extracted features allow the model to predict an accurate 3D object surface from a real image. The approach is evaluated on both real images from the Pix3D dataset and realistic images rendered from the ShapeNet dataset, and the results are compared with state-of-the-art approaches to highlight its significance. Our approach improves reconstruction accuracy by approximately 6.1 percentage points in F1 score with respect to Mesh R-CNN on the Pix3D dataset.
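The dual-input training scheme summarized in the abstract can be sketched in code. The following is a minimal PyTorch-style illustration, not the authors' implementation: the module names (SharedEncoder, SurfaceDecoder), the backbone layers, the point-set output, and the loss weighting lam are assumptions made for illustration; only the overall idea, a shared encoder fed with a white-background image and its realistic variant plus a feature-consistency term, follows the abstract.

```python
# Illustrative sketch of the dual-input idea (shared encoder + feature
# consistency). Module names, layer sizes, and the loss weighting are
# hypothetical; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Maps an RGB image to a global feature vector (hypothetical backbone)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):
        return self.fc(self.backbone(x).flatten(1))

class SurfaceDecoder(nn.Module):
    """Predicts a point set describing the object surface from a feature vector."""
    def __init__(self, feat_dim=512, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, feat):
        return self.mlp(feat).view(-1, self.num_points, 3)

def training_step(encoder, decoder, synth_img, real_img, gt_points, chamfer, lam=0.1):
    """One step: both image variants pass through the same encoder; a
    feature-consistency term pulls the realistic-image features toward the
    white-background features, reducing the influence of the background."""
    feat_synth = encoder(synth_img)
    feat_real = encoder(real_img)
    pred = decoder(feat_real)                      # reconstruct from the realistic view
    loss_rec = chamfer(pred, gt_points)            # 3D supervision, e.g. Chamfer distance
    loss_feat = F.mse_loss(feat_real, feat_synth)  # background-invariance term
    return loss_rec + lam * loss_feat
```

Under this reading, only the realistic image would be needed at test time, since the decoder consumes features from the real-image branch.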
Pages: 3-14
Page count: 12