Photo-Inspired Model-Driven 3D Object Modeling

Cited by: 67
Authors
Xu, Kai [1 ,2 ]
Zheng, Hanlin [3 ]
Zhang, Hao [2 ]
Cohen-Or, Daniel [4 ]
Liu, Ligang [3 ]
Xiong, Yueshan [2 ]
Affiliations
[1] Natl Univ Def Technol, Changsha 410073, Hunan, Peoples R China
[2] Simon Fraser Univ, Burnaby, BC V5A 1S6, Canada
[3] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[4] Tel Aviv Univ, Tel Aviv, Israel
Source
ACM TRANSACTIONS ON GRAPHICS | 2011 / Vol. 30 / No. 4
Funding
National Natural Science Foundation of China; Israel Science Foundation; Natural Sciences and Engineering Research Council of Canada
Keywords
RECONSTRUCTION; SHAPES;
DOI
10.1145/1964921.1964975
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We introduce an algorithm for 3D object modeling where the user draws creative inspiration from an object captured in a single photograph. Our method leverages the rich source of photographs for creative 3D modeling. However, with only a photo as a guide, creating a 3D model from scratch is a daunting task. We support the modeling process by utilizing an available set of 3D candidate models. Specifically, the user creates a digital 3D model as a geometric variation of a 3D candidate. Our modeling technique consists of two major steps. The first step is a user-guided image-space object segmentation to reveal the structure of the photographed object. The core step is the second one, in which a 3D candidate is automatically deformed to fit the photographed target under the guidance of silhouette correspondence. The set of candidate models has been pre-analyzed to possess useful high-level structural information, which is heavily utilized in both steps to compensate for the ill-posedness of the analysis and modeling problems based only on content in a single image. Equally important, the structural information is preserved by the geometric variation, so that the final product is coherent, with its inherited structural information readily usable for subsequent model refinement or processing.
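To make the two-step pipeline above concrete, the following is a minimal, runnable Python/NumPy sketch of how the second (silhouette-guided deformation) step could look under strong simplifying assumptions; it is not the authors' implementation. The assumptions are: an orthographic camera looking down -z, a per-part axis-aligned scale-and-translate fit to the user-segmented silhouette region, and a toy mirror-symmetry constraint standing in for the pre-analyzed structural information that the paper preserves. All function names and the example data are hypothetical.

```python
# Illustrative sketch only: part-wise silhouette fitting plus a toy
# structure-preserving (mirror-symmetry) constraint.
import numpy as np

def fit_part_to_silhouette(part_pts_2d, target_sil_2d):
    """Least-squares axis-aligned scale + translation mapping the projected
    part's bounding box onto the target silhouette region's bounding box.
    A stand-in for silhouette-correspondence-guided deformation."""
    src_min, src_max = part_pts_2d.min(0), part_pts_2d.max(0)
    tgt_min, tgt_max = target_sil_2d.min(0), target_sil_2d.max(0)
    scale = (tgt_max - tgt_min) / np.maximum(src_max - src_min, 1e-8)
    translate = tgt_min - scale * src_min
    return scale, translate

def apply_2d_fit_to_part(part_pts_3d, scale, translate):
    """Apply the 2D fit to the x/y coordinates of the 3D part (assumed
    orthographic camera along -z), leaving depth untouched."""
    out = part_pts_3d.copy()
    out[:, :2] = out[:, :2] * scale + translate
    return out

def enforce_mirror_symmetry(part_a, part_b, axis=0):
    """Toy structural constraint: re-symmetrize two corresponding parts
    about the plane x = 0 by averaging part_a with the mirror of part_b."""
    mirrored_b = part_b.copy()
    mirrored_b[:, axis] *= -1.0
    avg = 0.5 * (part_a + mirrored_b)
    sym_b = avg.copy()
    sym_b[:, axis] *= -1.0
    return avg, sym_b

# Hypothetical example data: two symmetric "legs" of a candidate model
# (3D point sets) and the user-segmented silhouette regions they should match.
rng = np.random.default_rng(0)
leg_left = rng.uniform([-2.0, 0.0, 0.0], [-1.0, 1.0, 0.5], (100, 3))
leg_right = leg_left * np.array([-1.0, 1.0, 1.0])
sil_left = rng.uniform([-3.0, 0.0], [-1.5, 2.0], (200, 2))
sil_right = sil_left * np.array([-1.0, 1.0])

deformed = []
for part, sil in [(leg_left, sil_left), (leg_right, sil_right)]:
    s, t = fit_part_to_silhouette(part[:, :2], sil)
    deformed.append(apply_2d_fit_to_part(part, s, t))

# Preserve the candidate's structural information (here: mirror symmetry).
deformed[0], deformed[1] = enforce_mirror_symmetry(deformed[0], deformed[1])
print("left  part bounds:", deformed[0].min(0), deformed[0].max(0))
print("right part bounds:", deformed[1].min(0), deformed[1].max(0))
```

In the actual method, dense silhouette correspondence drives a structure-preserving deformation of the candidate's parts rather than the bounding-box fit used above; the sketch only shows where the segmentation output, the fitting step, and the structural constraints plug together.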
Pages: 10
Related Papers
50 records in total
  • [41] Rapid Model-Driven Annotation and Evaluation for Object Detection in Videos
    Ritter, Marc
    Storz, Michael
    Heinzig, Manuel
    Eibl, Maximilian
    UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION: ACCESS TO TODAY'S TECHNOLOGIES, PT I, 2015, 9175 : 464 - 474
  • [42] A domain model-driven approach for telecom network object platform
    Lan, Qingguo
    Liu, Shufen
    Ga, Mingsong
    Pang, Shichun
    Zhang, Shuying
    2006 10TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, PROCEEDINGS, VOLS 1 AND 2, 2006, : 867 - 871
  • [43] Sketch-Driven Mental 3D Object Retrieval
    Napoleon, Thibault
    Sahbi, Hichem
    THREE-DIMENSIONAL IMAGE PROCESSING (3DIP) AND APPLICATIONS, 2010, 7526
  • [44] PDE-driven implicit reconstruction of 3D object
    Zeng, HF
    Liu, ZG
    Lin, ZH
    COMPUTER GRAPHICS, IMAGING AND VISION: NEW TRENDS, 2005, : 251 - 256
  • [45] A representation model and modeling method of 3D reconstruction object supporting product redesign
    Tao, J
    Tong, SG
    PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON INTELLIGENT MECHATRONICS AND AUTOMATION, 2004, : 66 - 71
  • [46] Large Language Model-Driven 3D Hyper-Realistic Interactive Intelligent Digital Human System
    Song, Yanying
    Xiong, Wei
    SENSORS, 2025, 25 (06)
  • [47] Keypoints-based surface representation for 3D modeling and 3D object recognition
    Shah, Syed Afaq Ali
    Bennamoun, Mohammed
    Boussaid, Farid
    PATTERN RECOGNITION, 2017, 64 : 29 - 38
  • [48] A method for Modeling Wrinkles of 3D face based on Photo
    Li, Li
    Liu, Fei
    Li, Jian
    Wang, Yuanyuan
    DIGITAL DESIGN AND MANUFACTURING TECHNOLOGY, PTS 1 AND 2, 2010, 102-104 : 875 - +
  • [49] Photo-realistic 3D model reconstruction
    Se, Stephen
    Jasiobedzki, Piotr
    2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), VOLS 1-10, 2006, : 3076 - +
  • [50] EpiMDE: A Model-Driven Engineering Platform for Epidemiological Modeling
    Curzi-Laliberte, Bruno
    Fokaefs, Marios
    Famelis, Michalis
    Hamdaqa, Mohammad
    27TH INTERNATIONAL ACM/IEEE CONFERENCE ON MODEL DRIVEN ENGINEERING LANGUAGES AND SYSTEMS, MODELS, 2024, : 226 - 236