Boosting Local Shape Matching for Dense 3D Face Correspondence

Cited by: 8
Authors
Fan, Zhenfeng [1 ,2 ]
Hu, Xiyuan [1 ,2 ]
Chen, Chen [1 ,2 ]
Peng, Silong [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Beijing Visytem Co Ltd, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
REGISTRATION; MODELS; TRENDS; POINT;
DOI
10.1109/CVPR.2019.01120
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dense 3D face correspondence is a fundamental and challenging issue in the literature of 3D face analysis. Correspondence between two 3D faces can be viewed as a nonrigid registration problem in which one face deforms into the other, a process commonly guided by a few facial landmarks in many existing works. However, current works seldom consider the problem of incoherent deformation caused by landmarks. In this paper, we explicitly formulate the deformation as locally rigid motions guided by some seed points, and the formulated deformation satisfies coherent local motion everywhere on a face. The seed points are initialized by a few landmarks and are then augmented to boost shape matching between the template and the target face step by step, finally achieving dense correspondence. In each step, we employ a hierarchical scheme for local shape registration, together with a Gaussian reweighting strategy for accurate matching of local features around the seed points. In our experiments, we evaluate the proposed method extensively on several datasets, including two publicly available ones: FRGC v2.0 and BU-3DFE. The experimental results demonstrate that our method can achieve accurate feature correspondence, coherent local shape motion, and compact data representation. These merits help address important issues for practical applications, such as expressions, noise, and partial data.
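The core operation described in the abstract, fitting a locally rigid motion around a seed point with Gaussian reweighting, can be sketched as a weighted Kabsch solve. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function name, the single-seed formulation, and the `sigma` parameter are hypothetical simplifications of the paper's hierarchical scheme.

```python
import numpy as np

def weighted_rigid_fit(src, dst, seed, sigma=1.0):
    """Estimate a rigid motion (R, t) aligning src to dst, with point
    pairs weighted by a Gaussian of their distance to a seed point.
    A generic weighted Kabsch solve; not the authors' code."""
    # Gaussian reweighting: points near the seed dominate the local fit.
    w = np.exp(-np.sum((src - seed) ** 2, axis=1) / (2.0 * sigma ** 2))
    w /= w.sum()
    # Weighted centroids of both point sets.
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    # Weighted cross-covariance and SVD (Kabsch algorithm).
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the recovered rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In the paper's setting such local fits would be computed around each seed point and blended so the overall deformation stays coherent; the sketch above covers only the single-seed weighted rigid estimate.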
Pages: 10936 - 10946
Number of pages: 11
Related Papers
50 records in total
  • [21] Simulated Annealing for 3D Shape Correspondence
    Holzschuh, Benjamin
    Laehner, Zorah
    Cremers, Daniel
    2020 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2020), 2020, : 252 - 260
  • [22] 3D human pose and shape estimation with dense correspondence from a single depth image
    Wang, Kangkan
    Zhang, Guofeng
    Yang, Jian
    The Visual Computer, 2023, 39 : 429 - 441
  • [23] 3D Human Mesh Regression with Dense Correspondence
    Zeng, Wang
    Ouyang, Wanli
    Luo, Ping
    Liu, Wentao
    Wang, Xiaogang
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 7052 - 7061
  • [24] Investigation of the Effect of Face Regions on Local Shape Descriptor Based 3D Face Recognition
    Inan, Tolga
    Halici, Ugur
    2013 21ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2013,
  • [25] New 3D Face Matching Technique for 3D Model Based Face Recognition
    Chew, Wei Jen
    Seng, Kah Phooi
    Liau, Heng Fui
    Ang, Li-Minn
    2008 INTERNATIONAL SYMPOSIUM ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATIONS SYSTEMS (ISPACS 2008), 2008, : 379 - +
  • [26] An Automatic Non-rigid Point Matching Method for Dense 3D Face Scans
    Hu, Yongli
    Zhou, Mingquan
    Wu, Zhongke
    PROCEEDINGS OF THE 2009 INTERNATIONAL CONFERENCE OF COMPUTATIONAL SCIENCES AND ITS APPLICATIONS, 2009, : 215 - 221
  • [27] Deformation analysis for 3D face matching
    Lu, XG
    Jain, AK
    WACV 2005: SEVENTH IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION, PROCEEDINGS, 2005, : 99 - 104
  • [28] Shape-based 3D surface correspondence using geodesics and local geometry
    Wang, YM
    Peterson, BS
    Staib, LH
    IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, VOL II, 2000, : 644 - 651
  • [29] Accurate 3D Shape Correspondence by a Local Description Darcyan Principal Curvature Fields
    Sboui, Ilhem
    Jribi, Majdi
    Ghorbel, Faouzi
    REPRESENTATIONS, ANALYSIS AND RECOGNITION OF SHAPE AND MOTION FROM IMAGING DATA, 2017, 684 : 15 - 26
  • [30] The correspondence framework for 3D surface matching algorithms
    Planitz, BM
    Maeder, AJ
    Williams, JA
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2005, 97 (03) : 347 - 383