Maximal Likelihood Correspondence Estimation for Face Recognition Across Pose

Cited: 17
Authors
Li, Shaoxin [1 ]
Liu, Xin [1 ]
Chai, Xiujuan [1 ]
Zhang, Haihong [2 ]
Lao, Shihong [2 ]
Shan, Shiguang [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[2] OMRON Corp, Core Technol Ctr, Kyoto 6190283, Japan
Keywords
Face recognition; pose-invariant face recognition; 3D face model; 2D displacement field; MODEL; IMAGE;
DOI
10.1109/TIP.2014.2351265
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in cross-pose scenarios. To address this problem, many image-matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image-matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), that encodes face-specific structure information about semantic correspondence from a set of real correspondence samples computed from 3D face models. We then propose a maximal likelihood correspondence estimation (MLCE) method that learns a personalized correspondence under a maximal-likelihood frontal-face assumption. Once the semantic correspondence encoded in the learned displacement field is obtained, we can synthesize virtual frontal images of profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multi-pose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the novel maximal-likelihood objective, the proposed MLCE method can reliably learn the correspondence between faces in different poses even in complex unconstrained environments, i.e., on the Labeled Faces in the Wild database.
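The pipeline the abstract describes — blend exemplar 2D displacement fields under a convexity constraint, score each candidate frontal synthesis under a frontal-face model, and warp the probe with the best field — can be sketched roughly as follows. This is a minimal illustration, not the paper's method: all function names are hypothetical, and the isotropic-Gaussian frontal model (squared distance to a mean frontal face) and random-search optimizer are simplifying stand-ins for the paper's MDF regularization and MLCE objective.

```python
import numpy as np

def warp_with_displacement(profile, disp):
    """Warp a profile image toward the frontal view with a per-pixel
    2D displacement field (backward warping, nearest-neighbor)."""
    h, w = profile.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # disp[..., 0] / disp[..., 1] give, for each frontal pixel, the
    # (row, col) offset of its corresponding profile pixel.
    src_y = np.clip(np.round(ys + disp[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + disp[..., 1]).astype(int), 0, w - 1)
    return profile[src_y, src_x]

def synthesize_frontal(profile, basis_fields, frontal_mean, n_iters=200, seed=0):
    """Search convex-combination weights over exemplar displacement fields
    (the morphable-displacement-field idea) so the warped image scores
    highest under a toy frontal-face likelihood, i.e. smallest squared
    distance to a mean frontal face. Random search stands in for the
    paper's optimizer, which the abstract does not specify."""
    rng = np.random.default_rng(seed)
    k = len(basis_fields)
    best_w, best_err = None, np.inf
    for _ in range(n_iters):
        w = rng.dirichlet(np.ones(k))                 # w >= 0, sum(w) = 1
        disp = np.tensordot(w, basis_fields, axes=1)  # blended field, (h, w, 2)
        virtual = warp_with_displacement(profile, disp)
        err = np.sum((virtual - frontal_mean) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return warp_with_displacement(profile, np.tensordot(best_w, basis_fields, axes=1))
```

In the paper the exemplar fields come from correspondences computed on 3D face models, and the synthesized virtual frontal image is then fed to an LDA classifier on pixel intensities; here the blending and likelihood-driven selection are only shown in miniature.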
Pages: 4587-4600
Page count: 14