Pose Estimation and Conversion to Front Viewing Facial Image Using 3D Head Model

Cited by: 1
Authors
Shiau, Jyh-Bin [1 ]
Pu, Chang-En [1 ]
Leu, Jia-Guu [2 ]
Affiliations
[1] Minist Justice Invest Bur, Dept Forens Sci, Taipei County, Taiwan
[2] Natl Taipei Univ, Grad Sch Commun Engn, Taipei, Taiwan
Keywords
3D Head Model; Pose Estimation; Conversion to Front Viewing Facial Image; Pose Invariant Face Recognition; Face Detection
DOI
10.1109/CCST.2010.5678691
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Surveillance cameras are usually mounted near the ceiling and point downward at an angle, so the face images they acquire are rarely frontal; the faces look downward or sideways. The face images collected in databases, however, are frontal, which poses a problem for face recognition. We use skin color and shape analysis to detect the face, eyes, and mouth, and then determine the location of the head in 3D space based on perspective projection. Face parts not visible to the camera, including parts facing away from the camera and parts obstructed by other face parts, are identified from the angle between the surface normal and the vector pointing toward the camera. For conversion to a front view, two camera calibration matrices are established: one for the physical camera and one for a virtual camera situated directly in front of the detected head. Using these calibration matrices, the projection of any point in 3D space onto either image plane can be determined. For each face part (a triangle) that is visible to both cameras, we find its image in the face seen by the physical camera, apply an affine transform to change its shape and size, and paste it onto its location as seen by the virtual camera, thereby accomplishing pose conversion to a front-viewing face. We verified that for a tilt angle (looking downward) ranging from 0 to 40 degrees, and/or a slant angle (face turning left or right) ranging from -60 to +60 degrees, our approach is able to convert a non-front-viewing face into a front-viewing face.
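As a sketch of the conversion step described in the abstract, the following Python fragment (an illustrative reconstruction, not the authors' code) shows a per-triangle visibility test based on the angle between the surface normal and the vector toward the camera, and the affine warp used to paste a visible triangle into the virtual front-viewing image. It assumes NumPy and OpenCV, and the helper names is_visible and paste_triangle are hypothetical.

    # Hypothetical sketch, not the authors' implementation.
    import numpy as np
    import cv2

    def is_visible(normal, point, camera_center):
        """Treat a triangle as visible when the angle between its surface
        normal and the vector pointing toward the camera is below 90 degrees."""
        to_camera = camera_center - point
        to_camera = to_camera / np.linalg.norm(to_camera)
        normal = normal / np.linalg.norm(normal)
        return np.dot(normal, to_camera) > 0.0  # cos(angle) > 0  <=>  angle < 90 deg

    def paste_triangle(src_img, dst_img, src_tri, dst_tri):
        """Warp one triangular face patch from the physical-camera image (src)
        onto its projected location in the virtual front-viewing image (dst)."""
        src_tri = np.float32(src_tri)   # 3x2 pixel coordinates in the source image
        dst_tri = np.float32(dst_tri)   # 3x2 pixel coordinates in the target image
        # Affine transform mapping the source triangle onto the target triangle.
        M = cv2.getAffineTransform(src_tri, dst_tri)
        h, w = dst_img.shape[:2]
        warped = cv2.warpAffine(src_img, M, (w, h))
        # Restrict the paste to the interior of the target triangle.
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
        dst_img[mask > 0] = warped[mask > 0]
        return dst_img

In this sketch the triangle vertices are assumed to have already been projected onto both image planes with the two calibration matrices; the visibility test is applied per triangle before warping.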
Pages: 179-184
Number of pages: 6
Related Papers
50 records in total
  • [1] 3D Facial Pose Estimation by Image Retrieval
    Grujic, Nemanja
    Ilic, Slobodan
    Lepetit, Vincent
    Fua, Pascal
    2008 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2008), VOLS 1 AND 2, 2008, : 552 - +
  • [2] Robust Head Pose Estimation Using a 3D Morphable Model
    Cai, Ying
    Yang, Menglong
    Li, Ziqiang
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2015, 2015
  • [3] Locating Facial Features and Pose Estimation Using a 3D Shape Model
    Caunce, Angela
    Cristinacce, David
    Taylor, Chris
    Cootes, Tim
    ADVANCES IN VISUAL COMPUTING, PT 1, PROCEEDINGS, 2009, 5875 : 750 - 761
  • [4] Image-Based Pose Estimation Using a Compact 3D Model
    Heisterklaus, Iris
    Qian, Ningqing
    Miller, Artur
    2014 IEEE FOURTH INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS BERLIN (ICCE-BERLIN), 2014, : 327 - 330
  • [5] 3D head pose estimation using color information
    Chen, Q
    Wu, HY
    Shioyama, T
    Shimada, T
    IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA COMPUTING AND SYSTEMS, PROCEEDINGS VOL 1, 1999, : 697 - 702
  • [6] Robust Model-based 3D Head Pose Estimation
    Meyer, Gregory P.
    Gupta, Shalini
    Frosio, Iuri
    Reddy, Dikpal
    Kautz, Jan
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 3649 - 3657
  • [7] Model-based head tracking and 3D pose estimation
    Prêteux, F
    Malciu, M
    MATHEMATICAL MODELING AND ESTIMATION TECHNIQUES IN COMPUTER VISION, 1998, 3457 : 94 - 108
  • [8] Automatic Pose Estimation of 3D Facial Models
    Sun, Yi
    Yin, Lijun
    19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6, 2008, : 104 - 107
  • [9] 3D Head Pose and Facial Expression Tracking using a Single Camera
    Terissi, Lucas D.
    Gomez, Juan C.
    JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2010, 16 (06) : 903 - 920
  • [10] Head Pose Recovery Using 3D Cross Model
    Xu, Yifei
    Zeng, Jinhua
    Sun, Yaoru
    2012 4TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC), VOL 2, 2012, : 63 - 66