Learning Deep Sharable and Structural Detectors for Face Alignment

Cited by: 36
|
Authors
Liu, Hao [1 ]
Lu, Jiwen [2 ]
Feng, Jianjiang [2 ]
Zhou, Jie [2 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, Dept Automat, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, Dept Automat, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Face alignment; deep learning; biometrics; ALGORITHM;
DOI
10.1109/TIP.2017.2657118
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face alignment aims to localize multiple facial landmarks in a given facial image, a task that suffers from large variations in facial expression, aspect ratio, and partial occlusion, especially when face images are captured in the wild. Conventional face alignment methods extract local features and directly concatenate them for global shape regression, and therefore cannot explicitly model the correlation between neighbouring landmarks. Motivated by the fact that individual landmarks are usually correlated, we propose a deep sharable and structural detectors (DSSD) method for face alignment. To achieve this, we first develop a structural feature learning method that explicitly exploits the correlation between neighbouring landmarks, learning semantic information to disambiguate them. Moreover, our model selectively learns a subset of sharable latent tasks across neighbouring landmarks under a multi-task learning framework, so that redundant information in the overlapping patches can be efficiently removed. To further improve performance, we extend DSSD to a recurrent DSSD (R-DSSD) architecture that integrates complementary information from multiple scales. Experimental results on widely used benchmark datasets show that our methods achieve highly competitive performance compared with the state of the art.
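The "sharable latent tasks" idea in the abstract can be illustrated with a toy sketch (not the authors' implementation, and all names and dimensions here are invented for illustration): each landmark's linear patch detector is built as a sparse combination of a small shared basis of latent detectors, so neighbouring landmarks reuse the same filters instead of storing redundant, near-identical weights.

```python
import numpy as np

# Illustrative sketch of multi-task detectors with shared latent tasks.
# Assumption: each landmark detector is linear over flattened patch features;
# the real DSSD model is a deep network, not this toy linear version.
rng = np.random.default_rng(0)

n_landmarks = 5   # neighbouring facial landmarks
n_latent = 3      # size of the shared latent-task basis (n_latent < n_landmarks)
patch_dim = 16    # flattened local-patch feature dimension

# Shared basis of latent detectors, reused across all landmarks.
basis = rng.standard_normal((patch_dim, n_latent))

# Sparse selection matrix: each landmark picks only a subset of latent tasks.
selection = np.zeros((n_latent, n_landmarks))
for j in range(n_landmarks):
    picked = rng.choice(n_latent, size=2, replace=False)
    selection[picked, j] = rng.standard_normal(2)

# Per-landmark detector weights are combinations of the shared basis,
# so storage is patch_dim*n_latent + n_latent*n_landmarks parameters
# instead of patch_dim*n_landmarks independent ones.
weights = basis @ selection            # shape: (patch_dim, n_landmarks)

# Score a batch of candidate patches against every landmark detector at once.
patches = rng.standard_normal((10, patch_dim))
scores = patches @ weights             # shape: (10, n_landmarks)
print(weights.shape, scores.shape)
```

In practice, learning both `basis` and a sparsity-regularized `selection` jointly is what lets correlated neighbouring landmarks share detectors while uncorrelated ones stay separate.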
Pages: 1666-1678
Page count: 13
Related Papers
50 records total
  • [1] A deep learning framework for face verification without alignment
    Fan, Zhongkui
    Guan, Ye-peng
    [J]. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2021, 18 (04) : 999 - 1009
  • [2] Deep multi-center learning for face alignment
    Shao, Zhiwen
    Zhu, Hengliang
    Tan, Xin
    Hao, Yangyang
    Ma, Lizhuang
    [J]. NEUROCOMPUTING, 2020, 396 : 477 - 486
  • [3] Learning Deep Representation for Face Alignment with Auxiliary Attributes
    Zhang, Zhanpeng
    Luo, Ping
    Loy, Chen Change
    Tang, Xiaoou
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (05) : 918 - 930
  • [4] Protein structural alignment using deep learning
    Li, Wei
    [J]. NATURE GENETICS, 2023, 55 (10) : 1609 - 1609
  • [5] Tuning of Deep Learning Algorithms for Face Alignment and Pose Estimation
    Pilarczyk, Rafal
    Skarbek, Wladyslaw
    [J]. PHOTONICS APPLICATIONS IN ASTRONOMY, COMMUNICATIONS, INDUSTRY, AND HIGH-ENERGY PHYSICS EXPERIMENTS 2018, 2018, 10808
  • [6] FACE ALIGNMENT BY DEEP CONVOLUTIONAL NETWORK WITH ADAPTIVE LEARNING RATE
    Shao, Zhiwen
    Ding, Shouhong
    Zhu, Hengliang
    Wang, Chengjie
    Ma, Lizhuang
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1283 - 1287
  • [7] Large-pose Face Alignment Based on Deep Learning
    Jiang, Yue-Hui
    Zhang, Qian
    Wang, Bin
    Shen, Hui-Zhong
    Huang, Ji-Feng
    Yan, Tao
    [J]. Ruan Jian Xue Bao/Journal of Software, 2019, 30 : 1 - 8
  • [8] Learning Relational-Structural Networks for Robust Face Alignment
    Zhu, Congcong
    Wang, Xing
    Wu, Suping
    Yu, Zhenhua
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: IMAGE PROCESSING, PT III, 2019, 11729 : 306 - 316